Dataiku supports all kinds of users, whether they prefer to leverage the visual point-and-click interface or work entirely in code. But just because Dataiku has a simple-to-use graphical user interface doesn't mean we've skimped on robust features for more technical profiles. This blog post details a few of the highlights Dataiku has to offer for data scientists, engineers, architects, and other profiles who may prefer to work with code, rather than visual tools, to manipulate, transform, and model data.
Use Your Favorite Languages and Tools
Dataiku allows you to work with the tools you already know and love without sacrificing collaboration with other team members, who may or may not be using the same set of tools.
- Create code recipes in the language of your choice, including Python, R, SQL, and more.
- When developing code directly in Dataiku, use the built-in code editor, the embedded Jupyter notebook interface, or even code in an external IDE such as VS Code, PyCharm, Sublime Text, or RStudio.
- If you already have Jupyter notebooks that were developed outside of Dataiku, you can upload those notebooks manually, or connect to a remote Git repository and use the typical branching, push, and pull actions to keep your code in Dataiku synced with that remote repository.
- Dataiku includes built-in algorithms from state-of-the-art machine learning libraries, such as scikit-learn, MLlib, and XGBoost, plus TensorFlow and Keras for deep learning. But you can also code your own custom models and still take advantage of all the benefits Dataiku Visual ML has to offer, such as automatic experiment tracking and diagnostics, interpretability and performance metrics, auto-documentation, and easy version monitoring in production.
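To give a feel for the last point: custom models in Dataiku Visual ML are written against the familiar scikit-learn estimator interface. Here is a minimal sketch of what such an estimator looks like; the `MeanThresholdClassifier` class, its logic, and its parameter are purely illustrative toy examples, not anything shipped with Dataiku.

```python
import numpy as np
from sklearn.base import BaseEstimator, ClassifierMixin


class MeanThresholdClassifier(BaseEstimator, ClassifierMixin):
    """Toy classifier: predicts class 1 when the row mean of X exceeds
    a threshold. Any estimator that follows scikit-learn's
    fit/predict (and, for classifiers, predict_proba) conventions can
    be used as a custom model in Visual ML."""

    def __init__(self, threshold=0.5):
        self.threshold = threshold

    def fit(self, X, y):
        # Nothing to learn in this toy example; record the observed
        # classes as scikit-learn conventions expect.
        self.classes_ = np.unique(y)
        return self

    def predict(self, X):
        return (np.asarray(X).mean(axis=1) > self.threshold).astype(int)

    def predict_proba(self, X):
        # Squash the distance from the threshold into (0, 1).
        scores = np.asarray(X).mean(axis=1)
        p1 = 1.0 / (1.0 + np.exp(-(scores - self.threshold)))
        return np.column_stack([1.0 - p1, p1])
```

Because the estimator respects the standard interface, Dataiku can train, evaluate, and document it exactly as it would a built-in algorithm.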
Easily Reuse & Share Code
Easy reuse and sharing of code, datasets, and other assets in Dataiku helps teams reduce inefficiencies and inconsistent data handling while simultaneously empowering less technical users to go further on their own.
Dataiku project libraries are a great way for teams to centralize and share code both within and across projects. While Dataiku comes pre-loaded with starter code for many common tasks, you can easily add your own code snippets for you and your team to use.
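As a hypothetical example of the kind of helper a team might keep in a project library, the function below standardizes column names; the file path and function name are made up for illustration, and any recipe or notebook in the project could then import it.

```python
# Hypothetical helper saved in a project library (e.g., cleaning.py);
# recipes and notebooks in the project could then use
# `from cleaning import normalize_columns`.
import re


def normalize_columns(columns):
    """Lower-case column names and collapse runs of non-alphanumeric
    characters into '_', so every pipeline in the project names its
    fields the same way."""
    return [re.sub(r"[^0-9a-z]+", "_", c.lower()).strip("_") for c in columns]
```

Centralizing small utilities like this is what keeps data handling consistent across the dozens of recipes a project can accumulate.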
Goodbye, Complexity; Hello, Efficiency
Dataiku abstracts away and simplifies layers of complexity related to connecting to data and configuring compute resources. For example, data scientists can seamlessly execute their code in a containerized, distributed way using Spark or Kubernetes clusters — select the runtime environment you want, and Dataiku takes care of spinning up the containers and shutting them down when the job is done.
When it comes to model deployment, with just a couple of clicks you can expose a model as a RESTful API service and incorporate its predictions into other data pipelines, visualizations, or applications.
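From a consuming application's point of view, a deployed model is just an HTTP endpoint that takes a JSON record and returns a prediction. The sketch below shows that pattern with the Python standard library; the host, service, and endpoint names are assumptions, and the exact URL and payload shape for your deployment are shown in the Dataiku UI when you create the API service.

```python
import json
from urllib import request  # stdlib; swap in `requests` if you prefer

# Illustrative values: host, service, and endpoint names are assumptions.
API_URL = "https://api-node.example.com/public/api/v1/churn_service/churn_model/predict"


def build_scoring_request(features):
    """Encode one record's feature values as a JSON request body."""
    return json.dumps({"features": features}).encode("utf-8")


def score(features):
    """POST a record to the deployed model endpoint and return the
    decoded JSON response (network call; requires a live endpoint)."""
    req = request.Request(
        API_URL,
        data=build_scoring_request(features),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)
```

Any language that can make an HTTP request can consume the service the same way, which is what makes the REST interface convenient for embedding predictions in dashboards and downstream applications.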
“Dataiku serves as the best-in-class flow-based machine learning platform that I have seen so far. IT complexities of launching autoscaling CPU/GPU compute, finding proper credentials, access to database and other clustered is now streamlined via a standard Dataiku interface, thus lifting productivity of our group.”
— Senior Data Scientist in the Healthcare Industry (source: Gartner Peer Insights)