Meadowrun runs your Python code in the cloud, seamlessly. Scale up to a bigger machine, or scale out to many machines: once you're ready to run your analysis or program on a dataset that doesn't fit on your laptop, Meadowrun helps you do that without hassle, while keeping many of the benefits of running locally.
What’s the problem?
That all sounds great, I hear you say, but don’t we already have containers, AWS/Azure/GCP, Kubernetes and Dask/RAPIDS/Spark? Indeed we do, but putting those together while ensuring an efficient development workflow is no walk in the park.
For example, containers are very useful for running on scalable compute infrastructure like Kubernetes, but locally you're more likely to have a virtual environment with third-party packages installed, plus some local code you'd like to run. How do you get all of that into a container image efficiently and reproducibly? And once you do have a container running, how do you see its logs? If it produces results, how do you get them back?
Another source of complexity is the wide array of options to choose from, even within a single cloud provider like AWS. Should you allocate an EC2 instance, and if so, should it be an on-demand instance or a spot instance? Can you use Lambdas, or maybe Elastic Container Service?
Finally, distributed compute frameworks like Dask can be a piece of the puzzle, but they require setup and management, and here again the deployment problem (how do I get my code and dependencies to the cluster workers?) is not straightforward to solve.
How does Meadowrun help?
Meadowrun targets Python analytics users who are familiar with notebooks and interactive development, as well as developers who use command line tools like pytest. Example use cases are offloading compute-intensive analyses or long regression tests.
Our guiding principles for Meadowrun are:
- We want to make Meadowrun feel like you're running locally, but with all the benefits of the cloud. You shouldn't have to worry about containers, conda packages, or logs, while still enjoying virtually unlimited scale and elasticity. The experience we're striving for is being able to run Python code like it's SQL. We're seeing more data and analytics workloads run on Snowflake and other cloud data warehouses. They offer a SQL interface, which means you don't have to worry about packages, version conflicts, or deploying your code, and you get easy, automatic parallelism and elasticity. We want to offer that same simple experience with the full power and flexibility of Python.
- We want to empower users to make decisions that work for them, instead of imposing or restricting choice. Meadowrun targets the major cloud providers as well as on-prem Kubernetes, and we want to be transparent about the choices it makes: you are in control.
- Meadowrun should need very little setup and ongoing maintenance. To that end, we’ve designed Meadowrun to be serverless—there are no cluster managers, services or databases to set up. Pick a cloud provider, run a simple command and you’re good to go.
- Last but not least—Meadowrun values your time. Interactive development and short feedback loops are important. Meadowrun doesn’t keep you waiting.
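To make the experience described above concrete, here is a minimal sketch of what dispatching a function to an EC2 instance with Meadowrun's `run_function` API can look like. The `analyze` helper and the resource figures are illustrative assumptions, not recommendations, and actually calling `run_remotely` requires Meadowrun's one-time setup and AWS credentials:

```python
def analyze(n: int) -> int:
    # Stand-in for a compute-intensive analysis you'd normally run locally.
    return sum(i * i for i in range(n))


def run_remotely() -> int:
    # Imported lazily: meadowrun is only needed when dispatching to the
    # cloud, and this call assumes AWS credentials are configured.
    import asyncio
    import meadowrun

    return asyncio.run(
        meadowrun.run_function(
            # The function to run on the remote machine.
            lambda: analyze(10_000_000),
            # Where to run it: allocate (or reuse) an EC2 instance.
            meadowrun.AllocEC2Instance(),
            # What the job needs; Meadowrun picks a cheap instance type
            # that satisfies these (illustrative) requirements.
            meadowrun.Resources(logical_cpu=4, memory_gb=16, max_eviction_rate=80),
        )
    )
```

Meadowrun mirrors your current environment and code to the remote machine, streams the logs back, and returns the function's result, so the call above feels much like calling `analyze` locally.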