
Why Meadowrun?

Meadowrun runs your Python code in the cloud, seamlessly. Scale up to a bigger machine, or scale out to many machines—once you are ready to run your analysis or program on a dataset that doesn’t fit on your laptop, Meadowrun helps you do that without hassle and while keeping many of the benefits of running locally.


What’s the problem?

That all sounds great, I hear you say, but don’t we already have containers, AWS/Azure/GCP, Kubernetes and Dask/RAPIDS/Spark? Indeed we do, but putting those together while ensuring an efficient development workflow is no walk in the park.

For example, containers are very useful for running on scalable compute infrastructure like Kubernetes, but locally you’re more likely to have a virtual environment with third-party packages installed, and you’ll certainly have some local code you’d like to run. How do you get all of that into a container image efficiently and reproducibly? And if you do get a container running, how do you see the logs? If there are any results, how are they sent back?

Another source of complexity is the wide array of options to choose from, even within a single cloud provider like AWS. Should you allocate an EC2 instance, and if so, should it be an on-demand instance or a spot instance? Can you use Lambdas, or maybe Elastic Container Service?

Finally, distributed compute clusters like Dask can be a piece of the puzzle, but require setup and management, and again the deployment problem—how do I get my code and dependencies to the cluster workers—is not straightforward to solve.


How does Meadowrun help?

Meadowrun targets Python analytics users who are familiar with notebooks and interactive development, as well as developers who use command line tools like pytest. Example use cases are offloading compute-intensive analyses or long regression tests.
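As a sketch, offloading a compute-intensive function might look like the following. The `run_function`, `AllocEC2Instance`, `Resources`, and `Deployment.mirror_local` names follow Meadowrun's documented API, but treat the exact signatures and parameters here as illustrative assumptions rather than a definitive reference.

```python
import asyncio


def analyze(n: int) -> int:
    # Stand-in for a compute-intensive analysis that outgrows a laptop
    return sum(i * i for i in range(n))


async def main() -> None:
    # Assumes meadowrun is installed and cloud credentials are configured
    import meadowrun

    result = await meadowrun.run_function(
        # The function to run remotely, using your local code and environment
        lambda: analyze(10_000_000),
        # Allocate (or reuse) an EC2 instance...
        meadowrun.AllocEC2Instance(),
        # ...with at least these resources; tolerate spot-instance eviction
        meadowrun.Resources(logical_cpu=4, memory_gb=32, max_eviction_rate=80),
        # Mirror the local environment and code onto the remote machine
        meadowrun.Deployment.mirror_local(),
    )
    print(result)


# With cloud credentials in place, you would run: asyncio.run(main())
```

The point of the sketch is that the deployment questions from the previous section (getting code and dependencies onto the remote machine, choosing an instance type, retrieving results) are handled by the library call rather than by hand-built container images and cluster setup.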

Our guiding principles for Meadowrun are:


Get in touch! Join the chat on Gitter or email us