After a pandemic-driven cloud adoption boom in the enterprise, costs are finally coming under the microscope. More than a third of businesses report cloud budget overruns of up to 40%, according to a recent poll by observability software vendor Pepperdata. A separate survey from Flexera found that optimizing the existing use of cloud services is a top initiative at 59% of companies — with cost being the main motivation.
An entire cottage industry of startups has sprung up around optimizing cloud compute. But one contender in the race, Sync Computing, claims to be unique in tying business objectives like cost and runtime reduction directly to low-level infrastructure configurations. Founded as a spinout from MIT’s Lincoln Laboratory, Sync today landed $12 million in a venture funding round (plus $3.5 million in debt) led by Costanoa Ventures, with participation from The Engine, Moore Strategic Ventures and National Grid Partners.
Sync co-founders Jeff Chou and Suraj Bramhavar both worked as members of the technical staff at the MIT Lincoln Laboratory prior to launching the startup. Bramhavar came to MIT by way of a photonics research position at Intel, while Chou co-founded another startup — Anoka Microsystems — designing a low-cost optical switch.
Sync was born out of innovations developed at the Lincoln Lab, including a method to accelerate a mathematical optimization problem commonly found in logistics applications. While many cloud cost solutions either provide recommendations for high-level optimization or support workflows that tune workloads, Sync goes deeper, Chou and Bramhavar say, with app-specific details and suggestions based on algorithms designed to “order” the appropriate resources.
“[We realized that our methods] can dramatically improve resource utilization of all large-scale computing systems,” Chou told TechCrunch in an email interview. “As Moore’s Law slows down, this will become a key technological choke point.”
Chou claims that Sync doesn’t require much in the way of historical data to begin optimizing data pipelines and provisioning low-level cloud resources. For example, he says, with just the data from a single previous run, some customers have accelerated their Apache Spark jobs by up to 80% — Apache Spark being the popular open source analytics engine for large-scale data processing.
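For context, the “low-level resources” such a tuner provisions are ordinary Spark cluster parameters. The `spark-submit` flags below are standard Spark, but the values and job name are purely illustrative — a sketch of the kind of knobs an autotuner would adjust, not Sync’s actual output:

```shell
# Standard spark-submit flags controlling the resources a Spark job receives.
# Fewer or smaller executors generally lower cost but lengthen runtime;
# per-job tuning of these values is what an autotuner automates.
spark-submit \
  --num-executors 8 \
  --executor-cores 4 \
  --executor-memory 16g \
  --conf spark.sql.shuffle.partitions=256 \
  --conf spark.dynamicAllocation.enabled=false \
  my_job.py
```

Picking these values well normally takes trial and error across repeated runs, which is why doing it from a single prior run is the claim being made.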
Sync recently released an API and “autotuner” for Spark on AWS EMR, Amazon’s cloud big data platform, and Databricks on AWS. Self-service support for Databricks on Azure is in the works.
“The launch of our public API will allow users to programmatically apply the Sync autotuner to a large number of jobs and enable continuous monitoring of [cloud environments] with custom integration,” Chou said. “The C-suite cares about managing cloud computing costs, and our Sync autotuner does this while also accelerating the output of data science and data analytics teams … The product also allows data engineers to quickly change infrastructure settings to achieve business goals. For example, one day, teams may need to minimize costs and de-prioritize runtime, but the next day, they may have a hard deadline, therefore needing to accelerate runtime. With Sync, this can be done with a single click.”
Sync first applied its technology inside MIT’s Supercomputing Center before working with larger government high-performance compute centers, including the Department of Defense — with which it has a $1 million contract. Now, Sync says it has roughly 300 registered users on its self-service app and “several dozen” design partners testing and providing feedback, including Duolingo and engineers at Disney’s Streaming Services group.