Ben Bolte
K-Scale FAQ

Frequently asked questions about K-Scale.

What is K-Scale?

K-Scale is a startup based in Palo Alto, California, that is designing open-source humanoid robots. We started in early 2024 by building a 3D printed humanoid robot called Stompy. We partner with companies in China and Southeast Asia, providing the software and hardware designs required to make useful, intelligent robots.

We believe that the best path towards mass adoption of embodied intelligence is to build in the open. Read more about our company philosophy here.

Currently, we are building two robots:

  • K-Bot, a full-size robot similar to Tesla's Optimus or Figure 02.
  • Z-Bot, a tabletop robot oriented towards enthusiasts and hackers.

Our robots run on a common operating system, KOS, which exposes a common API to downstream clients. We are focused exclusively on running end-to-end neural networks and Software 2.0 development.
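To illustrate the common-API idea, here is a minimal sketch of what a shared client interface across K-Bot and Z-Bot might look like. The class and method names below are hypothetical, for illustration only, and are not the actual KOS interface; the joint count is likewise an assumption.

```python
from abc import ABC, abstractmethod


class RobotClient(ABC):
    """Hypothetical shared interface, sketching how one API can serve
    multiple robot embodiments. Not the real KOS client."""

    @abstractmethod
    def read_joint_positions(self) -> list[float]:
        """Return the current joint positions, in radians."""

    @abstractmethod
    def command_joint_targets(self, targets: list[float]) -> None:
        """Send target joint positions to the robot."""


class KBotClient(RobotClient):
    NUM_JOINTS = 20  # illustrative joint count, not the real spec

    def __init__(self) -> None:
        self._positions = [0.0] * self.NUM_JOINTS

    def read_joint_positions(self) -> list[float]:
        return list(self._positions)

    def command_joint_targets(self, targets: list[float]) -> None:
        assert len(targets) == self.NUM_JOINTS
        # In a real client this would go over the wire; here we just store it.
        self._positions = list(targets)
```

Because every robot implements the same interface, downstream code - a teleoperation script, a learned policy, a simulator shim - can be written once against `RobotClient` and run against any embodiment.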

What will be the first application for your robot?

Based on market data, the first significant revenue stream for our robots will likely come from educational and enthusiast markets. We developed Z-Bot as an entry-level product to address this market while letting us make progress on our core machine learning tools and shared software ecosystem, including ksim, kos-sim, and KOS, without committing exorbitant capex towards research and development.

Most consumer robots today fail because they are limited, pre-programmed toys that lose appeal after a few hours, so the key opportunity for bringing a new product to market comes from our ability to bring cutting-edge machine learning models into an affordable platform. While other robots in the same product category suffer from the “one and done” problem and end up just sitting on a shelf, Z-Bot is fully programmable in Python and comes with the same sensor and actuator suite found on much more expensive robots.

By being first to market with an affordable, general-purpose humanoid, we aim to set the standard for human-robot interaction, establish early brand affinity, provide a testbed for data collection and iterative model improvement, and initiate a data flywheel to scale our machine learning models.

By executing well on the Z-Bot, we hope to de-risk our longer-term vision of a full-scale humanoid platform that can take on real-world work.

Who are your main competitors?

We compete directly with Tesla and Figure in humanoid robotics. But the real competition isn't just other robotics startups; it's the industries humanoid robots will disrupt.

The biggest market opportunities for embodied AI are in jobs that are dull, dirty, or dangerous. Many of these industries are already struggling with rising labor costs and shortages, making them primed for automation.

Most automation solutions for these industries today are:

  • Too expensive to scale.
  • Too specialized to adapt.
  • Too complex to deploy.

This is where we believe a general-purpose humanoid that learns from experience will be the lowest-friction way to approach these markets at scale, without expensive hardware or engineering resources custom-tailored to each task.

How do you plan to differentiate from your competitors?

We view ourselves as less of a research lab and more of a product company. Since we expect embodied AI methods to advance rapidly over the next few years, our primary focus is on having a deep understanding of our core hardware and software infrastructure, and being able to quickly adapt state-of-the-art methods to our own robots to achieve the best possible performance.

Many other humanoid robot companies started from a hardware-first mindset, treating the problem as a mechanical engineering challenge. That makes sense - building a humanoid robot is hard - but we think the real long-term differentiator won’t be hardware. On the flip side, there are a number of companies which started from a pure-software perspective, focusing on building a common “operating system” for lots of embodiments. Our approach has been to work at the intersection of the two problems and view the problem holistically, with the end goal of bringing a product to market as quickly as possible.

What can your robot do?

The goal is simple: we're building the ultimate interface for AI models to interact with the world. In the limit, you will be able to ask it to do anything a human could do, and it will just do it.

To get there, we’re focused on three core capabilities:

  1. Locomotion - Moving efficiently and stably in real-world environments
  2. Manipulation - Interacting with objects as gracefully as a human can
  3. Perception - Understanding the world well enough to navigate and take meaningful actions

These aren’t separate engineering problems, but rather different facets of the same Software 2.0 system. The key insight is that you don’t want to hand-engineer every little behavior or rely on brittle, task-specific pipelines. You want a model that takes in raw sensory data - video, audio, and proprioception - and directly outputs actions. This approach is the right way to create a data flywheel, since the raw data from each robot can be fed directly into improving the underlying model. This is the same future-forward approach that Tesla takes with self-driving.
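The end-to-end idea above can be made concrete with a toy sketch: a single function that maps a raw observation vector straight to an action vector, with nothing hand-engineered in between. This tiny randomly-initialized MLP is purely illustrative; the actual models are large networks trained on real robot data, not this code.

```python
import math
import random


def make_policy(obs_dim: int, act_dim: int, hidden: int = 32, seed: int = 0):
    """Build a toy two-layer MLP policy: observations in, actions out.

    A stand-in for the Software 2.0 idea - one learned mapping from raw
    sensor data to motor commands, rather than a pipeline of hand-coded
    modules. Weights here are random; a real policy would be trained.
    """
    rng = random.Random(seed)
    w1 = [[rng.gauss(0.0, 0.1) for _ in range(obs_dim)] for _ in range(hidden)]
    w2 = [[rng.gauss(0.0, 0.1) for _ in range(hidden)] for _ in range(act_dim)]

    def policy(obs: list[float]) -> list[float]:
        # Hidden layer with tanh nonlinearity.
        h = [math.tanh(sum(w * o for w, o in zip(row, obs))) for row in w1]
        # Output layer, squashed to [-1, 1] like normalized motor commands.
        return [math.tanh(sum(w * x for w, x in zip(row, h))) for row in w2]

    return policy
```

The point of the sketch is the shape of the interface: because the whole system is one differentiable function of raw data, every logged observation-action pair from a deployed robot can feed directly back into training, which is what makes the data flywheel possible.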

Why use legs instead of a wheeled base?

The humanoid form is the end state of robotics, and in the long term, the vast majority of value from robotics will be captured by the humanoid form factor. The wheeled base is an intermediary that sidesteps important robotics and ML problems. We have already been able to bring the price of our robot down to be competitive with simpler robot form factors, and we expect to reduce it further. Along the way, we have also solved the necessary ML locomotion and actuator control challenges using approaches that we expect will scale much more effectively.

On top of this, we expect that there will be a long-term cost advantage to using legs. Legs and arms share the same actuators, sensors, and control algorithms, reducing complexity and manufacturing costs as production scales. By focusing on a single form factor, we avoid splitting resources across different architectures and instead concentrate on perfecting humanoid mobility. Once you solve bipedal locomotion at scale, you unlock automation for every environment designed for humans, without retrofitting the world for robots.

How will you handle manufacturing and supply chain challenges?

By designing our robot ourselves using COTS components, we’ve kept maximum flexibility in choosing the best hardware partners while avoiding the usual trade and manufacturing bottlenecks that come with outsourcing to an ODM. This gives us far more control over cost, quality, and supply chain resilience.

Open-sourcing our hardware has also been a huge advantage. Instead of chasing down suppliers, they found us, because they could immediately see the components we’re using and offer competitive solutions. This has dramatically streamlined sourcing, helping us move faster and more efficiently than typical hardware startups.

Working with Asian suppliers across time zones is a notorious challenge for startups, but we’ve been able to navigate this far more effectively by keeping our hardware transparent and straightforward. The result: a Figure-level robot in four months, with just five engineers, on an R&D budget under $500K.

That level of speed and capital efficiency is unheard of in this space - and it’s exactly why our approach works.

How will you achieve profitability?

Most humanoid robots today cost well over $100K, which is a non-starter for mass adoption. In contrast, K-Bot has a total BOM under $9K while maintaining comparable battery life and actuator strength to more expensive robots. As supply chains mature and more upstream components reach economies of scale, we expect costs to drop even further, without reducing performance.

We got here by taking a fundamentally different approach:

  • Leveraging existing manufacturers instead of reinventing the wheel.
  • Riding the wave of a rapidly maturing supply chain rather than custom-building expensive, niche components.
  • Being ruthless about design simplicity, following the principle that the best part is no part, to keep costs down while ensuring the hardware is fully optimized for end-to-end neural network control.

By starting with a low-cost, mass-manufacturable design, we avoid the typical robotics trap of building expensive hardware first, then scrambling to cut costs later, often at the expense of performance. Instead, our focus can stay 100% on rapidly improving software capabilities on top of hardware that we already know will scale. We believe this is the most reliable path to long-term profitability.

What are your key milestones over the next 12-24 months?

Our primary focus over the next 12-24 months is rapidly improving our robot’s software capabilities to reach true product-market fit. Hardware alone isn’t enough - what matters is getting to a point where the software is advanced enough to deliver real-world utility at scale.

To that end, our goal is to reach a break-even point for both of our robots, where each robot costs less than the value it provides for a large fraction of people. While the field is advancing rapidly, our current focus is on:

  1. Incorporating our efficient VLA model into our robot platform
  2. Building the world’s best RL and sim2real toolkit
  3. Kickstarting our data feedback loop

When our robot capabilities are sufficiently advanced, we believe we have a design that we can quickly scale using existing mass manufacturing processes.

How do you plan to make your robot safe?

True safety comes from intelligence. We take the same approach as Tesla’s FSD - the best way to ensure real-world safety isn’t by hardcoding rules, but by continuously improving the robot’s intelligence through real-world feedback. A robot that understands its environment, predicts risks, and adapts its behavior accordingly is far safer than one relying on static, pre-programmed safety measures.

In addition to our intelligence-first approach, we also design the hardware for safety in a few key ways. We use low-inertia, back-drivable actuators, which naturally reduce the chance of injury - both to people and to the robot itself - by allowing for smooth, compliant movement instead of rigid, forceful actions. Our models work with damped PID control, meaning that the actuators behave like springs. The robot is also slightly shorter than the average human, keeping it compact enough to minimize risk while still being tall enough to be useful.
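The spring-like behavior described above can be written down in a few lines. This is a generic damped PD law (proportional and derivative terms only; the gains are illustrative, not K-Scale's actual tuning):

```python
def pd_torque(target_pos: float, pos: float, vel: float,
              kp: float = 20.0, kd: float = 2.0) -> float:
    """Damped PD control: the actuator behaves like a spring-damper.

    kp pulls the joint toward its target like a spring, while kd resists
    velocity like a damper, so collisions are absorbed compliantly rather
    than fought with rigid, forceful corrections.
    """
    return kp * (target_pos - pos) - kd * vel
```

The safety property falls out of the physics: if a person pushes a joint away from its target, the restoring torque grows only linearly with the displacement and the damping term opposes fast motion, so the limb yields rather than striking back at full force.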

If you're open-source, how do you plan to make money?

Our business model is simple: we think that the value of humanoid robots lies not in the hardware, but in creating the first viable general-purpose physical AI system that can work in the real world.

The hardware will inevitably become commoditized, just as the smartphone did. The critical path to value is creating the first robot that can reliably perform useful tasks in human environments with minimal friction.

Our edge comes from moving very quickly on three insights that the industry hasn't fully internalized:

  1. Intelligence development, not manufacturing scale, is the current bottleneck. Without solving the core intelligence challenges, humanoid robots are just limited machines, but by solving these challenges, even low-end COTS hardware can become extremely valuable. We've built our hardware platform to be mass-manufacturable from day one, but as a company, we're focused on making the software genuinely useful before scaling production.

  2. End-to-end learning systems are the only path to generalized capability. The traditional robotics approach of hand-engineering behaviors for specific tasks doesn't scale to the complexity of human environments. Our approach of building models that directly map sensory inputs to physical actions through massive datasets and real-world experience is fundamentally more capable of generalizing across diverse environments and tasks.

  3. Open development dramatically accelerates the intelligence feedback loop. By open-sourcing our platform, we gain an insurmountable data and iteration advantage. Companies like DeepSeek have already demonstrated the numerous advantages of building advanced AI systems in the open. Every deployment generates valuable training data, every contributor solves problems we wouldn't have encountered alone, and every integration partner becomes invested in our ecosystem's success.

While giants can outspend us on manufacturing infrastructure, they're constrained by their closed-development approach and the need to validate against internal assumptions. Our open model naturally prioritizes real-world utility over demos that look good but don't generalize.

We also don’t see open-sourcing as a business risk. Historically, AI and robotics companies haven’t failed because competitors copied their work; they failed because they couldn’t execute fast enough to reach product-market fit. Self-driving startups burned billions chasing L5 autonomy and never shipped. The real risk isn’t sharing your work - it’s moving too slowly.

Our near-term revenue comes from educational markets with Z-Bot, software licensing, and design services - but the long-term value is in establishing the definitive platform for embodied AI as this market explodes. When your robot can actually perform useful work across multiple environments without expensive custom engineering, the economics become undeniable. This is the approach that will capture the massive economic potential of physical AI in the $100 trillion global labor market.
