
From Cloud to Space: Optimizing Geospatial Machine Learning Models for Satellite Edge Computing

In the world of Satellite Edge Computing, one of the most significant challenges is optimizing on-ground models for deployment on a spacecraft.


Geospatial models built to run on cloud infrastructure are usually optimized for superior performance. Designing these models with satellite data is in itself a complex task that requires iterating over architectures comprising millions of parameters. Because power and resources are effectively unconstrained on cloud infrastructure, these models can afford to be highly accurate, feature-rich, and scalable. However, this also means they are large and computationally intensive, requiring substantial memory, storage, and processing power.



When deploying these models in space, certain characteristics become significant limiting factors. The stringent constraints of spacecraft, such as limited cores, memory, and power allocation, make it impractical to use these large, resource-intensive models directly. For example, typical on-board compute platforms, known as Data Processing Units (DPUs), run on 10–50 watts of solar power for edge processing, compared with the several kilowatts a data center consumes for similar operations. Necessary trade-offs must also be made between accuracy, efficiency, and inference time, depending on the spacecraft's mission requirements. And finally, uplink bandwidth on the order of a few tens to a few hundred bits per second is a major limiting factor when uploading new software to satellites.
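A quick back-of-envelope calculation shows why uplink bandwidth dominates. The figures below are illustrative assumptions for the sake of the arithmetic, not mission specifications:

```python
def uplink_days(model_bytes: int, bits_per_second: float) -> float:
    """Days required to transfer a payload of the given size
    over an uplink of the given rate (ignoring protocol overhead
    and contact-window gaps, which only make things worse)."""
    return (model_bytes * 8) / bits_per_second / 86_400  # 86,400 s per day

# Even a modest, hypothetical 10 MB model over a 100 bps uplink
# would take on the order of ten days of continuous transmission.
days = uplink_days(10 * 1024 * 1024, 100)
```

In practice, contact windows are intermittent, so the real figure would be far larger, which is why models must be compressed before launch rather than re-uploaded casually.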


Therefore, simply uploading a ground-optimized model to a spacecraft and running it is not feasible. Instead, these models must be meticulously optimized and used judiciously, ensuring they perform essential tasks efficiently and only when necessary.


At Little Place Labs, we specialize in overcoming these challenges. We leverage techniques such as Neural Architecture Search, Knowledge Distillation, and Quantization to optimize on-ground models so that they operate efficiently, and with high accuracy, within the constraints of spacecraft. In collaboration with the team at ESA's φ-lab (Phi-lab), we are developing ultra-efficient convolutional neural networks for a demo mission we plan to fly this year, maintaining model accuracy above 90% while drastically reducing computational load and memory footprint.
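To give a flavor of one of these techniques, here is a minimal NumPy sketch of the knowledge-distillation loss: the small "student" model is trained to match the temperature-softened output distribution of the large "teacher". The temperature value and the T² scaling are standard choices for illustration, not our flight configuration:

```python
import numpy as np

def softmax(z: np.ndarray, T: float = 1.0) -> np.ndarray:
    """Temperature-scaled softmax; higher T produces softer distributions."""
    e = np.exp((z - z.max(axis=-1, keepdims=True)) / T)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits: np.ndarray,
                      teacher_logits: np.ndarray,
                      T: float = 4.0) -> float:
    """KL divergence between softened teacher and student outputs,
    scaled by T**2 so gradient magnitudes stay comparable across
    temperatures (the standard formulation)."""
    p = softmax(teacher_logits, T)  # soft targets from the teacher
    q = softmax(student_logits, T)  # student predictions
    return float(T**2 * np.sum(p * (np.log(p) - np.log(q)), axis=-1).mean())
```

The loss is zero when the student reproduces the teacher's distribution exactly and grows as the two diverge, so minimizing it transfers the teacher's "dark knowledge" about class similarities into a much smaller network.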



We have successfully compressed our machine learning models to 1/20 of their original size and are on track to reach 1/50. We continue to focus on developing state-of-the-art machine learning models that are optimized to operate within the constraints of spacecraft environments. From the ground up, our models are designed with the specific hardware limitations in mind, allowing us to fine-tune algorithms for the processing power, memory, and energy constraints typical of space conditions. This approach includes continuous evaluation of the trade-offs between accuracy and efficiency, ensuring our ML software performs reliably under space conditions. By conducting demo missions, we gain first-hand experience with these unique challenges and trade-offs, allowing us to develop the most efficient solutions for various use cases. Our expertise spans deploying the same machine learning model on a variety of Data Processing Units built on Xilinx-, NVIDIA-, and Intel-based AI accelerators, with FPGA-based CNN implementations coming soon.
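To illustrate where part of such a size reduction comes from, here is a minimal, hypothetical sketch of symmetric post-training quantization in NumPy. Storing weights as int8 instead of float32 alone yields a 4× reduction; the larger factors quoted above combine quantization with pruning, distillation, and architecture changes:

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor post-training quantization of float32
    weights to int8; returns the quantized tensor and its scale."""
    scale = float(np.abs(weights).max()) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 weights for inference."""
    return q.astype(np.float32) * scale

np.random.seed(0)
w = np.random.randn(256, 256).astype(np.float32)  # stand-in weight matrix
q, s = quantize_int8(w)

ratio = w.nbytes / q.nbytes            # 4.0: four bytes down to one
err = np.abs(w - dequantize(q, s)).max()  # bounded by half the scale step
```

Per-channel scales and quantization-aware training typically recover most of the small accuracy loss this introduces, which is why the accuracy/efficiency trade-off has to be re-evaluated continuously rather than assumed.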


If you are excited about our research and would like to learn more, please connect with one of our research team members. 

