
Physical general intelligence for general purpose robotics based on neuroscience

Panacea brings the neuroscience of physical general intelligence to autonomous robotics, empowering engineers to reimagine products and services.

VISION

To automate physical interaction tasks in general and overcome the limitations of state-of-the-art approaches, we imitate how the cerebrum transforms sensory signals into motor control.

The human and animal cerebrum integrates information from the hippocampal-entorhinal complex and the somatosensory cortex into a neural representation of body position in space, and integrates information from the visual cortex into neural representations of surrounding objects.

 

The motor cortex computes trajectories of physical interactions in a perceptual feedback loop, making the interaction happen.

Machine learning models in robotics are not capable of performing interactions in general, only specific ones.

Biological neural networks operate on the perception of objects in space, whereas visual models in robotics follow a morphological approach and perceive objects in a semantic rather than a spatial sense.

Hierarchical processing of sensory signals in the visual cortex creates spatial representations of objects for transmission to the motor cortex, so interactions are based on spatial rather than semantic perception.


TECHNOLOGY

Using topology, we imitate visual spatial perception and motion planning by creating an agent-environment computational space.

TOPOLOGICAL PERCEPTION

We developed a topological approach that explores visual and spatial sensory data to extract topological manifolds, simulating the spatial perception of an object in the visual cortex, in order to build a scene of the surrounding environment.
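As a rough illustration of what extracting topological structure from spatial sensor data can mean in practice, here is a minimal Python sketch that segments a point cloud into connected components, one of the simplest topological invariants (the 0th Betti number). The point cloud, radius, and function names are assumptions chosen for the example; this is a toy, not our pipeline.

```python
# Minimal sketch: recovering coarse topological structure (connected
# components, i.e. the 0th Betti number) from a 3-D point cloud.
# All names and thresholds here are illustrative assumptions.
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

def segment_by_connectivity(points: np.ndarray, radius: float):
    """Group points into connected pieces: points closer than
    `radius` are treated as lying on the same surface."""
    tree = cKDTree(points)
    pairs = np.array(list(tree.query_pairs(radius)))  # proximity-graph edges
    n = len(points)
    if len(pairs) == 0:
        return n, np.arange(n)
    graph = coo_matrix(
        (np.ones(len(pairs)), (pairs[:, 0], pairs[:, 1])), shape=(n, n)
    )
    return connected_components(graph, directed=False)

# Toy scene: two well-separated clusters stand in for two objects.
rng = np.random.default_rng(0)
cloud = np.vstack([rng.normal(0.0, 0.05, (200, 3)),
                   rng.normal(1.0, 0.05, (200, 3))])
n_objects, labels = segment_by_connectivity(cloud, radius=0.1)
print(f"{n_objects} objects found")  # -> 2
```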


INTEGRATED SPACE FOR AGENT AND ENVIRONMENT

We place the agent and the surrounding environment in an integrated computational space based on topological relationships, which allows our models to learn trajectories in a continuous feedback loop, similar to the processes that occur during physical interactions in the motor cortex.
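A minimal sketch of such a closed perception-action loop, with the agent, its goal, and an obstacle all expressed in one shared coordinate frame. The toy potential-field controller below is an assumption made purely for illustration, not our model:

```python
# Toy closed feedback loop in a shared agent-environment frame:
# each iteration re-senses positions and replans the next step.
import numpy as np

def step(agent, goal, obstacle, gain=0.2, repel=0.01, cutoff=0.3):
    """One feedback iteration: attract toward the goal, push away
    from the obstacle when it comes within the cutoff distance."""
    to_goal = goal - agent
    away = agent - obstacle
    d = np.linalg.norm(away)
    push = repel * away / d**3 if d < cutoff else np.zeros_like(agent)
    return agent + gain * to_goal + push

agent = np.array([0.0, 0.0])
goal = np.array([1.0, 1.0])
obstacle = np.array([0.6, 0.4])          # slightly off the straight-line path

for _ in range(50):                      # continuous feedback loop
    agent = step(agent, goal, obstacle)  # re-sense, re-plan, act

print(np.round(agent, 3))  # ends near the goal, having skirted the obstacle
```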

COMPLEXITY OF INTERACTION

We use the integrated agent-environment computational space to create a motion language for planning complex behavior in physical interactions: a way to express simple interactions in a trainable form and to separate the stages of multi-action interactions in order to learn complex scenarios.
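To make the idea concrete, here is a minimal sketch of what a motion language can look like: simple interactions become named, parameterized primitives, and a multi-action interaction is a sequence of them. The primitive names and the pick-and-place decomposition are illustrative assumptions, not our actual vocabulary:

```python
# Toy "motion language": primitives as trainable units, sequences as
# decomposed multi-action interactions. Names are illustrative only.
from dataclasses import dataclass

@dataclass
class Primitive:
    name: str        # e.g. "reach", "grasp", "release"
    target: str      # object or location the primitive acts on

def pick_and_place(obj: str, destination: str) -> list[Primitive]:
    """Decompose a multi-action interaction into separable stages."""
    return [
        Primitive("reach", obj),
        Primitive("grasp", obj),
        Primitive("reach", destination),
        Primitive("release", destination),
    ]

for stage in pick_and_place("cup", "shelf"):
    print(f"{stage.name} -> {stage.target}")
```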


PRODUCT

We are building a platform to automate physical interactions in general, based on live demonstration.

We believe that live demonstration is the only efficient way to bypass the problem of hidden stages in complex scenarios that arises with pre-trained learning approaches. Thanks to topological perception, our model can generalize physical interaction tasks at the level of spatial understanding, and we use semantic recognition to generalize tasks at the level of semantic understanding.
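As a toy sketch of spatial generalization from a single demonstration: waypoints recorded relative to the demonstrated object's pose are replayed against a new object's pose. The 2-D rigid-transform retargeting below is an assumed simplification for illustration, not our actual system:

```python
# Toy demonstration retargeting: map demo waypoints from the demo
# object's frame into a new object's frame. Illustrative only.
import numpy as np

def pose(x, y, theta):
    """2-D rigid transform as a 3x3 homogeneous matrix."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x], [s, c, y], [0, 0, 1]])

def retarget(waypoints, demo_pose, new_pose):
    """Express demo waypoints in the demo object's frame, then map
    them into the new object's frame."""
    to_object = np.linalg.inv(demo_pose)
    pts = np.c_[waypoints, np.ones(len(waypoints))]   # homogeneous coords
    return (pts @ to_object.T @ new_pose.T)[:, :2]

demo = np.array([[0.0, 0.0], [0.5, 0.0], [0.5, 0.3]])  # recorded once
retargeted = retarget(demo, pose(0, 0, 0), pose(1, 2, np.pi / 2))
print(np.round(retargeted, 2))  # same motion, relocated to the new pose
```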

AREAS OF USE

A task that can be demonstrated can be automated.

A complex physical interaction that can be decomposed into a series of simpler interactions, each of which can be demonstrated, can be automated with our physical general intelligence platform.

To find out more about what we're building, please drop us a line at

  • X
  • GitHub

© 2025 Panacea Robotics.

Stay in the loop

To stay up to date with everything we do on the general-purpose robotics AI frontier, please subscribe to our newsletter.

