VISION
To automate physical interaction tasks in general and overcome the limitations of state-of-the-art approaches, we imitate how the cerebrum transforms sensory signals into motor control.
The human and animal cerebrum integrates information from the hippocampal-entorhinal complex and the somatosensory cortex into a neural representation of body position in space, and integrates information from the visual cortex into neural representations of surrounding objects.
The motor cortex computes trajectories of physical interactions in a perceptual feedback loop, making the interaction happen.
Machine learning models in robotics are not capable of performing interactions in general, only specific ones.
Biological neural networks operate on the perception of objects in space, whereas visual models in robotics follow a morphological approach and perceive objects in a semantic rather than spatial sense.
Hierarchical processing of sensory signals in the visual cortex creates spatial representations of objects for transmission to the motor cortex, so interactions are based on spatial rather than semantic perception.

TECHNOLOGY
Using topology, we imitate visual spatial perception and motion planning by creating an agent-environment computational space.
TOPOLOGICAL PERCEPTION
We developed a topological approach that explores visual and spatial sensory data to extract topological manifolds, simulating the spatial perception of an object in the visual cortex, in order to build a scene of the surrounding environment.
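As an illustration of the idea only (a minimal sketch in Python, not our actual pipeline), the snippet below extracts the coarsest topological feature of a 3-D point cloud, its connected components, and treats each component as one object of the scene. The epsilon-neighborhood construction and the toy two-blob cloud are assumptions made purely for this example.

import numpy as np

def connected_components(points: np.ndarray, eps: float) -> list:
    """Group points whose epsilon-neighborhood graph links them together."""
    n = len(points)
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    adjacency = dists < eps                       # epsilon-neighborhood graph
    labels = np.full(n, -1)
    current = 0
    for seed in range(n):
        if labels[seed] != -1:
            continue
        frontier = [seed]                         # iterative flood fill
        labels[seed] = current
        while frontier:
            i = frontier.pop()
            for j in np.flatnonzero(adjacency[i]):
                if labels[j] == -1:
                    labels[j] = current
                    frontier.append(j)
        current += 1
    return [points[labels == c] for c in range(current)]

# Usage: two well-separated blobs are recovered as two scene objects.
rng = np.random.default_rng(0)
cloud = np.vstack([rng.normal(0.0, 0.05, (100, 3)),
                   rng.normal(1.0, 0.05, (100, 3))])
print(len(connected_components(cloud, eps=0.2)))  # -> 2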


INTEGRATED SPACE FOR AGENT AND ENVIRONMENT
We place the agent and the surrounding environment in an integrated computational space based on topological relationships, which allows our models to learn trajectories in a continuous feedback loop, similar to the processes that occur in the motor cortex during physical interactions.
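A minimal sketch of this idea, under assumed toy dynamics: the agent position and the perceived target live in the same coordinate frame, and the trajectory is produced by a closed perception-action loop with a simple proportional correction. The drifting target and the gain value are illustrative assumptions, not part of our method.

import numpy as np

def feedback_loop(agent_pos, perceive_target, gain=0.2, steps=100, tol=1e-3):
    """Drive the agent toward the perceived target, re-reading perception at
    every step so that planning and execution form one closed loop."""
    pos = np.asarray(agent_pos, dtype=float)
    trajectory = [pos.copy()]
    for _ in range(steps):
        target = perceive_target()        # fresh perceptual feedback
        error = target - pos              # error measured in the shared space
        if np.linalg.norm(error) < tol:
            break
        pos = pos + gain * error          # proportional correction of the path
        trajectory.append(pos.copy())
    return np.stack(trajectory)

# Usage: the target drifts during execution; the closed loop tracks it anyway.
drift = iter(np.linspace([1.0, 0.0, 0.5], [1.0, 0.3, 0.5], 200))
path = feedback_loop([0.0, 0.0, 0.0], lambda: next(drift))
print(path[-1])                           # close to the target's current position
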
COMPLEXITY OF INTERACTION
We use the integrated agent-environment computational space to create a motion language for planning complex behavior in physical interactions: a way to express simple interactions in a trainable form and to separate the stages of multi-action interactions in order to learn complex scenarios.
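A minimal sketch of what such a motion language could look like (the primitive names and the pick-and-place example are illustrative assumptions, not our actual vocabulary): simple interactions are parameterized primitives, and a complex interaction is an ordered composition of stages that can be trained and demonstrated separately.

from dataclasses import dataclass, field

@dataclass
class Primitive:
    """A simple interaction expressed in a trainable form."""
    name: str                       # e.g. "reach", "grasp", "release"
    params: dict = field(default_factory=dict)

    def execute(self, state: dict) -> dict:
        # Placeholder dynamics: record that this stage was applied.
        state.setdefault("log", []).append((self.name, self.params))
        return state

@dataclass
class Interaction:
    """A complex interaction as an ordered sequence of primitives."""
    stages: list

    def execute(self, state: dict) -> dict:
        for stage in self.stages:
            state = stage.execute(state)
        return state

# Usage: a pick-and-place behavior decomposed into separately learnable stages.
pick_and_place = Interaction(stages=[
    Primitive("reach", {"target": "cup"}),
    Primitive("grasp", {"force": 5.0}),
    Primitive("reach", {"target": "shelf"}),
    Primitive("release", {}),
])
result = pick_and_place.execute({})
print([name for name, _ in result["log"]])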


PRODUCT
We are building a platform to automate physical interactions in general, based on live demonstration.
We believe that live demonstration is the only efficient way to bypass the problem of hidden stages in complex scenarios that arises when using a pre-trained learning approach. Thanks to topological perception, our model can generalize physical interaction tasks at the level of spatial understanding. We also use semantic recognition to generalize tasks at the level of semantic understanding.
AREAS OF USE
A task that can be demonstrated can be automated.
A complex physical interaction that can be decomposed into a series of simpler interactions, which in turn can be demonstrated, can be automated by using our physical general intelligence platform.
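As a hedged sketch of how such a decomposition could be obtained from a recording (the pause-based heuristic and the toy trajectory are assumptions for illustration only): split the demonstrated trajectory at near-zero-velocity pauses and treat each segment as one simpler, demonstrable interaction.

import numpy as np

def segment_demonstration(positions: np.ndarray, pause_speed: float = 1e-3):
    """Split a (T, D) trajectory wherever the speed drops below a threshold."""
    speeds = np.linalg.norm(np.diff(positions, axis=0), axis=1)
    pauses = np.flatnonzero(speeds < pause_speed)
    segments, start = [], 0
    for p in pauses:
        if p + 1 - start > 1:                # skip zero-length segments
            segments.append(positions[start:p + 1])
        start = p + 1
    if len(positions) - start > 1:
        segments.append(positions[start:])
    return segments

# Usage: a demonstration with one pause splits into two simpler stages.
demo = np.vstack([np.linspace([0, 0], [1, 0], 50),
                  np.repeat([[1, 0]], 10, axis=0),   # the pause
                  np.linspace([1, 0], [1, 1], 50)])
print(len(segment_demonstration(demo)))              # -> 2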