Get in touch
contact@minds.ai
 
U.S. 
101 Cooper St. 
Santa Cruz, CA 95060
 
India
Minds Artificial Intelligence Technologies Pvt. Ltd.
1st Floor, Anugraha, 174, 19th Main Rd,
Sector 4, HSR Layout,
Bengaluru, Karnataka, 560102
 
Europe
Amsterdam, the Netherlands

DeepSim: automated controller design

(A platform, based on Deep Reinforcement Learning, for designing embedded or cloud-based controller software for a wide range of applications, from electromechanical systems to business processes.)

What is a controller?

Traditionally, a controller is a module placed in a feedback loop with a system and designed to achieve optimal performance. As illustrated in the left panel of the figure below, the output of the system (y) is fed back to the controller as an error term (e), calculated by comparing y with a reference signal (r). The controller tries to drive the error to zero by adjusting its output (u).
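As a sketch (not DeepSim code), the feedback loop above can be written in a few lines of Python; the first-order plant model and gain below are illustrative assumptions:

```python
# Minimal proportional-feedback sketch: drive a system's output y toward
# a reference r by feeding back the error e = r - y. The plant model and
# gain kp are toy values chosen for illustration.
def simulate(r=1.0, kp=0.5, steps=50):
    y = 0.0                      # system output
    for _ in range(steps):
        e = r - y                # error term fed to the controller
        u = kp * e               # controller output (P-control here)
        y = y + 0.5 * u          # toy plant: output moves toward u
    return y

print(round(simulate(), 3))      # → 1.0 (output converges to the reference)
```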

In a Reinforcement Learning (RL) system, the traditional controller becomes the "agent", as illustrated in the right panel of the figure below. The reference and error signals of the traditional setup are replaced by the "reward" function (Rt).

Figure 1. Left: a traditional controller. Right: a reinforcement learning system training the controller.
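The reward-driven loop in the right panel can be sketched as follows; the two-action environment and the incremental value-update rule are illustrative assumptions, not part of the DeepSim platform:

```python
import random

# Sketch of the RL loop: the agent replaces the controller, and a scalar
# reward R_t replaces the reference and error signals. The two-action
# environment and update rule here are toy examples for illustration.
def train(episodes=2000, eps=0.1, lr=0.1, seed=0):
    rng = random.Random(seed)
    q = [0.0, 0.0]                       # value estimate per action
    for _ in range(episodes):
        # epsilon-greedy: mostly exploit the best-known action
        a = rng.randrange(2) if rng.random() < eps else q.index(max(q))
        r_t = 1.0 if a == 1 else 0.0     # reward: action 1 is the good one
        q[a] += lr * (r_t - q[a])        # move estimate toward the reward
    return q

print(train())                           # action 1's estimate approaches 1.0
```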

What is DeepSim?

 

DeepSim is a versatile platform that can generate controller software for almost any optimization problem where a simulator is available. In the absence of a simulator, the platform can also use historical data about the environment being controlled. Hardware-in-the-Loop (HIL) systems that are robust enough to allow for some trial and error can also be used to train the controller.

 

DeepSim can be used to generate a software controller for any system with sensors and actuators. The sensors are inputs to the controller, and the controller decides how to trigger the actuators so that optimum performance is achieved when the system is operational. Optimum performance is specified by the designer during the RL training process using the "reward function".

DeepSim can also be used to generate intelligent agents for business process optimization tasks. In this case the inputs to the agent could be detailed information about the current state of the business process, and the output could be a recommendation about what action to take next. The agent could be trained on a simulation of the process, or on historical data and logs, to optimize for a certain outcome as specified in the "reward function". Very sophisticated reward functions can be specified that optimize for complex tradeoffs.
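One hedged sketch of how such a multi-objective tradeoff might be encoded; the state fields and weights below are hypothetical examples, not part of the DeepSim API:

```python
# Hypothetical multi-objective reward: balance throughput against energy
# cost and safety. Field names and weights are illustrative assumptions.
def reward(state, w_throughput=1.0, w_cost=0.5, w_violation=10.0):
    r = w_throughput * state["units_processed"]
    r -= w_cost * state["energy_used"]
    r -= w_violation * state["safety_violations"]   # heavy safety penalty
    return r

print(reward({"units_processed": 20,
              "energy_used": 8,
              "safety_violations": 0}))             # → 16.0
```

Raising `w_violation` relative to the other weights is how a designer would express "safety dominates throughput" in this style of reward.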

 

Why do our customers need DeepSim?

Today we are witnessing the convergence of the following technology trends:
 

  1. Autonomous driving: land, water and sky.

  2. Electrification: alternative energy sources.

  3. Smart environments: intelligent edge devices enabling automation of homes to factories and cities.

  4. Big data: availability of big data from a multitude of sensors and large networked systems.

 

These technology trends are driving the need for AI agents that can perform sophisticated decision making, acting autonomously and often in real time. Electrification and autonomy are driving the emergence of complex and highly variable designs; for example, today there are more than 1,000 different drone designs in commercial production. Vehicle design is also undergoing fundamental changes not seen since the invention of the internal combustion engine. And finally, these smart environments and vehicles are performing ever more complex and diverse jobs that are too dangerous or even impossible for human agents.

The availability of big data from a multitude of on-device sensors (traffic, enterprise business networks, etc.) enables the autonomous design of intelligent controllers using DeepSim. Such sensor data is used to train our models to make efficient decisions in a very high-dimensional space. With traditional methods, many inputs and outputs remain unoptimized because the complexity is beyond human capacity.

 

As a result, the manufacturers of these vehicles and factories face an enormous challenge: rapidly designing, testing, and deploying smart controllers for optimal control under diverse and challenging circumstances. Such controllers are very complex:

  • Need to monitor a large number of input sensors

  • Based on those inputs, need to drive actuators (make decisions) to optimize various performance criteria such as range, battery life, and passenger/load safety

  • Need to be updated regularly to deal with new designs, changing environments, and new use cases

 

DeepSim was created to offer our customers a comprehensive platform for solving the above challenges, creating a fundamentally new methodology for the automatic generation of embedded control software.

 

In summary, these are the limitations of traditional methods for (embedded) software controllers:

  • Difficult to model when transfer functions are not defined

  • Models become unstable when the control space becomes large

  • Heuristics require a lot of manual labor


 

Benefits of Deep Reinforcement Learning based controllers:

  • A system modeled by layers of linear combinations followed by non-linear activation functions, which can represent:

    • Linear systems

    • Logical systems alone or in combination with a linear system 

    • Time series, via techniques such as LSTMs and RNNs

  • Can handle large input and output space dimensionality

  • Do not require knowledge about transfer function or heuristic models

  • Can be trained purely from examples
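The first bullet can be made concrete with a minimal sketch: one hidden layer of linear combinations followed by a threshold non-linearity computes XOR, a logical function no single linear layer can represent. The weights below are hand-picked for illustration, not learned:

```python
# A "layer of linear combinations followed by a non-linear activation",
# with hand-set weights that make the network compute XOR -- a logical
# function that no purely linear system can represent.
def step(x):                      # simple threshold non-linearity
    return 1.0 if x > 0 else 0.0

def layer(inputs, weights, biases):
    return [step(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def xor(a, b):
    h = layer([a, b], [[1, 1], [-1, -1]], [-0.5, 1.5])   # hidden: OR, NAND
    return layer(h, [[1, 1]], [-1.5])[0]                 # output: AND

print([xor(a, b) for a in (0, 1) for b in (0, 1)])       # → [0.0, 1.0, 1.0, 0.0]
```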


 

DeepSim: applicability and use cases

 

The following figure illustrates the wide applicability of the DeepSim platform. 

In this figure, the x-axis represents the frequency with which the agent has to make decisions, from fast (milliseconds, on the left) to slow (hours or days, on the right). Fast agents are typically deployed in embedded real-time controllers such as vehicles and drones. Slow agents typically operate at human timescales and can be hosted as a SaaS server in the cloud that is queried to make decisions.

The y-axis represents the complexity of the agent's decision-making process. Simple agents with low complexity typically monitor a few (<10) input signals and control a few actuators (<5); these occupy the lower region of the plot. At the high end of the y-axis are highly complex systems with tens to hundreds of input signals and dozens of actuators or output signals.

DeepSim-like tools are urgently needed for applications in the upper two and lower-left quadrants. DeepSim has been designed to serve this market need and to be efficient in those three quadrants. An overview of verticals where DeepSim is, and can be, applied is shown in the figure below.

DeepSim: use case examples

DeepSim improves hybrid car range with a software update

For electric and hybrid vehicle manufacturers, the top key performance indicator (KPI) is range. The minds.ai DeepSim platform was used to create controller software that managed the power source (IC engine vs. battery). One mode maximized economy and the other maximized performance. Both modes resulted in lower fuel consumption and a higher battery charge than the default controller.

 

DeepSim RL-Builder uses deep reinforcement learning to create embedded controller software (DeepSim Solutions) to improve performance across many use cases and verticals while also reducing R&D time.

For more details, see the project brief.