Production scheduling

AI-based solution

We construct and deliver customized production scheduling solutions, based on standard heuristics and various AI techniques, that help you allocate system resources intelligently in order to save time & money.

Production line simulation & job scheduling

Common practices in production scheduling

  • Manual methods performed by human planners, which carry a higher risk of error.
  • Heuristic rules integrated into complex software tools, such as First In First Out (FIFO), Shortest Processing Time (SPT), Shortest Setup Time (SST), etc., may find good solutions but can miss other unique and unexpected ones (a simple sketch of such dispatching rules follows this list).
  • Global optimization approaches such as genetic algorithms can find better solutions but are computationally expensive.
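To make the dispatching rules above concrete, here is a minimal sketch of how FIFO, SPT and SST ordering could be applied to a queue of waiting jobs. The Job fields and the rule table are illustrative assumptions, not our production implementation.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Job:
    # Hypothetical job attributes used only for this illustration.
    job_id: str
    arrival_time: float      # when the job entered the queue
    processing_time: float   # estimated time on the machine
    setup_time: float        # estimated changeover time before the job

# Each dispatching rule is simply a sort key over the waiting jobs.
DISPATCHING_RULES: Dict[str, Callable[[Job], float]] = {
    "FIFO": lambda j: j.arrival_time,     # First In First Out
    "SPT":  lambda j: j.processing_time,  # Shortest Processing Time
    "SST":  lambda j: j.setup_time,       # Shortest Setup Time
}

def schedule(queue: List[Job], rule: str) -> List[Job]:
    """Return the jobs in the order the chosen heuristic would run them."""
    return sorted(queue, key=DISPATCHING_RULES[rule])

if __name__ == "__main__":
    queue = [
        Job("A", arrival_time=0.0, processing_time=5.0, setup_time=1.0),
        Job("B", arrival_time=1.0, processing_time=2.0, setup_time=0.5),
        Job("C", arrival_time=2.0, processing_time=8.0, setup_time=0.2),
    ]
    print([j.job_id for j in schedule(queue, "SPT")])  # -> ['B', 'A', 'C']
```

Such rules are fast and predictable, which is why they are popular, but the schedule they produce is only as good as the single criterion they sort by.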

Reinforcement learning-based solutions

  • An RL agent can discover unique and unexpected solutions.
  • This approach can search the space of possible actions more effectively, somewhat similarly to global genetic optimization algorithms.
  • Training the agent takes more time, but once trained, an RL agent can generate new production schedules very quickly.
  • RL is not suitable for every case, but it is generally very likely to outperform classical methods.

Our solutions can increase your manufacturing throughput by

5-20%

Reinforcement learning (RL)

  • a powerful machine learning approach using SARSA (State-Action-Reward-State-Action) – a sketch of the update rule is given after this list
  • the agent is trained in a simulation loop over a huge number of actions, with rewards/penalizations recorded continuously, from which the agent learns and improves its next decisions
  • the RL agent acts as a digital twin of a real human scheduler
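As a hedged illustration of the SARSA idea named above (a generic textbook sketch, not our actual implementation), the snippet below shows the tabular SARSA update: the agent picks an action, observes a reward or penalization together with the next state-action pair, and nudges its value estimate toward that outcome.

```python
import random
from collections import defaultdict

ALPHA = 0.1    # learning rate
GAMMA = 0.95   # discount factor
EPSILON = 0.1  # exploration rate

# Q[(state, action)] -> estimated long-term value; states/actions are abstract keys.
Q = defaultdict(float)

def choose_action(state, actions):
    """Epsilon-greedy selection over the actions available in this state."""
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def sarsa_update(state, action, reward, next_state, next_action):
    """State-Action-Reward-State-Action: move Q toward the observed return."""
    target = reward + GAMMA * Q[(next_state, next_action)]
    Q[(state, action)] += ALPHA * (target - Q[(state, action)])
```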

Reinforcement learning

A simplified schematic description of the RL agent in real production

Before being deployed in real production, the RL-based agent first needs to learn the behaviour of the production system so that it can control it.

For that, the agent needs to be provided with sufficient information about the system before each decision (amount of stored resources (components), list of orders, occupation of workplaces, available workers, production machines, etc.).

This is done through a simulation loop in which the agent learns which decisions (e.g. assigning production of order X to workplace A) are good choices and which are bad, always based on the current production status – the agent performance assessment. Assigning an order to production when its components are not ready in the warehouse leads to a PENALIZATION for the agent; finishing an order on time leads to a REWARD.
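A minimal sketch of such a simulation loop is shown below. The state fields, order data and reward values are hypothetical placeholders; they only illustrate the penalization for scheduling an order whose components are missing and the reward for finishing on time.

```python
import random
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class ProductionState:
    stock: Dict[str, int]        # component -> quantity in the warehouse
    pending_orders: List[str]    # orders waiting to be assigned
    time: int = 0

# Hypothetical data: the component each order needs and its due time.
ORDER_COMPONENT = {"X": "motor", "Y": "frame", "Z": "motor"}
ORDER_DUE = {"X": 3, "Y": 5, "Z": 2}

def step(state: ProductionState, order: str) -> float:
    """Assign one order to a workplace and return the agent's reward."""
    state.time += 1
    component = ORDER_COMPONENT[order]
    if state.stock.get(component, 0) <= 0:
        return -10.0                         # PENALIZATION: no components ready
    state.stock[component] -= 1
    state.pending_orders.remove(order)
    return 5.0 if state.time <= ORDER_DUE[order] else -1.0  # REWARD if on time

if __name__ == "__main__":
    state = ProductionState(stock={"motor": 1, "frame": 1},
                            pending_orders=["X", "Y", "Z"])
    total = 0.0
    while state.pending_orders and state.time < 10:
        order = random.choice(state.pending_orders)  # stand-in for the RL policy
        total += step(state, order)
    print(f"finished at t={state.time} with total reward {total:.1f}")
```

During training, the random choice above would be replaced by the agent's own (e.g. epsilon-greedy) policy, and every reward or penalization would feed an update such as the SARSA rule sketched earlier.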

RL schematic

Once the RL agent is trained (optimized), i.e. it provides optimal decisions that fulfil the given objectives, it can be deployed into real production.
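As a rough sketch of what deployment could look like (an assumption for illustration, not our actual interface), a trained agent reduced to a greedy lookup over its learned values can generate a new schedule almost instantly, because no learning happens at this stage:

```python
from typing import Callable, Dict, List, Tuple

def generate_schedule(q_values: Dict[Tuple[str, str], float],
                      state: str,
                      pending_orders: List[str],
                      transition: Callable[[str, str], str]) -> List[str]:
    """Greedy rollout of a trained policy: repeatedly pick the order with the
    highest learned value in the current state, then advance the state."""
    schedule: List[str] = []
    orders = list(pending_orders)
    while orders:
        best = max(orders, key=lambda o: q_values.get((state, o), 0.0))
        schedule.append(best)
        orders.remove(best)
        state = transition(state, best)   # hypothetical production-state update
    return schedule

if __name__ == "__main__":
    # Toy learned values and a toy state transition, for illustration only.
    q = {("idle", "X"): 1.0, ("idle", "Y"): 0.5, ("after_X", "Y"): 2.0}
    print(generate_schedule(q, "idle", ["X", "Y"], lambda s, o: f"after_{o}"))  # ['X', 'Y']
```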

RL schematic 02

Get in touch

Czech Republic

K Hnízdům 221/7, 301 00 Plzeň, Czech Republic

South Korea

Rosedale Bld. #1632, 280 Gwangpyeong-ro, Gangnam-gu, Seoul, Korea