How to implement data processing in the IoT?

A graduate in industrial computing (Master Pro in Automation, Electronics and Industrial Computing) from the Université de Caen, Paul has worked at Robert BOSCH (France) since 2017. Experienced in project management and software development in the field of embedded electronics, he currently leads a development team at the Bosch Mondeville R&D office. This role allows him to pinpoint customer needs and support customers throughout their projects.

Until recently, data was extracted only when a problem occurred or for maintenance purposes. The feedback and lessons that could be drawn from the collected data were usually applied only at the product design stage. With the rise of Big Data and artificial intelligence, it is now desirable for this data to make the service around the product evolve in a dynamic and continuous way.

At the same time, the growing number of communicating objects worldwide shows how crucial it is to define the needs precisely: avoid overloading existing networks and size the infrastructure properly, without overlooking data security or environmental impact.

Governance and value chain

Data governance thus defines the procedures and organisational structures that companies must put in place to control the collection and use of data. It is essential for ensuring data security and for defining the legal framework around it.

The need to collect, process and then add value to this data can only be met with a well-controlled value chain, i.e. one that was thought through when the ecosystem around the service was designed.

Each of the “states” of the data corresponds to a link in the value chain.

This illustration highlights that only the acquisition phase necessarily takes place in the IoT product itself. The rest can take place in a gateway (edge computing) and/or in the cloud (cloud computing).

Possible locations for data processing

Classically, data processing can take place in two locations: as close as possible to the point of collection, known as edge computing, or remotely in the cloud. The two approaches raise different issues for the IoT product, in terms of both the cost of processing the data (computing power, etc.) and of making it available (connectivity, etc.).

Edge computing takes place close to the sensor acquiring the data, either in the MCU embedded in the product or in the gateway that connects the product to the Internet. It reflects the desire to distribute computing and limit the amount of data uploaded to the cloud. As the number of objects multiplies, this solution tends to become more relevant than processing everything in the cloud.
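The edge-side idea of limiting uploads can be illustrated by a minimal sketch, assuming a gateway that keeps a short window of raw sensor samples and sends only a compact summary to the cloud (the function name and payload fields here are illustrative, not from the article):

```python
# Sketch: edge-side aggregation before upload (hypothetical names).
# Instead of streaming every raw sample to the cloud, the gateway
# reduces a window of readings to a few statistics.

from statistics import mean

def summarise_window(samples):
    """Reduce a window of raw sensor samples to a small summary payload."""
    return {
        "count": len(samples),
        "min": min(samples),
        "max": max(samples),
        "mean": round(mean(samples), 2),
    }

# 60 raw temperature samples (one per second, values cycling 20.0..20.4)...
raw = [20.0 + 0.1 * (i % 5) for i in range(60)]
# ...become a single small payload per minute.
payload = summarise_window(raw)
print(payload)
```

Sixty raw readings shrink to one dictionary of four numbers, which is the kind of reduction that makes constrained connectivity (LPWAN, cellular) viable.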

Cloud computing is, by definition, remote from the product. It offers virtually unlimited computing power, but it requires high-bandwidth connectivity in order to retrieve data that has undergone little or no processing.

Hybrid solutions also exist, such as in the automotive industry, where cloud-like computing capacity is embedded in the vehicle itself; this approach is especially relevant in the early design phase, when a very large amount of data is available.

The choice of location

Deciding where to locate each link in the chain should be based on the following four main elements:

  • The energy consumed, and therefore the constraints on its storage (battery, energy harvesting…).
  • The amount of data to be carried, which constrains the choice of connectivity.
  • The tariff and subscription options for getting the data to where it will be valued.
  • Data security issues (confidentiality, integrity and availability).
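How the first two elements interact can be made concrete with a back-of-the-envelope sketch. All figures below are illustrative assumptions (sample sizes, rates and payload sizes are invented for the example, not vendor data); the point is simply to compare daily upload volume for a raw cloud stream versus an edge-summarised one:

```python
# Sketch with assumed, illustrative figures: daily upload volume for the
# same sensor stream, sent raw to the cloud vs. summarised at the edge.

SAMPLE_BYTES = 12        # assumed size of one raw sample on the wire
SAMPLES_PER_SEC = 10     # assumed sampling rate
SUMMARY_BYTES = 64       # assumed size of one edge summary payload
SUMMARIES_PER_MIN = 1    # assumed summary rate after edge aggregation

def daily_bytes_raw():
    """Bytes uploaded per day when every raw sample goes to the cloud."""
    return SAMPLE_BYTES * SAMPLES_PER_SEC * 86_400  # seconds per day

def daily_bytes_edge():
    """Bytes uploaded per day when only edge summaries go to the cloud."""
    return SUMMARY_BYTES * SUMMARIES_PER_MIN * 1_440  # minutes per day

print(daily_bytes_raw())   # ~10.4 MB/day raw
print(daily_bytes_edge())  # ~92 kB/day after edge summarisation
```

A two-orders-of-magnitude gap like this is what drives the choice of connectivity (and its tariff), and, through radio time, much of the energy budget.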


Since the type of data and its processing are intrinsically linked to the business they serve, it is reasonable for the processing approach to evolve through two distinct phases, which do not necessarily rely on the same infrastructure.

A first phase, known as the learning phase, in which all the data is brought back for analysis and development. This belongs to the design stage, during which the business model around the data potentially becomes visible.

A second phase, known as the product life phase, in which only minor changes can be made. At this stage, major changes to the infrastructure are normally ruled out, as they would risk damaging the business model; only software updates are recommended.

The first phase can always be kept running in parallel with the second to support the smooth evolution of the products.

There are therefore several possible data processing chains. To carry out this task successfully, it is essential to take all the aspects mentioned above into account from the product design phase onwards, without overlooking a detailed analysis of each link in the value chain.

At Bosch Mondeville, our experts can support and advise you on the technical aspects of data processing.
