How to maximize the efficiency of IoT projects


Authors: Matt Gordon and Thom Denholm, Silicon Labs

 

Judging only by the latest microcontroller datasheets, a developer could easily conclude that efficient use of CPU resources, including memory and clock cycles, is a minor concern in current hardware design. The latest 32-bit MCUs offer flash and RAM allocations that were unheard of in the embedded space not long ago, and their CPUs run at speeds once reserved for desktops. Anyone who has recently developed an IoT product knows, however, that these hardware advances did not happen in a vacuum: they came in response to dramatic changes in end-user expectations and design requirements. Now more than ever, developers need to make sure their software runs at maximum efficiency and that their own time is used effectively.

 

The software running in a modern embedded system tends to come from a variety of sources. Code written by application developers is often combined with off-the-shelf software components from RTOS (real-time operating system) providers, which may in turn incorporate driver code originally supplied by semiconductor companies. Every piece of this code can be written with efficiency in mind, but this article focuses on the efficiency of the off-the-shelf components. In particular, two such components serve as the basis for the review of resource efficiency given here: the real-time kernel and the transactional file system.

Real-time kernel: the core of an efficient system

The real-time kernel is at the heart of the software running on many of today's embedded systems. Simply put, the kernel is a scheduler: developers writing application code for a kernel-based system divide that code into tasks, and the kernel is responsible for scheduling the tasks. The kernel is thus an alternative to the infinite loop in main() that often serves as the primary scheduling mechanism in bare-metal embedded systems. Using a real-time kernel brings important benefits, including increased efficiency. Developers who base their application code on a kernel can optimize the use of their system's processor resources while making more efficient use of their own time. Not all kernels are created equal, however, so simply deciding to use one in a new project does not guarantee a gain in efficiency.

 

Scheduling is a key area in which kernels differ, and one where the efficiency of CPU usage can vary greatly. By providing an intelligent scheduling mechanism that allows tasks to run in response to events, a kernel can improve on the efficiency of an infinite loop, in which tasks (in other words, functions) execute in a fixed order. Exactly how efficient a kernel-based application is depends in part on how the scheduler is implemented. A kernel's scheduler, which is simply the code responsible for determining when each task runs, is ultimately overhead, and that overhead must not erode the benefits of moving away from a bare-metal design.

 

Typically, scheduling in a real-time kernel is priority-based, meaning that application developers assign priorities (usually numbers) to their tasks and the kernel favors the higher-priority tasks when making scheduling decisions. Under this mechanism, the kernel must maintain some type of data structure that tracks the priorities of the application's tasks and the current state of each task.

 

An example, taken from Micrium's μC/OS-II kernel, is shown in Figure 1. OSRdyTbl[] is an eight-element array (eight bits per element) in which each bit represents a different task priority: the least significant bit of the first element corresponds to the highest priority, and the most significant bit of the last element to the lowest. The bit values reflect task status: a task whose priority bit is 1 is ready, and a task whose bit is 0 is not. Accompanying OSRdyTbl[] in the μC/OS-II scheduler is the single eight-bit variable shown in the figure, OSRdyGrp.

Figure 1: In the μC/OS-II scheduler, each task priority is represented by a bit in an array

 

Each bit in this variable represents an entire row, or element, of the array: a 1 indicates that the corresponding row has at least one ready task, and a 0 that none of the row's tasks is ready. By first scanning OSRdyGrp and then scanning OSRdyTbl[] using the code shown in Listing 1, μC/OS-II can determine the highest-priority ready task at any given time. As the listing shows, this operation is highly efficient, requiring only two lines of C code. Of course, compact, efficient code is just one of the qualities developers look for in a kernel. Since most new MCUs provide more flash than RAM, it is important for developers to consider the data side of a kernel's footprint as well. In a kernel's scheduler, bloated RAM usage amounts to excessive overhead, which diminishes the benefits normally associated with multitasking application code.

 

A kernel can allocate the basic resources required for multitasking in two ways: responsibility for allocating these resources can be left to the application code, or the kernel itself can handle the allocation. Certain variables and data structures exist in any kernel because they are critical to its multitasking services, and these are allocated entirely within the kernel. For data structures such as the TCBs (task control blocks) that record the status of each task, or the stacks that store CPU register values during a context switch, however, a kernel provider can choose between allocating internally and relying on the application code.

 

Either approach can yield an efficient kernel if it is implemented with flexibility as a goal. Deferring resource allocation to the application code probably offers developers the greatest flexibility, because it leaves open the choice between static and dynamic allocation. Micrium's μC/OS-III takes this approach, letting application developers decide how best to allocate their TCBs and stacks. However, as the TCB implementation in Micrium's μC/OS-II shows, having the kernel perform the allocation can be an equally efficient method, as long as there is a way to configure the amount of resources allocated. Ultimately, application developers need a way to keep unused resources out of the system's memory space.

File system efficiency

Most devices need the option to store data and log events, whether as a temporary staging area before transfer to the cloud or for longer-term retention on the device. Any code designed for this purpose is a file system, whether the developer writes and tests it or it is supplied as part of an RTOS solution. The file system, too, offers efficiency options, ranging from the simple (how many memory buffers to reserve) to the complex (whether to support full POSIX operations).

 

Developers should start with their data storage requirements. Is the data operated on in place, or simply stored and transferred later? How much will be measured? Should the data be kept separate or combined? Is it collected on the device before being processed, or sent straight to the cloud? How reliable is the storage medium? Must the design be completely immune to power failure? As a first option, some RTOSs provide a FAT-like file system, whose I/O code uses a standard media format, including folders and files. In general, such a file system offers limited customization and rarely protects against data loss in the event of power failure.

 

Another option is Datalight's Reliance Edge, which uses transaction points to guarantee data integrity, and which illustrates how design flexibility can improve efficiency. Reliance Edge allows its storage options to be customized. In the minimal use case, called "file system essentials," there are no folders or even file names; data is stored in numbered inodes. The number of these locations is determined at compile time, but their sizes are not predetermined: one "file" can hold more data than another, and the storage medium is full only when the total size of the "files" reaches a threshold. Files can be freely truncated, read, and written.

 

By contrast, a FAT-format file system dedicates media blocks to two file allocation tables. Each user data file is assigned a file name and metadata, and the former can be quite large when long file names are supported. If subfolders are used, their metadata and long file names take up space as well. All of this leaves less free space on the storage medium for the user data being collected.

 

For larger designs, Reliance Edge provides a more POSIX-like environment in which file names, folders, and file system metadata (such as attributes and date and time) are configurable options. This can be a good choice for applications that expect to port a POSIX interface from another design. Ultimately, the choice of file system should be tied directly to the requirements of the use case; that is by far the most resource-effective approach.

Comprehensive consideration of efficiency

Beyond resource usage, efficiency of another kind has long been a top priority for developers purchasing kernels, file systems, and other software modules. The usual justification for adopting such a module is that writing equivalent code from scratch is a poor use of time. In other words, application developers use their time most effectively by writing applications, not tens of thousands of lines of infrastructure code.

 

However, just as using a kernel or file system does not by itself guarantee efficient use of CPU resources, the decision to bring these modules into a new project does not automatically ensure that developers use their time efficiently. For developers to truly focus on application-level code, an embedded software module must have an intuitive interface, and that interface must be fully documented. In the absence of useful documentation, developers can spend weeks struggling with problems caused by misused functions.

 

Unfortunately, even well-documented code can waste development time if the documented functionality is not reliably implemented. That is why, in addition to requiring complete documentation, developers should seek evidence of reliability, such as past certifications or test results, when selecting software for a new project. Practically every software module is called reliable in its marketing literature, but only some modules come with proof that they behave as claimed.

 

For example, Datalight's Reliance Edge ships with source code for a variety of tests, allowing application developers to verify that the file system runs reliably in their particular development environment.

Example: developing an efficient IoT medical device

What might the development environment of an IoT project look like? Given the rapid growth in connectivity requirements for embedded devices, no single combination of hardware, software, and toolchain can define the range, and finding one end product that fully represents the possibilities of the Internet of Things is equally challenging. Nevertheless, the discussion benefits from a concrete example. One product that illustrates many of the challenges IoT developers face is a device that only a few years ago would not have been considered connected at all: the blood glucose meter.

 

One defining characteristic of this product is its market: blood glucose meters are produced in the millions each year and are often sold below cost, or even given away. There is therefore great pressure to reduce BOM cost and minimize development time. Yet developing these devices is not easy; the feature list for a new meter may include a color display, data logging, and cloud connectivity. Faced with such a list of requirements, a team developing a blood glucose meter will certainly want to take advantage of a kernel's multitasking capabilities. Optimizing the kernel's memory footprint is likely to be one of the team's primary concerns, since the high-volume, low-cost MCUs typical of such products have limited flash and RAM. A key step in reducing the footprint is to remove any kernel resources (such as TCBs) not needed by the application code. Eliminating waste in the stacks the kernel maintains for the application's tasks also helps.

 

Tools such as Micrium's μC/Probe, shown in Figure 3, can be used toward this goal. μC/Probe provides insight into the stack and heap usage of a kernel-based application, enabling developers to easily identify and correct inefficiencies. When implementing the meter's data logging function, the development team will likewise benefit from file system functionality. Here, as with the kernel, using an off-the-shelf software module relieves the team of the burden of developing infrastructure code, helping to achieve a much shorter, more cost-effective development cycle. Because overall processor resource constraints inevitably weigh on the data logging code as well, an efficient transactional file system is ideal.

Figure 2: FAT file system compared with Reliance Edge (Source: Datalight)

 

With a file system solution such as Reliance Edge, a development team can easily minimize the file system's footprint, leaving as much storage as possible for the application's data.

Figure 3: μC/Probe provides runtime access to system data, including kernel statistics (Source: Micrium)

Conclusion

While every embedded system has unique requirements, the methods used to maximize the meter's efficiency can readily be applied to other types of equipment. Component reuse has long been recognized as a software development best practice, and much of the infrastructure code required for a blood glucose meter, including the real-time kernel and file system, can serve as the basis for other devices with minimal changes beyond some low-level code. By selecting high-quality off-the-shelf components as the foundation of a project, a development team can ensure that both its own resources and the embedded hardware are used effectively, and can focus on writing the innovative application code that differentiates its design in a crowded product landscape. The dawn of Internet of Things innovation has arrived.

