Smarter electronic devices are tackling greater workloads and more complex tasks with the aid of artificial intelligence (AI). AI analyzes data to recognize patterns and anomalies, helping us make better-informed decisions. Just as improvements in the quality and usefulness of AI have driven wider adoption, advances in processing and data management have brought AI into Edge Computing solutions, where data is processed as close as possible to the source that produced it to avoid delays.
Living on the “Edge” can be harrowing for electronics — think city streetlights, oil platforms, big rig trucks, or even a battlefield. Rather than the comfort of a perfectly climate-controlled server room, the places where data is collected and used can be scorching hot, freezing cold, or subject to shock and vibration. Fortunately, Benchmark’s design engineers bring the seasoned, multidisciplinary experience required to develop custom electronics that go beyond what is possible with commercial-off-the-shelf (COTS) devices and take full advantage of AI at the Edge.
AI Simplified — Training and Inference
In basic terms, AI refers to “teaching” an electronic device just enough about an application that it can use what it learns to “infer” a possible solution or provide a logical answer. We see examples of AI every day when we text or run an internet search, as our mobile phones and computers try to predict what we are going to type next.
Another easy-to-understand example is airport security scanners. If we want a security scanner to identify a weapon, it must be “trained” with many images of many different types of weaponry, including images of those weapons in many orientations. To improve precision, the AI also needs to be trained on objects that resemble weapons but are not (e.g., a cordless drill or a hairdryer). The more data we use to train the algorithm, the more accurate it becomes. Once the system is trained, the AI computer can process a scanned image and search it for dangerous items. This identification step is typically referred to as “inference,” and its accuracy depends on how well the AI computer has been trained.
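The two phases can be illustrated with a small, hypothetical sketch. Assuming PyTorch is available, and using synthetic feature vectors in place of real scan images (the model, labeling rule, and sizes are all illustrative, not a production design), a minimal training-then-inference flow looks roughly like this:

```python
# Minimal sketch (assumptions: PyTorch available; synthetic feature vectors
# stand in for pre-extracted scan-image features; label 1 = "weapon-like").
import torch
import torch.nn as nn

torch.manual_seed(0)

# --- Training phase: learn from many labeled examples -------------------
features = torch.randn(1000, 64)               # 1000 synthetic training examples
labels = (features[:, 0] > 0).long()           # toy labeling rule for illustration

model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for _ in range(20):                            # the compute-heavy step, done offline
    optimizer.zero_grad()
    loss = loss_fn(model(features), labels)
    loss.backward()
    optimizer.step()

# --- Inference phase: fast, single-example prediction at the Edge -------
model.eval()
with torch.no_grad():
    new_scan = torch.randn(1, 64)              # features from one incoming scan
    flagged = model(new_scan).argmax(dim=1).item()
    print("Flag for secondary screening" if flagged else "Clear")
```

The training loop touches the entire dataset many times, while the inference step handles one new input at a time; that asymmetry is what makes it practical to push only the inference half out to the Edge.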
As you can imagine, training an AI system requires enormous amounts of data and computing power and can take weeks or even months, depending on the application. The inference portion of AI requires much less processing power but demands extremely low latency when executing the trained models. Importantly, it is this inference stage that can be implemented at the Edge (or point) of the application.
CPUs, DSPs, GPUs, and AI Processors
General-purpose central processing units (CPUs) have been solving problems for a long time. For many years, faster and more powerful CPUs were the answer to harder and larger problems. In the 1980s, NEC, Texas Instruments, and others modified the general CPU architecture, optimized it for digital-signal-processing algorithms, and introduced digital signal processors (DSPs). These devices have dedicated, hardwired multiply-accumulate functions that make them efficient for audio signals, speech processing, sound navigation and ranging (SONAR), voice recognition, and more.
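To see what that multiply-accumulate (MAC) operation does, here is a minimal sketch of a finite impulse response (FIR) filter, a workhorse DSP algorithm. It assumes NumPy and uses illustrative coefficients; the inner loop is exactly the repeated multiply-and-add that DSPs implement in dedicated hardware:

```python
# Minimal FIR-filter sketch (assumptions: NumPy available; 4-tap moving-average
# coefficients are illustrative only).
import numpy as np

def fir_filter(signal, coeffs):
    """Filter a signal by repeated multiply-accumulate (MAC) operations."""
    taps = len(coeffs)
    padded = np.concatenate([np.zeros(taps - 1), signal])  # zero history at start
    out = np.zeros(len(signal))
    for n in range(len(signal)):
        acc = 0.0
        for k in range(taps):                 # one multiply-accumulate per tap
            acc += coeffs[k] * padded[n + taps - 1 - k]
        out[n] = acc
    return out

# Example: smooth a noisy tone with a 4-tap moving-average filter
t = np.linspace(0, 1, 100)
noisy = np.sin(2 * np.pi * 5 * t) + 0.2 * np.random.randn(100)
smoothed = fir_filter(noisy, np.array([0.25, 0.25, 0.25, 0.25]))
```

A DSP performs each of those multiply-and-add steps in a single hardware operation, which is why it handles signal-processing workloads so much more efficiently than a general-purpose CPU running the same loop.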
A similar scenario, driven by the gaming industry, resulted in the development of graphics processing units (GPUs). The multicore architecture of GPUs was originally developed to accelerate and render 3D graphics for gaming and video editing. GPUs have since evolved, and their processing power is now used in many applications, including the training of AI systems.
AI-specific processors are being developed to optimize both training and inference, but AI at the Edge requires processors optimized for inference and low latency, which means emphasizing power efficiency and real-time processing. AI processors significantly accelerate the identification, predictions, and calculations required by AI algorithms through parallel processing, and they are also optimized for the programming languages specific to AI applications.
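A rough sense of why parallel execution matters can be seen even on a general-purpose CPU. In this minimal sketch, NumPy's vectorized routines stand in for the parallel multiply-accumulate arrays inside an AI accelerator; the sizes and timings are illustrative only, but processing a whole batch in one fused operation is consistently faster than looping over inputs one at a time:

```python
# Minimal sketch: batched (parallel-style) math vs. one-at-a-time processing.
# Assumptions: NumPy available; a single 256x256 weight matrix stands in for
# a network layer; timings are illustrative, not benchmarks.
import time
import numpy as np

weights = np.random.randn(256, 256).astype(np.float32)
activations = np.random.randn(1000, 256).astype(np.float32)

# Sequential: one input vector at a time (how a simple scalar loop behaves)
start = time.perf_counter()
seq = np.array([activations[i] @ weights for i in range(len(activations))])
t_seq = time.perf_counter() - start

# Batched: the whole workload in one matrix multiply
start = time.perf_counter()
par = activations @ weights
t_par = time.perf_counter() - start

print(f"sequential {t_seq * 1000:.1f} ms vs batched {t_par * 1000:.1f} ms, "
      f"same result: {np.allclose(seq, par)}")
```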
Companies with Edge AI solutions include well-known names such as NVIDIA, Intel, and AMD, as well as lesser-known ones such as Hailo Technologies, SiMa.ai, AlphaICs, EdgeCortix, Expedera, and Untether AI.
The Growing Adoption of AI at the Edge
Adding AI to an Edge application creates the opportunity for advanced data processing at the location of the device. AI can be added to assist a design with standard applications (e.g., image/pattern detection or proximity detection). More often, however, a custom application of AI arms an emerging electronic product with a competitive advantage, especially when operating at the Edge. AI enables sensor fusion, the combining of data from multiple sources, to coordinate a function or response. In autonomous vehicles, for example, AI combines data from vehicle-to-vehicle (V2V) communications with radar and lidar proximity sensors to provide low-latency, 360-degree awareness of surrounding traffic and pedestrians at intersections, with the goal of minimizing accidents.
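A minimal sketch of that idea follows, with hypothetical radar and lidar distance readings and a V2V hazard flag; the confidence-weighted fusion rule and the 15-meter threshold are illustrative stand-ins, not a production driving algorithm:

```python
# Minimal sensor-fusion sketch (assumptions: hypothetical radar/lidar distance
# readings in meters with per-sensor confidence, plus a V2V hazard flag; the
# fusion rule and threshold are illustrative only).
from dataclasses import dataclass

@dataclass
class SensorReading:
    distance_m: float    # estimated distance to the nearest object
    confidence: float    # sensor confidence, 0.0 to 1.0

def fuse_distance(radar: SensorReading, lidar: SensorReading) -> float:
    """Confidence-weighted average of two independent distance estimates."""
    total = radar.confidence + lidar.confidence
    return (radar.distance_m * radar.confidence
            + lidar.distance_m * lidar.confidence) / total

def should_brake(radar, lidar, v2v_hazard: bool, threshold_m: float = 15.0) -> bool:
    """Trigger if the fused distance is short or a V2V hazard message arrives."""
    return v2v_hazard or fuse_distance(radar, lidar) < threshold_m

# Example: lidar is more confident at short range than radar
print(should_brake(SensorReading(18.0, 0.6), SensorReading(12.0, 0.9), v2v_hazard=False))
```

Running the whole decision on the vehicle itself, rather than in a remote data center, is what keeps the response latency low enough to matter at an intersection.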
AI at the Edge holds great promise across many application areas, including consumer, complex industrial, and military environments. As Fifth Generation (5G) cellular coverage expands both physically and electronically (in terms of infrastructure and frequencies in use), AI plays an increasingly critical role in helping 5G communications networks keep pace with the growing number of users. Voice recognition that matches a caller’s spoken request to a recipient’s name is one example of a 5G network using AI to speed and simplify the calling process.
Industrial factory and warehousing equipment is being designed with AI coordinating multiple sensors — including lidar and radar — to enable greater autonomy and improved worker safety. Construction sites are also steadily increasing their use of AI-driven heavy equipment for autonomous, remote operation that helps meet ever-evolving safety regulations. The growing adoption of unmanned aerial systems (UAS) in military applications is likewise driving the use of AI and machine learning (ML) within drones, especially those used for intelligence, surveillance, and reconnaissance (ISR) functions. This, in turn, is prompting the rapid growth of counter-UAS equipment as a defense against spying and even weapons-bearing drones.
Applying AI That Matters
One of the greatest needs for AI technology and Edge Computing will come with the rapid growth of Internet of Things (IoT) devices. IoT devices can connect, communicate, and automate processes across industries. And, when teamed with AI and ML technologies, these specialized devices can provide decision-making capabilities as part of an automated operation.
While general-purpose AI-driven IoT devices can be constructed with COTS equipment, more specialized IoT devices require customized AI approaches capable of targeted, function-specific solutions. Customized AI circuitry is important where timing and processing latency are critical. For electronic product developers seeking AI advantages, Benchmark’s engineers start by considering all aspects of hardware design that can impact signal-processing speed. To provide intelligent AI solutions that also meet reduced size, weight, and power (SWaP) requirements, our engineers often develop mixed-technology modules that contain analog, digital, and RF technologies and, in some cases, photonics technology as part of the module. The use environment also plays a role in determining the design, as thermal management or ruggedization may be required for use in outdoor applications.
While AI processing speed can be vital for battlefield applications — such as in UAS and unmanned ground vehicles (UGVs) — it is also important for industrial IoT (IIoT) applications, where it can speed up process control loops. Fast, predictable processing at the Edge avoids round trips to cloud-based applications and the resulting delays in production cycles that depend on high AI processing speeds.
Hardware Design and In-house Manufacturing
The success of any Edge AI-enhanced design depends on the effectiveness of the models and the data supporting the device. Excessive latency is eliminated by processing the data within the device itself rather than in a remote, offsite server. This design approach increases a device’s onboard memory requirement but reduces signal-processing latency and simplifies timing challenges. AI at the Edge has proven successful for industrial applications, including sensor fusion of multiple signals from factory-floor robots and the latest advanced communications systems.
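One simple way to see the benefit is to time an inference on the device itself. The sketch below is illustrative only (it assumes PyTorch is available on the Edge device and uses a small stand-in model); the point is that the measured time excludes any network round trip to a remote server:

```python
# Minimal latency sketch (assumptions: PyTorch on the Edge device; a small
# illustrative model stands in for the deployed inference model).
import time
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2)).eval()
sample = torch.randn(1, 64)                     # one incoming sensor reading

with torch.no_grad():
    start = time.perf_counter()
    _ = model(sample)                           # processed locally, no network hop
    elapsed_ms = (time.perf_counter() - start) * 1000

print(f"On-device inference latency: {elapsed_ms:.2f} ms")
```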
By focusing on the hardware design and devices used in implementing AI for an IoT device operating on the Edge, Benchmark’s design engineers can help you create AI-enhanced designs for your customers. Our software engineers support your designs by developing the necessary software and firmware for IoT device applications.
Benchmark is at the forefront of this transformative era, specializing in custom electronics projects where designing for excellence is our hallmark. If you are developing an electronic device with unique interfaces — whether for data collection, camera interfaces, or other specialized capabilities — Benchmark is your trusted partner. Reach out to Benchmark to learn more about our expertise and experience in designing and building specialized hardware for your AI applications, especially for Edge Computing applications where every second counts.