Enhancing Mobile Medical Application Security

by Jeridiah Welti / February 22, 2024

Exploring the realm of mobile medical applications, we've dissected risks, explored common pitfalls, and discussed best practices in previous discussions. This concluding segment of our three-part blog series delves into the vital aspects of testing for security, along with the tools that fortify the integrity of mobile medical applications. That said, a wide variety of tools and testing techniques are appropriate for mobile applications during the various phases of development, and many are the same as those used in any software development. Let's take a closer look.

Security Integration throughout the Development Phase

In the intricate dance of mobile app development, security design isn't a mere afterthought—it's a dynamic component embedded in the entire development lifecycle. The challenge lies in the ever-changing cybersecurity landscape since the environment within which the product exists is not static. Because new threats continuously emerge, what is considered secure today might be vulnerable tomorrow. Comprehensive security strategies necessitate a continuous commitment from project initiation to post-launch vigilance.

Testing Tools and Their Phases—A Holistic Overview

Before we explore testing phases, let's glance at the array of tools that safeguard mobile applications:

  • Software Composition Analysis (SCA)
  • Static Application Security Testing (SAST)
  • Dynamic Application Security Testing (DAST)
  • Fuzz Testing
  • Sniffers
  • Vulnerability Scanners

These tools find their roles in different phases of development, namely planning and design, implementation, and testing. While these phases are largely self-explanatory, here is a little further context for clarity:

Planning and Design Phase—Setting the Security Foundation

Critical decisions mold the application's fate during the planning and design phase. Choices range from operating system (OS) selection (Android or iOS) to protocol versions (Bluetooth, Wi-Fi, or near-field communication (NFC)). The challenge is to design for current needs while keeping foresight for future features. During this phase, it is essential to review the planned hardware and software and determine what is necessary for the end goal's viability. While a future-proof design that can accommodate additional features is vital, limiting the current design's footprint is equally essential, because a smaller footprint means fewer attack vectors to analyze and defend.

If the product only uses Bluetooth Low Energy (BLE) for communications, providing an entire Wi-Fi stack is unnecessary and adds additional points of vulnerability. Removing excess technology is always the best path when possible: not populating transceivers on a custom board, removing unused drivers, or eliminating unused software calls into libraries. This has the bonus of reducing the complexity of the software and hardware package(s) and making overall testing easier.
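As an illustration of footprint trimming, many embedded platforms let unused stacks be disabled at build time. The fragment below follows the style of Zephyr RTOS Kconfig options; the option names follow Zephyr's conventions, but treat this as a hypothetical sketch rather than a complete configuration:

```
# Hypothetical prj.conf fragment: keep BLE, compile out everything else
CONFIG_BT=y                  # Bluetooth Low Energy is the only radio needed
CONFIG_BT_PERIPHERAL=y       # device acts as a BLE peripheral
CONFIG_WIFI=n                # no Wi-Fi stack compiled in
CONFIG_USB_DEVICE_STACK=n    # no USB device support compiled in
```

Every stack left out of the build is one less interface to secure, test, and patch.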

ProdSecDesigner, a tool provided by Apraciti LLC, is one option that can help with this. It supports managing cybersecurity risks for medical devices by generating threat maps based on design descriptions, helping teams refine risk tolerance and prioritize essential design features. By reviewing the tool's output, the design team can determine what risk is acceptable and which design features are necessary or worth considering for removal.

Implementation Phase—Three Pillars of Security Testing

In the implementation phase, the development team takes the requirements determined in the previous phase and realizes the project. It is in this phase that three essential tools play pivotal roles:


Static Application Security Testing (SAST)

SAST, as the name suggests, is static: it examines the syntax of the code and tries to identify security vulnerabilities. SAST analyzes the code for potential issues such as buffer overflows and structured query language (SQL) injection opportunities, looking at coding patterns and recognizing implementations that lead to vulnerabilities later on. Many development environments have real-time syntax checkers that provide developers with this kind of feedback as they write their code.

If development uses continuous integration and continuous deployment (CI/CD) methods, there will be a gate before the software can be integrated: the code is analyzed for issues and returned to the developer if it does not meet the standards set. In general terms, think of static analysis as scrutinizing code syntax to identify vulnerabilities early in development, with real-time syntax checkers and CI/CD gates making that feedback loop more efficient.
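To make the pattern concrete, here is a minimal Python sketch of the kind of issue a SAST tool flags: user input concatenated into a SQL statement, shown alongside the parameterized form a scanner would accept. This is illustrative code, not the output of any particular tool:

```python
import sqlite3

def find_user_unsafe(conn, username):
    # The pattern SAST flags: user input concatenated into a SQL statement
    cur = conn.execute(f"SELECT id FROM users WHERE name = '{username}'")
    return cur.fetchall()

def find_user_safe(conn, username):
    # Parameterized query: the driver handles escaping, so injection fails
    cur = conn.execute("SELECT id FROM users WHERE name = ?", (username,))
    return cur.fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

payload = "x' OR '1'='1"  # classic injection payload
print(len(find_user_unsafe(conn, payload)))  # matches every row: 2
print(len(find_user_safe(conn, payload)))    # matches nothing: 0
```

The injected condition is always true, so the unsafe version leaks the whole table; a static analyzer can catch this from the code's shape alone, without ever running it.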


Dynamic Application Security Testing (DAST)

Related to SAST is DAST, sometimes referred to as black-box fuzzing. DAST is the dynamic analysis of the application, and it interacts with the product in an entirely different manner. Where SAST has full access to the source code and visibility into the inner workings, DAST works from the outside and interacts with the running application (whether compiled or interpreted). DAST analyzes the application's susceptibility to typical user interactions and abuses of the system.

Setting up a DAST analysis is more involved than SAST (which can almost be considered a passive activity): the interfaces under test must be identified and connected to the test suite. That said, some DAST suites are effortless to use and can quickly test the application simply by being pointed at it, providing feedback from multiple iterations of testing in a short amount of time. In short, think of DAST as a dynamic analysis that engages the running application externally, assessing its susceptibility to user interactions and system abuses.
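A toy black-box probe can illustrate the DAST mindset. The sketch below stands up a trivial local HTTP service (a stand-in for the application under test) and probes it with unexpected paths, knowing nothing about its internals. Real DAST suites do far more, but the external perspective is the same; everything here is hypothetical:

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class DemoHandler(BaseHTTPRequestHandler):
    """Stand-in for the application under test: /status is the only endpoint."""
    def do_GET(self):
        self.send_response(200 if self.path == "/status" else 404)
        self.end_headers()
    def log_message(self, *args):
        pass  # silence per-request logging

server = HTTPServer(("127.0.0.1", 0), DemoHandler)  # port 0: OS picks a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Black-box probing: no knowledge of the code, only the external interface
probes = ["/status", "/admin", "/../../etc/passwd", "/status?id='OR'1'='1"]
results = {}
for path in probes:
    conn = http.client.HTTPConnection("127.0.0.1", server.server_port, timeout=5)
    conn.request("GET", path)
    results[path] = conn.getresponse().status
    conn.close()

server.shutdown()
print(results)  # every probe answered; only /status should return 200
```

A real scan would look for crashes, error leaks, and unexpected 200s on paths like these; here the important thing is that the tester sees only requests and responses.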


Software Composition Analysis (SCA)

Distinct from SAST and DAST is SCA. SCA examines the content of the software and looks for known associated vulnerabilities. Many of these tools reference databases such as the Common Vulnerabilities and Exposures (CVE) list and the National Institute of Standards and Technology's (NIST) National Vulnerability Database (NVD) to determine whether known vulnerabilities exist in the present code base. Each identified risk has a score associated with it, and it is up to the product owner to determine what score is acceptable. These could be vulnerabilities noted in a particular version of a library in the code base, an included open-source module, or even the system's OS.

Similar to the planning phase, this feedback needs to be reviewed to determine whether a risk is present and to create a reasonable mitigation strategy based on the product's objectives. For example, an unreferenced library can be flagged and removed from the code base, even though it would likely be optimized out of the final compilation anyway. Other possibilities include:

  • a newer library version that is available, prompting a package upgrade.
  • a vulnerability that exists in the code base but is not a concern given how the system uses the affected component.

Consider SCA as assigning risk scores, prompting product owners to assess and mitigate identified risks based on objectives. Although these tools are essential at specific project phases, they will all be continuously utilized throughout the implementation of the project since each plays a different but essential role in the software development lifecycle (SDLC).
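Conceptually, an SCA check reduces to comparing an inventory of components against an advisory database and scoring the matches. The Python sketch below uses entirely hypothetical component names, versions, and scores to show the shape of that comparison:

```python
# Hypothetical dependency manifest and advisory data, for illustration only;
# real SCA tools pull advisories from databases such as the CVE list and NVD.
manifest = {"libssl": "1.0.2", "zlib": "1.2.13", "jsonlib": "2.1.0"}

# Each advisory: (component, first fixed version, severity score 0-10)
advisories = [
    ("libssl", (1, 1, 1), 9.8),   # versions below 1.1.1 are affected
    ("jsonlib", (2, 0, 0), 5.3),  # versions below 2.0.0 are affected
]

def parse(version):
    """Turn '1.0.2' into (1, 0, 2) so versions compare numerically."""
    return tuple(int(part) for part in version.split("."))

RISK_THRESHOLD = 7.0  # the product owner decides what score is acceptable

findings = []
for name, fixed_in, score in advisories:
    installed = manifest.get(name)
    if installed and parse(installed) < fixed_in:
        findings.append((name, installed, score, score >= RISK_THRESHOLD))

print(findings)  # libssl 1.0.2 is flagged and exceeds the threshold
```

The scoring-and-threshold step is the part that cannot be automated away: someone accountable for the product has to decide which findings demand mitigation.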

Testing Phase—Verifying Security Measures

The testing phase shifts the perspective from white-box testing (where the tool can see some of the application's inner workings and evaluate based on that knowledge) to black-box testing, or a mix of the two, depending on risk thresholds. In other words, the toolset changes perspective as we move from implementation to testing. While testing will be highly dependent on the system's architecture, it will likely include a vulnerability scanner, a fuzz tester, and a sniffer:

Vulnerability Scanners

A vulnerability scanner is a tool that evaluates a system to identify the running OS, installed drivers, and available network connections. This information can then be used to look for potential vulnerabilities based on the versions and types found, and a tester may take this further and attempt to exploit such vulnerabilities.
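A minimal sketch of the scanning idea: probe a host for listening TCP ports, which a scanner would then map to service versions and known vulnerabilities. The example below is a self-contained, hypothetical illustration that scans only a loopback listener it creates itself:

```python
import socket

# Stand-in "device": one open TCP port on loopback, created just for the demo
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(5)
open_port = listener.getsockname()[1]

def scan(host, ports, timeout=0.25):
    """Report which of the given TCP ports accept a connection."""
    found = []
    for port in ports:
        with socket.socket() as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                found.append(port)
    return found

# Scan the known-open port plus two neighbors that should be closed
results = scan("127.0.0.1", [open_port, open_port + 1, open_port + 2])
print(results)
listener.close()
```

Real scanners go well beyond this, grabbing service banners and matching versions against vulnerability databases, but port discovery is the first step.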

Fuzz Testing

A fuzz tester is a tool that interacts with an interface and provides a variety of malformed or unexpected inputs. Doing so helps determine whether the underlying system is resilient to such inputs, or whether they cause the system to do something unintended that a tester could then manipulate. These tools are generally automated, enabling many different inputs to be tested against the system. The specific input type will vary depending on the interface, but fuzzing can be applied to graphical user interface (GUI) input fields, universal serial bus (USB) interfaces, Bluetooth connections, and more.
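The core loop of a fuzz tester is simple: generate malformed inputs, feed them to the interface, and record anything other than a graceful rejection. The sketch below fuzzes a toy packet parser with a deliberately seeded bug; all of the code is hypothetical:

```python
import random

def parse_packet(data: bytes):
    """Toy parser: a 1-byte length prefix followed by that many payload bytes."""
    if not data:
        raise ValueError("empty packet")  # expected, graceful rejection
    length = data[0]
    payload = data[1:1 + length]
    # Seeded bug: trusts the length prefix instead of checking the real size,
    # so a short payload triggers an IndexError here
    checksum = payload[length - 1]
    return length, checksum

random.seed(0)  # deterministic run for the demo
crashes = []
for _ in range(1000):
    blob = bytes(random.randrange(256) for _ in range(random.randrange(8)))
    try:
        parse_packet(blob)
    except ValueError:
        pass                                  # graceful rejection is fine
    except Exception as exc:                  # anything else is a finding
        crashes.append((blob, type(exc).__name__))

print(f"{len(crashes)} crashing inputs found")
```

Because the parser trusts attacker-controlled data (the length byte), random inputs expose the flaw quickly; production fuzzers add coverage feedback and input mutation to find much subtler bugs.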


Sniffers

As the name suggests, a sniffer tool allows the tester to "sniff" (or monitor) a communications channel. Sniffers monitor:

  • wired communication channels (such as the Ethernet or a USB) 
  • wireless communication channels (such as BLE, Wi-Fi, or NFC)

This monitoring allows the tester to verify whether any sensitive information, such as unencrypted passwords or session IDs, can be gleaned from the data stream.
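Once a channel has been captured, analysis is often a matter of searching the raw bytes for sensitive patterns. The sketch below scans a simulated capture from an unencrypted channel for plaintext credentials and session identifiers; the captured bytes are fabricated for illustration:

```python
import re

# Simulated bytes captured from an unencrypted channel (illustrative only)
captured = (
    b"POST /login HTTP/1.1\r\nHost: device.local\r\n\r\n"
    b"user=admin&password=hunter2\r\n"
    b"HTTP/1.1 200 OK\r\nSet-Cookie: session_id=ab12cd34\r\n"
)

# Patterns a tester might grep a capture for
patterns = {
    "plaintext password": rb"password=([^&\s]+)",
    "session identifier": rb"session_id=([^;\s]+)",
}

findings = {}
for label, pattern in patterns.items():
    match = re.search(pattern, captured)
    if match:
        findings[label] = match.group(1).decode()

print(findings)  # both secrets are readable straight off the wire
```

Finding either pattern in a capture is strong evidence that the channel needs transport encryption; on a properly encrypted link, the same search would turn up nothing.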

Benchmark Helps You Deliver Safe and Resilient Mobile Medical Applications

In this journey through the intricacies of securing mobile medical applications, we've emphasized that robust security isn't a destination but a continuous expedition. By integrating security throughout the development lifecycle and employing targeted testing tools, developers can fortify their applications against evolving threats. This proactive approach ensures not only the validation of customer expectations but also the verification of the product's adherence to its secure design principles.

As healthcare technology advances, embracing a security-centric mindset remains paramount in delivering safe and resilient mobile medical applications. When cybersecurity matters for your mobile medical device, turn to Benchmark.



About the Author

Jeridiah Welti

Jeridiah is a Systems and Software Engineer based in Benchmark's Winona, Minnesota location. He serves as Benchmark's cybersecurity lead in the design engineering group, championing cybersecurity design practices and processes across the organization. In his tenure with Benchmark, he has led software development for multiple connected medical devices. He serves as an exam developer for the NCEES engineering and ISC2 cybersecurity licensing boards. Jeridiah maintains a CSSLP certification among several other professional certifications, holds a B.S. in Computer Engineering from Kettering University, and is currently pursuing an MS in Engineering Management from Arkansas State University.
