
CHAPTER 7:

DEVELOPMENT OF AN INTEGRATED COVID-19 TRACKING AND MONITORING SYSTEM

Kashangabuye Jordan Masirika

ORCID ID: 0000-0003-1967-4024

Cape Peninsula University of Technology

Department of Mechanical Engineering

Bellville

Cape Town

South Africa


ABSTRACT

During the COVID-19 pandemic, several policies were instituted to mitigate the spread of the virus and to support contact tracing, as the virus spreads through person-to-person and person-to-surface contact. One of these was the mandatory completion of a COVID-19 compliance form that captured the details of users of a premises before they were granted access. This project proposes an integrated COVID-19 monitoring and tracking system that fully automates the filing of these compliance forms, thus eliminating person-to-person and person-to-surface contact. The automated system uses machine learning algorithms for facial recognition and Natural Language Processing, developed in Python, to interact with the client through voice prompts.

Keywords: tracking system, machine learning, algorithm, monitoring system, COVID-19, pandemic, facial recognition, natural language processing, deep learning

INTRODUCTION

On the 7th of January 2020, SARS-CoV-2 was confirmed as the causative agent of the Coronavirus Disease 2019, or COVID-19 for short. Since then, the virus has spread to more than 90% of countries in the world, including South Africa (South African Department of Health, 2021). This situation forced various countries around the world to devise ways to correctly track and prevent infection. SARS is a life-threatening respiratory disease caused by the SARS coronavirus. It is an animal virus that can affect humans after crossing the species barrier. The first human cases of the disease are thought to have occurred in the Guangdong Province of China in November 2002 (European Centre for Disease Prevention and Control, 2021). However, the symptoms were only identified three months later. Following its discovery, the virus spread from individual to individual, largely through the inhalation of droplets.

The virus’ incubation phase lasted anywhere from three to ten days. A high fever would develop, followed by non-specific symptoms and, in many cases, diarrhoea (Cherry & Krogstad, 2004). Owing to the Chinese outbreak, the virus moved rapidly from China to other Asian countries. A small number of cases was also reported in various countries, including four in the United Kingdom, as well as a large outbreak in Canada. Fortunately, it was brought under control in July 2003, thanks to a programme that consisted of isolating individuals suspected of having the disease and screening all passengers flying from the afflicted nations for symptoms of illness. Another minor SARS outbreak was connected to a Chinese medical laboratory in 2004. It was assumed to have been caused by a person coming into close contact with a sample of the SARS virus, rather than transmission from animal to person or person to person (National Health Service, 2021).

LITERATURE REVIEW

This section highlights current COVID-19 tracking and monitoring systems used by various countries around the world. These will be compared and the strengths and weaknesses of each will be drawn out. The majority of countries use mobile tracking apps to record data that can be used for contact tracing in the event of a transmission.

i. Aarogya Setu (India)

Among the countries most affected by this pandemic, India is placed second with approximately 29 million cases reported (Worldometer, 2021). This gives an idea of the severity of the situation facing the Indian government. In response, it developed a tracking tool called Aarogya Setu. It is a mobile application developed by the Health Ministry to track infections and sensitise Indian citizens in order to flatten the curve. The app uses Bluetooth and GPS technologies to alert users when they are near a person infected with COVID-19 (Gupta, Bedi, Goyal, Wadhera & Verma, 2020). A self-assessment test feature was added that posed health- and symptom-related questions and, from the answers provided, determined the user's risk level, displayed as a colour code. This technology has room for Artificial Intelligence and Computer Vision to detect COVID-19 patients.

ii. Private Kit: Safe Paths (United States of America)

This application was developed by the Massachusetts Institute of Technology. It contains the same features as the Aarogya Setu application. Its main goal is to simplify contact-tracing procedures to effectively reduce the spread of the disease. It uses a GPS-based approach; but because this technology is difficult to anonymise, the developers are investigating specialised solutions based on encryption. The application gathers a user's location information by keeping a time-stamped log that uses little storage space, far less than what is needed for a single photo. The user's information is encrypted and never leaves their phone, with the purpose of preventing other users from recognising them. If a user tests positive for the virus, they can make use of a QR code to convey location information to public health researchers (Oliver et al., 2022).

The trend that can be seen with these two examples is that they focus on tracking the users themselves. This is an efficient way to track and immediately alert people who might be at risk, but it relies heavily on the assumption that people will actually register and use the application, even if it only runs in the background. An example of this can be seen with the South African COVID-19 tracking application, COVID Alert South Africa. South Africa currently has a total population of 58 million people, with 69% being active internet users, i.e., roughly 38 million people using the internet on their smartphones (The World Bank, 2021; Johnson, 2021). Despite these numbers, the application only has a reported one million downloads, which is less than 3% of active internet users (Google Play Store, 2021). Additionally, the use of mobile applications does not allow for temperature testing. This means that even if a user has their data on any of the various tracking applications, they are still required to queue to have their temperature taken. The following sections will provide an overview of the hardware and software components involved in providing the tracking and monitoring capabilities. These sections will cover specific aspects of the device, such as the system, mechanical, electrical, and algorithmic designs.

DESIGN

i. System Design

Figure 1. Overall system layout

Figure 1 shows the connections between the various parts of the system. Each of these will be studied to understand its use in the system. The camera and the microphone serve as the inputs, the seven-inch display as the output, while the storage houses the operating system (OS) and the power source supplies the whole system.

• Raspberry Pi 4

Figure 2. The Raspberry Pi 4 Model B

The Raspberry Pi 4 (Figure 2) is the latest version of the low-cost Raspberry Pi computer. It can be described as a credit-card-sized circuit board similar to those found in a computer, but much smaller. It can perform a surprising variety of tasks. For starters, Pi boards are used as media centres, file servers, vintage game consoles, routers, and network-level ad-blockers by amateur computer enthusiasts. That is, however, only a small sample of what is conceivable. People have used the Raspberry Pi to build tablets, laptops, phones, robots, and smart mirrors, to capture images at the edge of space, and to run experiments on the International Space Station, among other things. With the Pi 4’s increased speed, its ability to decode videos at up to 4K resolution, quicker storage via USB 3.0, and faster network connectivity via true Gigabit Ethernet, many new applications are now possible. Its latest model, the 4B, uses a quad-core Cortex-A72 processor with triple the performance of its predecessor, and is also the first Pi to offer dual 4K display output at 30 FPS, a bonus for creatives who need additional desktop space.

Despite being a miniature computer, the Raspberry Pi 4 still boasts a number of input and output ports that can be used for a wide variety of applications. Figure 5 shows the ports used for data transfer: the older-generation USB 2.0 ports, with speeds capping out at around 60 MB/s (480 Mbps); first-generation USB 3.0 ports, with data transfer speeds of up to 600 MB/s (5 Gbps); and a Gigabit Ethernet port that allows for faster network connectivity than a typical wireless connection.

It is also equipped with a pair of HDMI 2.0 ports, a USB Type-C power jack for power delivery, and a 3.5-millimetre jack that serves as an analogue audio/video-out port. In addition to these familiar ports found on the majority of computers, the Raspberry Pi 4 also delivers enormous flexibility by including less common ports: a Camera Serial Interface (CSI) for connecting proprietary or third-party camera modules, a Display Serial Interface (DSI) for connecting the proprietary seven-inch display, and a microSD card slot for adding storage to the system, as the board comes with no onboard storage of its own.

• Camera module

Figure 3. The camera module

The camera module (Figure 3) is a single-channel, eight-megapixel module supporting the CSI-2 bus interface. It can record full-HD video at a maximum frame rate of 30 FPS. This camera board connects to any Raspberry Pi or Compute Module, allowing high-definition video and still photographs to be captured. It uses Sony’s IMX219PQ image sensor, which provides high-speed video imaging and high sensitivity. Image contamination, such as fixed-pattern noise and smearing, is also reduced. A 15 cm ribbon cable is attached to the camera module and connects straight into the Raspberry Pi’s CSI connector. It boasts a number of features, namely a still picture resolution of 3280 x 2464, automatic 50/60 Hz luminance detection, 1080p video at 30 frames per second, and 720p video at 60 frames per second. It is prudent to note that the camera mentioned above is used only for testing purposes. Despite its great benefits, it is not made to be deployed for heavy usage. Figure 4 shows a camera more appropriate for such a task.

Figure 4. Logitech Brio 4K HDR

It is the Logitech Brio 4K HDR (Figure 4), a professional-grade web camera with a maximum resolution of 4K Ultra HD at 30 FPS, Logitech’s own RightLight™ 3 with HDR technology for accurate colour, and a 5x zoom for fine details. It has a diagonally adjustable field of view ranging from 65° to 90°. It is equipped with dual noise-reducing, omnidirectional microphones that provide premium audio performance, and it connects through USB 3.0 or USB Type-C.
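Either camera can be exercised with a short capture test before being integrated into the recognition pipeline. The following is a minimal sketch, assuming OpenCV is installed and the camera appears as the first video device (for the CSI module this typically requires the V4L2 driver); the requested resolution is illustrative.

    # Minimal capture test, assuming the camera appears as video device 0
    # (a USB webcam, or the CSI module exposed through the V4L2 driver).
    import cv2

    cap = cv2.VideoCapture(0)                 # open the first camera device
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)   # request a 720p stream
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)
    cap.set(cv2.CAP_PROP_FPS, 30)             # both cameras support 30 FPS here

    ok, frame = cap.read()                    # grab a single frame
    if ok:
        cv2.imwrite("test_frame.jpg", frame)  # save a still for inspection
    cap.release()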

• Display screen

The Raspberry Pi LCD Touch Display is a proprietary Raspberry Pi display that enables interaction between the user and Raspberry Pi OS. The 800 x 480 display connects to the board via an adapter board which handles power and signal conversion. Only two connections to the Pi are required: power from the Pi’s General-Purpose Input/Output (GPIO) port and a ribbon cable that connects to the DSI port. It also supports up to 10-finger touch and provides an on-screen keyboard, giving full functionality without a physical keyboard or mouse.

• Romoss™ 30000 mAh Power Bank

The Romoss™ 30000 mAh is a high-capacity power storage device that allows for the charging of multiple devices. Its lithium polymer batteries are safe and long-lasting. Fit Charge technology allows it to work with a variety of gadgets, including smartphones, tablets, and other electronic devices, while also preventing over-charging. Its ergonomic hand-held design conforms to its simplistic appearance and superior feel. Over-charge, over-current, reset, over-discharge, temperature, power-surge, anti-reverse, short-circuit, and RFI protection are among the 10 intelligent safety features present on the device. It features dual USB output ports; Lightning, micro-USB, and Type-C inputs; and is widely compatible with a range of devices. It is powered by a 30000 mAh (111 Wh) battery and takes an input of 5 V at 2.1 A, while outputting 5 V at 2.1 A or 5 V at 1 A. The connection of all previously mentioned modules is shown in Figure 5.

Figure 5. Connection between the previously mentioned modules

ii. Mechanical Design

The mechanical design is made to accommodate all the components listed previously. In Figure 6, the camera and screen slots can be seen. The screen will be inserted from the top, and the camera will be screwed in. The total height of the stand is based on the average height in South Africa, which was reported to be 166.7 cm by Business Tech in 2016.

Figure 6. 3D view of the stand

Figure 7 shows the various slots made for the power bank, the screen, and the Raspberry Pi, as well as the two materials constituting the stand: plexiglass and steel. The plexiglass is going to be bolted to the steel that will make up the frame of the stand.

The metal bars constitute the support for the entire stand. This support consists of two main parts: two 1700 x 120 x 25 mm steel bars that provide a frame for the stand, and a plate that slides into the bars to provide balance. Mild steel was chosen for this project due to its excellent properties: it has unparalleled weldability and machinability. It is a type of low-carbon steel (containing no more than about 0.25% carbon) whose small carbon content enhances the properties of the pure iron. Its outstanding characteristics have led to an increase in its use across a wide range of sectors. Among the various grades of mild steel available, the one considered for the purpose of this project is EN 1.0301, which has equivalent grades such as AISI 1008, C10 and DC01. It has very good weldability and is commonly used for extruded, forged, cold-headed, and cold-pressed parts.

Figure 7. Additional view of the stand

Plexiglass, on the other hand, is a transparent thermoplastic often used in sheet form due to its light weight and shatter resistance compared to glass. Chemically, it is the synthetic polymer of methyl methacrylate. It is a robust, resilient, and lightweight material that can withstand more impact than glass. Due to its stronger environmental stability than most other plastics, such as polystyrene and polyethylene, it is also excellent for outdoor applications. Plexiglass is a highly beneficial material because of these advantages, as well as additional features including a long service life, excellent light transmission, and ease of processing. The sheets used for this project are 3 mm thick and cut to specific lengths; the full measurements can be found in the appendix. The total weight to be supported by the steel bars, comprising the plexiglass sheets and the various components, is shown in Table 1 below.

Table 1: Total weight to be supported
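The entries of Table 1 attributable to the plexiglass can be estimated from the sheet areas listed in the appendix; a minimal sketch, assuming a typical PMMA density of about 1.18 g/cm³ (a handbook value assumed here, not taken from the chapter):

    % Estimated mass of one 3 mm plexiglass sheet of area A (assumed density).
    \[
      m_{\text{sheet}} = \rho_{\text{PMMA}} \, t \, A
      \approx 1180~\text{kg/m}^3 \times 0.003~\text{m} \times A
      \approx 3.5~\text{kg/m}^2 \times A
    \]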

iii. Electrical Design

The Raspberry Pi 4 requires an input voltage of 5 V to operate without problems; a lower voltage renders some features unusable. Thanks to its wide variety of ports, it allows the number of devices requiring external power to be reduced to only one, with every other device being plugged directly into it. The camera, with its embedded noise-cancelling microphone, can be plugged into one of the USB 3.0 ports, and the seven-inch touch screen display can be powered by the general-purpose input/output (GPIO) pins on the Raspberry Pi.

There are multiple ways to provide the required voltage of 5 V to the system. Among those, three were chosen depending on their price range and their ability to be used during loadshedding periods in South Africa.

• Power bank

The Romoss™ power bank mentioned previously is a high-capacity power storage device that possesses a number of intelligent safety features, such as over-charge, over-current, power-surge, and short-circuit protection. These features are favourable for the Raspberry Pi 4, as the device is sensitive to unexpected electrical changes. Additionally, its 30000 mAh battery capacity allows it to keep the device powered beyond the two-hour window that is normally the duration of power outages during loadshedding. Finally, it can be used while plugged into a power source (a wall socket, or any supply providing 230 V).

• Uninterruptible Power Supply (UPS)

When the input or main power source of electrical equipment fails, a UPS (Figure 8) supplies emergency power. It differs from an auxiliary or emergency power system or backup generator in that it protects against input power interruptions almost instantly by delivering energy stored in batteries, supercapacitors, or flywheels. Depending on the budget, these can range from a relatively short on-battery run-time of a few minutes to a longer delivery time of two to four hours. It is a continuous power system. A UPS is often used to protect hardware such as computers, data centres, telecommunications equipment, or other electrical equipment against power outages that could result in injuries, fatalities, major business disruption, or data loss.

Figure 8. Mecer 2000VA Line Interactive UPS (Takealot, 2021)

The Mecer 2000VA UPS was chosen for this method due to its AC voltage regulation, adequate frequency range, and an alarm system that immediately alerts the user in case of low battery, overload, or electrical faults in the unit. If a UPS system is chosen as the main power source for the Raspberry Pi, it will need to be used in conjunction with the Raspberry Pi’s proprietary power brick, which converts the 230 V AC output of the UPS to the stable 5 V supply required.

• Power supply system with backup generator (Figure 9)

This method required the design of an in-house power supply system that can provide the required voltage to the Raspberry Pi.

Figure 9. Power supply design

The power supply was designed using the NI Multisim tool and was able to provide a smooth 5 V DC output voltage (Figure 10) thanks to the LM7805 voltage regulator.

Figure 10. DC output voltage

A transformer, the Indel TS25/11, was used to provide the minimum required voltage of 7 V, while also compensating for the 1.4 V drop across the bridge rectifier made of 1N4001 diodes. As for the capacitors, their values were determined from a combination of the required capacitance at the input and output of the LM7805, and the requirement that the capacitor voltage rating be at least 20% higher than the secondary voltage. This required determining the secondary VRMS in order to select a suitable capacitor.

Formula 1

The proper capacitance can now be found, taking I = 1 A and f = 50 Hz (the standard mains frequency in South Africa), as:

Formula 2
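The relations presumably underlying Formulas 1 and 2 are the standard full-wave rectifier sizing equations, sketched below with the stated I = 1 A and f = 50 Hz; here Vdiode denotes the drop across one diode of the bridge and ΔV the permissible ripple voltage (a hedged sketch, not the design notes reproduced verbatim):

    \[
      V_{\text{peak}} = \sqrt{2}\, V_{\text{RMS}} - 2 V_{\text{diode}},
      \qquad
      C = \frac{I}{2 f \, \Delta V}
      \quad \text{with } I = 1~\text{A},\; f = 50~\text{Hz}
    \]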

The practical standard value closest to this capacitance is 500 µF, thus it was selected. So far, the cost is kept relatively low thanks to the components used for the design. However, if this system is to be implemented in South Africa, a backup generator will also need to be bought to provide emergency power generation in the event of power outages. Specifications for generators vary greatly, but their prices are higher than those of the other two alternatives discussed, making this the most expensive option.

iv. Algorithm Design

The algorithms behind the entire system, from the graphical user interface (GUI) to the face detection algorithm itself, were designed and coded in the Python programming language.

• Face detection

Face detection makes use of multiple technologies to work. To build the face recognition system, face detection is first performed, then face embeddings are extracted from each face using deep learning. A face recognition model is trained on those embeddings, and finally the recorded faces can be recognised in both images and video streams using the Open Source Computer Vision Library (OpenCV), as shown in Figure 11. OpenCV is a library of programming functions aimed at real-time computer vision. The library is available on multiple operating platforms and is free to use under the open-source Apache 2 License. Its application areas include, but are not limited to, facial recognition, gesture recognition, human-to-computer interaction, and object detection. To support some of these areas, it also includes a statistical machine learning library containing various algorithms, such as boosting and support vector machines. It was originally written in C++, but provides bindings for Python, Java, and MATLAB.

Figure 11. Overview of OpenCV's face recognition pipeline (Amos, Ludwiczuk & Satyanarayanan, 2016)

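As a concrete illustration of the detection stage on its own, the sketch below uses OpenCV's bundled Haar cascade detector on a live video stream. The use of this particular classifier is an illustrative assumption; the recognition stage described next relies on deep-learning embeddings rather than on this detector.

    # Detection-only sketch using OpenCV's bundled Haar cascade classifier.
    import cv2

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        for (x, y, w, h) in faces:             # draw a box around each detection
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("faces", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to stop
            break
    cap.release()
    cv2.destroyAllWindows()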

To build the face recognition algorithm, deep learning was applied. Deep learning is a sub-field of machine learning concerned with algorithms inspired by neural networks. The process involves two steps: first, face detection, which locates the face but does not identify it; and second, the extraction of 128-d feature vectors, called embeddings, that quantify each face. This is shown in Figure 12. To increase detection accuracy, facial landmarks (key facial structures such as the eyebrows, eyes, and nose) can be computed, making it possible to pre-process and align the face.

Figure 12. How face embedding is computed using a deep learning face recognition model (Schroff, Kalenichenko & Philbin, 2015).

The model responsible for quantifying each face comes from the open-source OpenFace project, a Python and Torch implementation of face recognition with deep learning. This model computes a 128-d embedding that quantifies the face itself, using the data fed into the network together with the triplet loss function. The triplet loss is a loss function for machine learning in which an anchor input is compared to a positive and a negative input. Thus, to train the model with deep learning, each training sample needs to include three inputs: an anchor image, a positive image, and a negative image. The anchor is the current face, the positive image is another image of the same person as the anchor, and the negative image is an image of a different person that does not contain the anchor face. The neural network computes the 128-d embeddings for each face, then tweaks the weights of the network via the triplet loss function so that the embeddings of the anchor and the positive image move closer together, while the embeddings of the anchor and the negative image move farther apart. In this way, the network learns to quantify faces and perform face recognition with high confidence.
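To make this objective concrete, the following is a minimal NumPy sketch of the triplet loss just described; the margin of 0.2 is an illustrative assumption rather than a value taken from this project.

    # Illustrative triplet loss on 128-d embeddings: pull the anchor and the
    # positive together, push the anchor and the negative apart by a margin.
    import numpy as np

    def triplet_loss(anchor, positive, negative, margin=0.2):
        d_pos = np.sum((anchor - positive) ** 2)   # squared distance, same person
        d_neg = np.sum((anchor - negative) ** 2)   # squared distance, different person
        return max(d_pos - d_neg + margin, 0.0)    # zero once the gap exceeds the margin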

The face recognition and machine learning integration were done using the Python programming language. Python was chosen for this project because it is an object-oriented, high-level programming language with dynamic semantics. Additionally, OpenCV integration resources are well documented online, making the code debugging process easier.
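As an illustration of how the 128-d embeddings can be obtained from Python, the sketch below loads a pre-trained OpenFace Torch model through OpenCV's DNN module. The model file name, and the assumption that the face has already been detected, cropped, and aligned, are illustrative rather than taken from the project's code.

    # Sketch: computing a 128-d embedding for a cropped, aligned face image
    # with OpenCV's DNN module and a pre-trained OpenFace Torch model.
    import cv2

    embedder = cv2.dnn.readNetFromTorch("openface_nn4.small2.v1.t7")  # assumed file name

    def face_embedding(face_bgr):
        """Return a 128-d vector quantifying a cropped, aligned face image."""
        blob = cv2.dnn.blobFromImage(
            cv2.resize(face_bgr, (96, 96)),   # OpenFace expects 96x96 inputs
            scalefactor=1.0 / 255, size=(96, 96),
            mean=(0, 0, 0), swapRB=True, crop=False)
        embedder.setInput(blob)
        return embedder.forward().flatten()   # 128-d embedding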

• Graphical User Interface

Due to the nature of this project, a GUI had to be developed to facilitate the entire process the device is attempting to replace. For that purpose, an open-source Python framework named Kivy was used. It is a set of libraries that enables the development of multi-touch application software with a natural user interface. It is distributed under the terms of the MIT License and runs on the majority of operating systems. Kivy enables the creation of individual “screens”, or pages, which can be customised independently of each other and accessed freely from within the application. Each screen is divided into layouts, which are independent sections in which a given set of content can be placed; this content can range from text to images and videos. A total of seven screens were made.

The five main screens are the main screen, the register screen, the manage screen, and the pre- and post-detection screens.
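A minimal sketch of how such screens can be wired together with Kivy's ScreenManager is shown below; the class names and the reduced set of screens are illustrative assumptions rather than the project's actual code.

    # Minimal Kivy ScreenManager sketch mirroring the screens named above
    # (only two of the seven screens are shown, with illustrative names).
    from kivy.app import App
    from kivy.uix.label import Label
    from kivy.uix.screenmanager import Screen, ScreenManager

    class MainScreen(Screen):
        pass

    class RegisterScreen(Screen):
        pass

    class MonitorApp(App):
        def build(self):
            sm = ScreenManager()
            for name, cls in [("main", MainScreen), ("register", RegisterScreen)]:
                screen = cls(name=name)
                screen.add_widget(Label(text=name + " screen"))  # placeholder layout
                sm.add_widget(screen)
            sm.current = "main"   # the application decides which screen is shown
            return sm

    if __name__ == "__main__":
        MonitorApp().run()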

CONCLUSION

The goal of this project was to design a system that would eliminate the need for physical COVID-19 compliance forms as well as manual attendance filing. To do so, a system was designed around a Raspberry Pi 4, chosen for its high performance and rich feature set. The system involves a number of additional components, such as a camera and a screen, to address the two aforementioned problems. The Raspberry Pi was powered by a power storage system during in-house testing. An enclosure was also designed to hold the components involved, made of a combination of plexiglass and steel bars. To perform the attendance-filing portion of this project, the Python programming language was used in conjunction with machine learning, more specifically the subset called deep learning, which extracts certain facial features and trains a model to recognise them efficiently. When reading the live feed from the camera, the algorithm automatically detects the user's face if it is saved in the database, after which a voice-prompt-based interaction with the system is initiated to assist the user in answering the COVID-19 compliance questions. The answers are registered in an Excel sheet, saved locally, along with the time the user entered. These details can be accessed at any time should an urgent need arise, such as an infection outbreak requiring contact tracing.
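As an illustration of the locally saved log described above, the following is a hedged sketch using the openpyxl library; the file name, column layout, and helper function are illustrative assumptions rather than the project's actual implementation.

    # Sketch: appending one visit record to a locally stored Excel workbook.
    from datetime import datetime
    from pathlib import Path
    from openpyxl import Workbook, load_workbook

    LOG_FILE = Path("compliance_log.xlsx")   # assumed file name

    def log_visit(name, answers):
        """Append a timestamped row with the user's compliance answers."""
        if LOG_FILE.exists():
            wb = load_workbook(LOG_FILE)
            ws = wb.active
        else:
            wb = Workbook()
            ws = wb.active
            ws.append(["timestamp", "name", "answers"])   # header row
        ws.append([datetime.now().isoformat(timespec="seconds"),
                   name, "; ".join(answers)])
        wb.save(LOG_FILE)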

RECOMMENDATIONS

The recommendations for this project can be divided into three sections: hardware, software and general.

i. Hardware

Due to logistical reasons, a certain number of critical components could unfortunately not be ordered in time. Among them:

ii. Software

Due to the complexity that they bring, certain functions could not be implemented in time. Among these are:

iii. General recommendations

This section contains all steps that could not be achieved due to lack of time, logistical reasons, or unavailability of the needed resources.

The building of the first enclosure prototype would have allowed the system to be tested in the field, which would have permitted the discovery of any bugs. The testing that was performed was mostly in a controlled environment, so the results cannot be taken as representative of what could happen should the device be placed in a real environment with a large number of uncertainties.

REFERENCES

Amos, B., Ludwiczuk, B., & Satyanarayanan, M. (2016). OpenFace: A general-purpose face recognition library with mobile applications. Retrieved 31 March 2022, from https://cmusatyalab.github.io/openface/

Cherry, J., & Krogstad, P. (2004). SARS: The First Pandemic of the 21st Century. Pediatric Research, 56(1), 1-5. DOI:10.1203/01.pdr.0000129184.87042.fc

European Centre for Disease Prevention and Control. (2021). Retrieved 10 June 2021, from https://www.ecdc.europa.eu/en/covid-19/latestevidence/coronavirsuses

Google Play Store. (2021). Retrieved 10 June 2021, from https://play.google.com/store/apps/details?id=za.gov.health.covidconn

Gupta, R., Bedi, M., Goyal, P., Wadhera, S., & Verma, V. (2020). Analysis of COVID-19 Tracking Tool in India. Digital Government: Research And Practice, 1(4), 1-8. DOI:10.1145/3416088

Johnson, J. (2021). Digital population in South Africa as of January 2021. Retrieved 7 September 2021, from https://www.statista.com/statistics/685134/southafrica-digital-population/

National Health Service. (2021). Main Symptoms of Coronavirus (COVID-19). Retrieved 15 June 2021, from https://www.nhs.uk/conditions/coronaviruscovid-19/symptoms/main-symptoms/

Oliver, N., Letouzé, E., Sterly, H., Delataille, S., De Nadai, M., Lepri, B., et al. (2022). Mobile phone data and COVID-19: Missing an opportunity? Retrieved 31 March 2022, from http://arxiv.org/abs/2003.12347

Schroff, F., Kalenichenko, D., & Philbin, J. (2015). FaceNet: A unified embedding for face recognition and clustering. 2015 IEEE Conference On Computer Vision And Pattern Recognition (CVPR). DOI:10.1109/cvpr.2015.7298682

South African Department of Health. (2021). About COVID-19 (Coronavirus). Retrieved 10 June 2021, from https://sacoronavirus.co.za/informationabout-the-virus-2/

Takealot. (2021). Mecer 2000VA Line Interactive UPS. Retrieved 22 October 2021, from https://www.takealot.com/mecer-2000va-line-interactiveups/PLID34152441

The World Bank. (2021). Population, total - South Africa. Retrieved 10 June 2021, from https://data.worldbank.org/indicator/SP.POP.TOTL?locations=ZA

Worldometer. (2021). Countries Where Coronavirus Has Spread. Retrieved 10 June 2021, from https://www.worldometers.info/coronavirus/countries-wherecoronavirus-has-spread/