The 2021 McKinsey report, The Internet of Things: Catching up to an Accelerating Opportunity, reported on tailwinds (drivers moving IoT forward), headwinds (hindrances holding IoT back), and neutral factors (considerations that seem to have no impact on decisions to implement IoT). As a CWISA and wireless IoT professional, it is important to understand the motivators and demotivators related to IoT implementations. For this reason, we will briefly explain the tailwinds and headwinds here in the preface to the book.
As shown in the image above, the tailwinds are Perceived Value Proposition, Technology Performance, and Connectivity Performance. The headwinds are Cybersecurity, Talent, Interoperability, Change Management, Installation, and Privacy and Confidentiality.
From this we see that organizations recognize the value of IoT, a notable change from the 2015 report, when the value of IoT was less well known and understood. Technology that meets the needs of many IoT solutions is also readily available today and provides the required performance.
Finally, as to tailwinds driving IoT forward, connectivity has been enhanced since the 2015 report with the implementation of 5G, enhancements to Wi-Fi, more hardware supporting additional protocols (such as LoRaWAN, 6LoWPAN, Thread, etc.), and better understanding of how to optimize the performance of these solutions.
These are elements that, according to the McKinsey report, keep IoT projects in "pilot purgatory" within many organizations. Sales engineers and CWISAs should consider the following:
Many organizations are concerned about the security and privacy issues that will be introduced with the implementation of IoT.
Certainly, a poorly implemented IoT solution can introduce significant security issues. However, while any new devices added to a network introduce potential new points of attack, a properly implemented IoT solution can actually increase the overall security and privacy of a network. The CWISA should be able to communicate this to an organization.
The existing operational model of an organization can often be the source of pushback against IoT solutions.
"We don't do it that way." and "The way we're doing it is working, why change?" are common resistance phrases heard in meetings.
However, the organization must have a commitment to move into the future of business if they are to continue to find success. Competitors will be moving toward IoT eventually — if not already — and the organization must evolve to survive.
Can you imagine an organization today saying it has never used electricity to do business? Exactly.
This issue will be addressed more by organizational management, but the CWISA can certainly provide knowledgeable input to help motivate leadership toward the required change.
The issue here is one of scale.
As many organizations investigate IoT, they realize they may need to install:
They often crumble under the seemingly insurmountable odds.
This is where the CWISA helps — by introducing automation in configuration, scripting, and image-based firmware/OS loading.
They can help organizations see that:
This makes deployment far less daunting.
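To make the automation point concrete, here is a minimal sketch of pushing one configuration template to a large device inventory in parallel. The inventory addresses and the /api/config endpoint are hypothetical; real devices expose different management interfaces.

```python
# Minimal sketch: pushing a templated configuration to a fleet of IoT
# gateways in parallel. The device inventory and the /api/config endpoint
# are hypothetical; real devices will differ.
import json
from concurrent.futures import ThreadPoolExecutor
from urllib import request

CONFIG_TEMPLATE = {"ntp_server": "10.0.0.5", "syslog_host": "10.0.0.6"}

def push_config(ip: str) -> str:
    body = json.dumps(CONFIG_TEMPLATE).encode()
    req = request.Request(f"http://{ip}/api/config", data=body,
                          headers={"Content-Type": "application/json"},
                          method="PUT")
    with request.urlopen(req, timeout=10) as resp:
        return f"{ip}: HTTP {resp.status}"

if __name__ == "__main__":
    inventory = [f"10.1.0.{h}" for h in range(1, 251)]  # 250 devices
    with ThreadPoolExecutor(max_workers=20) as pool:
        for result in pool.map(push_config, inventory):
            print(result)
```

The same pattern scales from dozens of devices to thousands; only the inventory changes.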
A major problem organizations face when testing IoT solutions is that one system does not talk to another.
Sensors for one use case do not integrate into the same system as those from another. The result? Lack of interoperability.
This is where integration skills matter. CWISAs must understand how to modify and merge data from multiple systems into a single platform for analysis, control, and decision support.
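As a concrete illustration of this kind of integration work, the short sketch below normalizes telemetry from two hypothetical sensor platforms into a single schema. All field names and units are invented for illustration.

```python
# Minimal sketch: merging telemetry from two hypothetical sensor platforms
# into one common record format for a single analysis pipeline.

def from_vendor_a(msg: dict) -> dict:
    # Vendor A reports Fahrenheit: {"dev": "a-17", "tempF": 71.6, "ts": 1666000000}
    return {"device_id": msg["dev"],
            "temp_c": (msg["tempF"] - 32) * 5 / 9,
            "timestamp": msg["ts"]}

def from_vendor_b(msg: dict) -> dict:
    # Vendor B reports Celsius: {"id": "b-04", "temperature": 22.0, "time": 1666000003}
    return {"device_id": msg["id"],
            "temp_c": msg["temperature"],
            "timestamp": msg["time"]}

readings = [from_vendor_a({"dev": "a-17", "tempF": 71.6, "ts": 1666000000}),
            from_vendor_b({"id": "b-04", "temperature": 22.0, "time": 1666000003})]
print(readings)  # one schema, ready for a single analytics platform
```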
There was a shortage of IoT talent three years ago when CWNP began developing the wireless IoT track — and the shortage continues today.
Organizations hesitate to roll out large-scale IoT solutions when they lack qualified staff who understand how these systems work — and how to work with them.
CWNP hopes to help close this gap — at least in the wireless IoT space, which is quickly becoming the largest part of the IoT world.
On the talent headwind, the research showed that attracting or finding qualified personnel in the industrial and public sectors (e.g., smart cities) is more difficult than in other sectors such as healthcare, smart homes, enterprise/office, and smart vehicles.
In both the industrial and public sectors, an aging workforce adds to the challenge. For example, in the U.S. public sector, of the roughly 85,000 tech specialists, only 3% are under 30. In the Treasury Department, there are more than nine tech specialists over 60 for every one under 30. In Transportation, that ratio climbs to 17:1. The Air Force has the best age balance at 1.3 to 1, while the Army and Navy stand at 3.1 to 1 and 3.8 to 1, respectively.
This aging demographic often presents challenges in adopting cutting-edge technologies. Certainly, anyone can learn IoT, but those nearing retirement are typically more hesitant. Therefore, education and recruitment strategies must directly address this generational barrier.
Understanding the headwinds is critical to overcoming obstacles in IoT implementation and unlocking the value it promises. Communicating that:
When it comes to change management, it's not just about showcasing performance improvements or cost savings. It's about leading organizations to embrace new ways of doing business—a cultural shift, not just a technical one.
This book marks the beginning of your journey into the talent side of IoT.
CWNP’s additional certification materials will help you dive deeper into interoperability and integration, while security—arguably the most important pillar—is treated across the board:
I’ve spent thousands of hours over the past six to seven years researching and analyzing IoT. I own 48 books in print and many more in digital form on IoT, Cyber-Physical Systems, and related topics—and I’ve read at least 90% of every page. Add to that hundreds of white papers, research publications, and industry reports.
Why do I tell you this? Because in all that time, I haven’t found four better books for wireless IoT than the CWNP Study and Reference Guides. I’m proud of this CWISA edition, and I expect exceptional updates to the CWICP, CWIDP, and CWIIP guides in their next iterations.
Am I biased? Without a doubt.
But when you’ve poured yourself into a project like this, it’s hard not to be.
I’ll let you—and the industry—decide.
—Tom Carpenter, October 2022 (Preface)
As an IT professional, you may already know that a career in IT is synonymous with continuous education.
We understand that you have selected this book, in part, for this very reason.
We all strive to better ourselves as network engineers because no two wireless environments or networks are the same.
This reality is the number one reason why CWNP has expanded its wireless certification program: to support professionals whose responsibilities lie in day-to-day operations and solution administration, giving them a deeper understanding of wireless systems and automation.
The Certified Wireless IoT Solutions Administrator (CWISA) is the conductor — the person responsible for ensuring all components coexist and work together.
This is accomplished through understanding wireless fundamentals and having the knowledge of how each solution operates.
In this chapter, we will cover:
The intent is to provide you with a solid foundation, enabling you to:
If you're responsible for a wireless system, this knowledge will equip you to support and maintain it, from lab testing and staging through to production environments.
You'll be better prepared to isolate problems or keep solutions running flawlessly.
We will also explore:
The second objective of this chapter is to introduce you to:
Understanding these entities will help you:
This will give you clarity on the terminology used across the wireless industry.
Wireless technologies are a part of everyday life across much of the world.
Whether we are talking about BLE, Cellular, Wi-Fi, Zigbee, or any other wireless protocol — it all starts with radio signals.
We expect these technologies to just work, everywhere we go.
As a CWISA, it is your mission to keep them working.
Radio signals use electromagnetic waves, which are radiated in a specific direction using antennas and received by remote antennas. When Heinrich Hertz proved the existence of electromagnetic waves in the late 19th century, little did he know the importance his discovery would have today. Many scientists contributed to the invention of wireless communications: Tesla, Popov, Fessenden, and, most famously, Marconi are recognized as pioneers who developed the idea of wireless transmissions. Using amplitude modulation, a technique in which the strength of a radio wave represents a symbol (or information), such as a dash or a dot in Morse code, RF waves were tamed into a useful language. Over the years, the use of radio evolved from the wireless telegraph to full-blown radio broadcasts.

During the World Wars, armies used radio technologies to help with the war effort. However, the problem with a fixed frequency, such as the one commonly used with amplitude modulation, is that a specific, narrow band of RF frequencies is used. The enemy camp could overload that frequency with energy to disrupt communications. In RF terms, this is jamming: it prevents anything from being decoded by saturating the RF environment. Hedy Lamarr and George Antheil devised a way around this issue during the Second World War: frequency-hopping spread spectrum (FHSS). The unpredictable nature of FHSS would defeat attempts at jamming transmissions. Spreading data over wider channels was the initial method used by many radio-guided systems and is still used by some wireless technologies today, especially Bluetooth. Since FHSS was one of the modulations used in the first IEEE 802.11 standard, Lamarr is often called the Mother of Wi-Fi.

In the 25 years since the 802.11 standard was developed, a lot has changed for Wi-Fi. Maximum data rates have gone from 2 megabits per second (Mbps) to multiple gigabits per second (Gbps) on commonly available consumer devices. This has been achieved through more accurate and precise hardware, new modulation rates, more spectrum, and better use of the available spectrum. Many people now rely solely on secure wireless communications for everything. Wi-Fi is not the only wireless network to see these kinds of improvements. Cellular carriers around the world first commercialized data offerings on 2G networks (CDMA or GSM). These evolved to 3G, then LTE (4G), and the collection of standards that 5G uses today. This evolution brought data speeds from 9.6 kilobits per second (Kbps) to over 1 Gbps.
Chapter 5 will get into more detail on radio waves themselves: what a wave is, how to determine the frequency of a wave, and so on. In this section, we address some fundamental concepts at a basic level. The rate, or number of wave cycles per second, is called the frequency. The measuring unit of radio frequency is the hertz (Hz); one hertz is one wave cycle per second. The term wavelength refers to the distance between two identical points on a wave; stated differently, it is measured from one point on the wave to the recurrence of that point (see Figure 1.1). The power of a radio wave is called its amplitude. Phase, unlike frequency, wavelength, and amplitude, is not a characteristic of a single RF wave but a comparison between two RF waves: two waves are said to be in-phase or out-of-phase with each other. These four characteristics of RF are important to know because they are the key to modulation and encoding, and they will be covered in greater detail later in the book.

Modulation is the process of imposing information onto a radio wave. It commonly uses the amplitude, frequency, or phase of the wave to carry the information. Coding is the lexicon used to represent the data. To use an analogy, modulation is the voice of a speaker giving a presentation, and coding is the language in use. The language first occurs in the brain, and then the sounds that represent that language are modulated through the vocal tract.

The RF spectrum is the entire range of frequencies considered to be radio frequencies within the electromagnetic spectrum. In other words, the electromagnetic spectrum ranges from near 0 hertz (Hz) to above 1 zettahertz (ZHz), but the RF spectrum is typically considered to run from near 0 Hz, usually identified as 3 kilohertz (kHz), to 300 gigahertz (GHz), though some extend it to 1 terahertz (THz). Remember, one hertz is a single cycle of the wave per second; one kHz is 1,000 cycles per second. Table 1.1 breaks this down further.

A radio band is a continuous section of the RF spectrum. Some bands are designated for specific uses, such as amateur (HAM) radio and weather radar. Other bands are licensed to specific operators, and the carrier allocates a technology, such as 4G or 5G, for use in that spectrum. Still others are unlicensed and available for everyone to use within the rules. The two best-known unlicensed bands are the 2.4 GHz Industrial, Scientific, and Medical (ISM) band and the 5 GHz Unlicensed National Information Infrastructure (U-NII) band. Different rules apply to each frequency based on regulatory domain; however, the 2.4 GHz and 5 GHz ranges (or at least portions of them) tend to be available in most parts of the world and thus have been adopted by many wireless technologies. More recently, the 6 GHz band has been opened for unlicensed use in many regulatory domains; Wi-Fi is currently the best-known wireless solution operating in 6 GHz. The consistency of available spectrum has made the unlicensed bands a natural, cost-reducing choice for wireless hardware manufacturers. Wireless networks that operate within these frequencies include Bluetooth, LTE-U, LAA, Zigbee, LoRa, and Wi-Fi. Because the 2.4 GHz spectrum is unlicensed, many other devices, such as microwave ovens, cordless phones, and baby monitors, also compete for airtime. Other mobile devices, such as cell phones, use bands licensed to the providers, which is one of the reasons they tend to be less prone to interference.
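Frequency and wavelength are tied together by the speed of light (wavelength equals c divided by frequency), so you can quickly estimate the wavelength of the bands just discussed:

```python
# Wavelength = speed of light / frequency.
# Quick check of the unlicensed bands discussed above.
C = 299_792_458  # speed of light in meters per second

for name, freq_hz in [("2.4 GHz ISM", 2.4e9),
                      ("5 GHz U-NII", 5.0e9),
                      ("6 GHz", 6.0e9)]:
    wavelength_cm = C / freq_hz * 100
    print(f"{name}: ~{wavelength_cm:.1f} cm wavelength")
# 2.4 GHz ISM: ~12.5 cm wavelength
# 5 GHz U-NII: ~6.0 cm wavelength
# 6 GHz: ~5.0 cm wavelength
```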
Unlike BLE or Wi-Fi, mobile (cellular) networks have a much wider coverage area. For this reason, some IoT vendors have chosen to initially offer their IoT products over mobile networks rather than local wireless networks. Additional bands are specified as shared spectrum, which is a form of dynamic licensing. Effectively, use of a shared band, such as 3.5 GHz, allows incumbent users (those who existed before the band was opened) to continue with preference, while a control system grants other users access to specific channels (small ranges of frequency within the band).
Cellular and Wi-Fi are the most common wireless technologies. Wi-Fi is pervasive in most homes, retail outlets, businesses, and so on. The technology and protocols behind Wi-Fi are very fault-tolerant, and many people think it is just magic; after all, "How hard is it to not run wires?" To many people, Wi-Fi has even become a generic term for the Internet, or vice versa. It is common to hear someone say the Internet is down when the Wi-Fi is actually down, or that the Wi-Fi is down when the Internet connection has actually failed. Understanding the components of the ecosystem that provides Internet access is therefore critical for a successful deployment and ongoing operation of the network.

However, Wi-Fi is not the only wireless system, and understanding how the various wireless technologies fit into the Open Systems Interconnection (OSI) model is necessary for system administrators. Many vendor documents reference Layer 2 devices and Layer 3 devices, and it is important to know what is meant by this terminology. As a wireless IoT solutions administrator, you must be skilled at identifying issues with the systems you administer and locating those issues within the OSI model. This ability will allow you to communicate well with other administrators and with vendor support staff. If you have taken the CWNA, or another networking class, this section may be very familiar to you. The OSI model is illustrated in Figure 1.3. Communications travel down the layers on the transmitter side and up the layers on the receiver side. It is most important to understand Layers 1, 2, 3, 4, and 7; Layers 5 and 6 can become quite blurry, and it is often challenging to identify actual protocols that operate at these layers. The layers are:
The Physical layer, sometimes called the PHY layer or simply the PHY, is responsible for providing the mechanical, electrical, functional, and procedural means for establishing physical connections between data-link entities. The connections between all other layers are logical; the only real physical connection that results in true transfer of data bits (1s and 0s) is at Layer 1, the PHY. For example, we say that the Layer 7 HTTP protocol on a client creates a connection with the Layer 7 HTTP protocol on a web server when a user browses a website; in reality, this connection is logical, and the real connections happen at the Physical layer within a segment of the network, which is connected to another segment, and so on, until the destination is reached. Dozens or hundreds of Physical layer connections could exist between two entities that have "a Layer 7 connection." It is amazing to think that my computer, the one I am using to type these words, is connected to a wireless access point (AP) in my office, which is connected to my local network, which is in turn connected to the Internet. Through these connections, possibly both wired and wireless, I can send signals (that is what happens at Layer 1) to a device, which sends signals to another, and so on, until the signals may reach the other side of the globe. To think that a potential electrical/electromagnetic connection path exists between these devices and millions of others is quite amazing.

Layer 1 is responsible for taking the data frames from Layer 2 and transmitting them on the communications medium as binary bits (ones and zeros). This medium may be wired or wireless, and it may use electrical signals or light pulses (both being electromagnetic in nature). Whatever you have chosen to use at Layer 1, the upper layers can communicate across it as long as the hardware and drivers abstract that layer so that it provides the services demanded by the upper-layer protocols. Examples of Physical layer protocols and functions include Ethernet, Wi-Fi, and DSL. You may have noticed that Ethernet is also mentioned as an example of a Data Link layer protocol. This is because Ethernet defines both the MAC (Media Access Control) sublayer functionality within Layer 2 and the PHY for Layer 1. Wi-Fi technologies (802.11) are similar in that both the MAC and the PHY are specified in the standard. The Data Link and Physical layers are therefore often defined in standards together, which is also true for many, if not most, wireless IoT protocols other than Wi-Fi. You could say that Layer 2 acts as an intermediary between Layers 3 through 7 so that you can run IPX/SPX (though hardly anyone uses this protocol today) or TCP/IP across a multitude of network types (network types being understood as different MAC and PHY specifications).

The physical medium for wireless technologies is electromagnetic waves traveling through air or space (or possibly through earth, in some cases). The PHY of a wireless protocol uses some method of modifying the electromagnetic waves so that they represent a bit or series of bits. While wireless devices operate on several layers of the OSI model, the radio transmitter/receiver (transceiver) falls into the Physical layer. Chapters 5 and 6 will go into more depth on how the transmissions happen, such as the encoding and decoding of signals.
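To make the idea of modifying a wave to represent bits concrete, here is a toy sketch of on-off keying, the simplest form of amplitude modulation. The carrier frequency and sample rate are made-up values chosen for readability; real PHYs use far more sophisticated modulation and coding.

```python
# Toy sketch of on-off keying (OOK), the simplest amplitude modulation:
# transmit a carrier for a 1 bit, silence for a 0 bit.
import math

CARRIER_HZ = 10          # toy carrier frequency
SAMPLES_PER_BIT = 20     # samples generated per bit period
SAMPLE_RATE = 100        # samples per second

def modulate(bits):
    samples = []
    for i, bit in enumerate(bits):
        for s in range(SAMPLES_PER_BIT):
            t = (i * SAMPLES_PER_BIT + s) / SAMPLE_RATE
            amplitude = 1.0 if bit else 0.0   # the bit controls the amplitude
            samples.append(amplitude * math.sin(2 * math.pi * CARRIER_HZ * t))
    return samples

waveform = modulate([1, 0, 1, 1])
print(len(waveform), "samples; peak =", round(max(waveform), 2))
```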
For our purposes in this chapter, we need to understand that wireless radios are half-duplex (a single radio can either transmit or receive, but not both concurrently) and need sufficiently clear airwaves to transmit. If two wireless systems are in the same frequency band, say a baby monitor and a Wi-Fi access point, you will need to ensure they are using separate channels or that they can work effectively together. If they are not on separate channels, you can expect occasional issues when both are contending for airtime (attempting to transmit and possibly causing a collision). Key components to remember about the Physical layer are:
The Data Link layer is defined as providing communications between connectionless-mode or connection-mode network entities. This may include the establishment, maintenance, and release of connections for connection-mode network entities. The Data Link layer is also responsible for detecting errors that occur in the Physical layer, and it may correct those errors automatically.

The Data Link layer, or Layer 2, therefore provides services to both Layer 3 and Layer 1. For Layer 1, upon reception, it provides error detection and correction. For Layer 3, upon reception, it provides the packet extracted from the frame payload for processing. For Layer 1, upon transmission, it provides the payload of the PHY layer (known as the MPDU, or MAC Protocol Data Unit). For Layer 3, upon transmission, it provides a Layer 2 frame structure around the IP packet for delivery to a Layer 2 peer using the Physical layer (Layer 1). Additionally, on some networks, Layer 2 provides control and/or management functions that do not require Layer 3 communications. This is true on Wi-Fi networks as well as other wireless IoT protocol networks. For example, a frame may be generated at Layer 2 for communications with an access point or gateway to establish connectivity, determine capabilities, announce future communications, and more, all without ever traversing Layers 3 through 7.

The IEEE has divided the Data Link layer into two sublayers: the Logical Link Control (LLC) sublayer and the Medium Access Control (MAC) sublayer. The LLC sublayer is not actually used by many protocol stacks; for example, TCP/IP over Ethernet commonly uses Ethernet II framing, which bypasses LLC. The various IEEE standards identify the behavior of the MAC sublayer within the Data Link layer as well as the PHY layer. The result of the processing in Layer 2, on transmission, is that packets from Layer 3 become Layer 2 frames ready to be transmitted by the Physical layer, or Layer 1. Remember, this is just the collection of terms that we use; the data is a collection of ones and zeros all the way down through the OSI layers, and each layer simply manipulates or adds to those ones and zeros to perform its service. The services and processes within the Data Link layer are named after the layer and are called data-link entities. Key components to remember about the Data Link layer for wireless technologies are:
The Network layer is defined as providing the functional and procedural means for connectionless-mode or connection-mode transmission among transport entities and, therefore, provides the transport entities with independence from routing and relay considerations. In other words, the Network layer says to the Transport layer, "You just give me the segments you want transferred and tell me where you want them to go. I'll take care of the rest." This is why routers do not usually have to inspect data beyond Layer 3 to route it correctly. For example, an IP router does not care whether it is routing an email message or a voice conversation in most cases. It only needs to know the IP address for which the packet is destined and any relevant Quality of Service (QoS) parameters to move the packet along.

Examples of Network layer protocols and functions include IP, ICMP, and IPsec. The Internet Protocol (IP) is used for the addressing and routing of data packets so that they can reach their destination, whether on the local network or a remote network. The originating machine or device is typically not concerned with the actual location of the destination, with the exception of needing to know an exit point, or default gateway, from its own network. The Internet Control Message Protocol (ICMP) is used for testing TCP/IP communications and for error-message handling within Layer 3. Finally, IP Security (IPsec) is a solution for securing IP communications using authentication and/or encryption for each IP packet. While security protocols such as SSL, TLS, and SSH operate at Layers 4 through 7 of the OSI model, IPsec sits solidly at Layer 3. The benefit is that, since IPsec sits below Layer 4, any protocol running at or above Layer 4 can take advantage of this secure foundation. For this reason, IPsec has grown steadily in popularity since it was first defined in 1995.

The services and processes operating in the Network layer are known as network entities. These network entities depend on the services provided by the Data Link layer. At the Network layer, Transport layer segments become packets, which are then processed by the Data Link layer. For the purposes of this certification, you will need to understand which protocols are commonly used and how to troubleshoot them. The most common Network layer protocols to run over wireless technologies are:
Layer 4, the Transport layer, is defined as providing transparent transfer of data between session entities and relieving them from any concern with the detailed way in which reliable and cost-effective transfer of data is achieved. This simply means that the Transport layer, as its name implies, is the layer where the data is segmented for effective transport in compliance with Quality of Service (QoS) requirements and shared medium access. Examples of Transport layer protocols and functions include TCP and UDP. The Transmission Control Protocol (TCP) is the primary protocol used for the transmission of connection-oriented data in the TCP/IP suite. HTTP, SMTP, FTP, and other important Layer 7 protocols depend on TCP for reliable delivery and receipt of data. The User Datagram Protocol (UDP) is used for connectionless data communications, typically when speed of communication is more important than guaranteed delivery. Because a late voice packet is as useless as a lost one, and retransmission adds nothing, UDP is frequently used for the transfer of voice and video data. TCP and UDP are examples of transport entities at Layer 4; these transport entities are served by the Network layer. At the Transport layer, the data is broken into segments if necessary. If the data fits in one segment, it becomes a single segment; otherwise, it is divided into multiple segments for transmission. For the purposes of this certification, you will need to understand which protocols are commonly used and how to troubleshoot them. The most common Transport layer protocols to run over wireless technologies are:
The Session layer is defined in sub-clause 7.3 of the OSI Reference Model as providing the means necessary for cooperating presentation entities to organize and synchronize their dialog and to manage their data exchange. This is accomplished by establishing a connection between two communicating presentation entities. The result is a set of simple mechanisms for orderly data exchange and session termination. A session includes the agreement to communicate and the rules by which the communications will transpire. Sessions are created, communications occur, and sessions are destroyed or ended. Layer 5 is responsible for establishing the session, managing the dialog between the endpoints, and properly closing the session.

Examples of Session layer protocols and functions include iSCSI, RPC, and NFS. iSCSI is a protocol that provides access to SCSI devices on remote computers or servers by allowing SCSI commands to be sent to the remote device. The Remote Procedure Call (RPC) protocol allows subroutines to be executed on remote computers; a programmer can develop an application that calls the subroutine in the same way as a local subroutine. RPC abstracts the network away so that the application can execute the subroutine without knowing that it runs on a remote computer. The Network File System (NFS) protocol provides access to files on remote computers as if they were on the local computer. NFS actually functions using an implementation of RPC known as Open Network Computing RPC (ONC RPC), which was developed by Sun Microsystems for use with NFS; ONC RPC has since been used by other systems as well. Remember that these protocols are provided only as examples of the protocols available at Layer 5 (as are the protocols mentioned for Layers 6 and 7). By learning the functionality of protocols that operate at each layer, you can better understand the intention of each layer.

The services and processes running in Layer 5 are known as session entities; RPC and NFS would therefore be session entities, served by the Transport layer. For the purposes of this certification, you will need to understand which protocols are commonly used and how to troubleshoot them. The most common Session layer protocols to run over wireless technologies are:
The Presentation layer is defined in sub-clause 7.2 of the OSI Reference Model as the sixth layer of the OSI model; it provides services to the Application layer above it and relies on the services of the Session layer below it. The Presentation layer, or Layer 6, provides for the representation of the information communicated or referenced by application entities. The Presentation layer is not used in all network communications, and it, like the Application and Session layers, maps onto the single Application layer of the TCP/IP model.

The Presentation layer provides for syntax management and conversion, as well as encryption services. Syntax management refers to the process of ensuring that the sending and receiving hosts communicate with a shared syntax or language. Once you realize this, you will see why encryption is often handled at this layer: encryption is really a modification of the data in a way that must be reversed on the receiving end, so both the sender and receiver must understand the encryption algorithm in order to provide the proper data to the program that is sending or receiving on the network.

Don't be alarmed to discover that the TCP/IP model has its own Application layer that differs from the OSI model's Application layer. The TCP/IP protocol suite existed before the OSI model was released; for this reason, we relate the TCP/IP suite to the OSI model, but we cannot say that it complies with the model directly. It is also useful to keep in mind that TCP/IP is an implemented model, while the OSI model is only a "reference" model.

Examples of Presentation layer protocols and functions include any number of data representation and encryption protocols. For example, if you choose to use HTTPS instead of HTTP, you are indicating that you want to use Secure Sockets Layer (SSL), or its successor, Transport Layer Security (TLS), for encryption. SSL, the Netscape solution, and TLS, the IETF solution, both operate at Layer 6 of the OSI model. Ultimately, Layer 6 is responsible, at least in part, for three major processes:
Data representation is the process of ensuring that data is presented to Layer 7 in a useful way and passed to Layer 5 in a way that can be processed by the lower layers. Data security usually includes authentication, authorization, and encryption. Authentication is used to verify the identity of the sender and receiver. With solid authentication, we gain a benefit known as non-repudiation, which simply means that the sender cannot deny having sent the data; this is often used for auditing and incident-handling purposes. Authorization ensures that only valid users can access the data, and encryption ensures the privacy and integrity of the data as it is transferred. The processes running at Layer 6 are known as presentation entities in the OSI model documentation. Therefore, an application entity is said to depend on the services of a presentation entity, and the presentation entity is said to serve the application entity.
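As one concrete example of encryption services in action, the sketch below uses Python's standard ssl module to wrap a TCP socket in TLS before speaking HTTP. The host name is a placeholder.

```python
# Minimal sketch: wrapping a TCP socket in TLS (the successor to SSL)
# before exchanging application data. The host is an arbitrary example.
import socket
import ssl

HOST = "example.com"  # placeholder host for illustration

context = ssl.create_default_context()  # sensible defaults, cert validation
with socket.create_connection((HOST, 443), timeout=10) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
        print("Negotiated:", tls_sock.version())  # e.g., TLSv1.3
        request = f"HEAD / HTTP/1.1\r\nHost: {HOST}\r\nConnection: close\r\n\r\n"
        tls_sock.sendall(request.encode())
        print(tls_sock.recv(200).decode(errors="replace"))
```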
The seven layers of the OSI model that we are discussing are defined in clause 7 of ISO/IEC 7498-1. The Application layer is defined in sub-clause 7.1 as the highest layer in the reference model and as the sole means of access to the OSIE (Open Systems Interconnection Environment). The Application layer provides access to the other OSI layers for applications, and to applications for the other OSI layers. Do not confuse the Application layer with the general word "application," which is used to reference programs like Microsoft Excel, Corel WordPerfect, and so on. The Application layer is the OSI layer that these applications communicate with when they need to send or receive data across the network. You could say that the Application layer exposes the higher-level protocols that an application needs to talk to. For example, Microsoft Outlook may need to talk to the SMTP protocol in order to transfer email messages.

Examples of Application layer protocols and functions include HTTP, FTP, and SMTP. The Hypertext Transfer Protocol (HTTP) is used to transfer HTML, ASP, PHP, and other types of documents from one machine to another; it is the most heavily used Application layer protocol on the Internet and, possibly, in the world. The File Transfer Protocol (FTP) is used to transfer binary and ASCII files between a server and a client; both HTTP and FTP can transfer any file type. The Simple Mail Transfer Protocol (SMTP) is used to move email messages from one server to another and usually works in conjunction with other protocols for mail storage.

Application layer processes fall into two general categories: user applications and system applications. Email (SMTP), file transfer (FTP), and web browsing (HTTP) fall into the user application category, as they provide direct results to applications used by users, such as Outlook (email), WS_FTP (file transfer), and Firefox (web browsing). Notice that the programs used by the user actually take advantage of the application services in the Application layer, or Layer 7. For example, Outlook takes advantage of SMTP; Outlook does not reside in Layer 7, but SMTP does. As examples of system applications, consider DHCP and DNS. The Dynamic Host Configuration Protocol (DHCP) provides dynamic TCP/IP configuration, and the Domain Name System (DNS) protocol provides name-to-IP-address resolution. Both are considered system-level applications because they are not usually directly accessed by the user (though this is open for debate, since administrators are users too, and they frequently use command-line tools or programs to access these services directly).

The processes operating in the Application layer are known as application entities. An application entity is defined in the standard as an active element embodying a set of capabilities which is pertinent to OSI and which is defined for the Application layer. Application entities are the services that run in Layer 7 and communicate with the lower layers while exposing entry points into the OSI model for applications running on the local computing device. SMTP is an application entity, as are HTTP and other Layer 7 protocols. For the purposes of this certification, you will need to understand which protocols are commonly used and how to troubleshoot them. The most common Application layer protocols to run over wireless technologies are:
In summary, wireless networks are related to the OSI Model in three ways:
Now that we have looked at the OSI Model, let's take a look at several of the components involved in a wireless solution in a little more detail and where they reside within this model. Our areas of focus will be as follows:
Physical connectivity (Layer 1)
LAN Networking Requirements
Hardware in use (Layer 1)
Implementing Wireless Solutions
All wireless solutions need some wires at one point or another. Historically, cabling has evolved more slowly than most other telecommunications technologies, but in the last few years, with the need for higher transmission speeds and the evolution of Power over Ethernet, it has become important to learn the capacities of today's cabling. Please refer to Table 1.1 to see the capabilities of each cable category.
Ethernet Standard | Speed | Maximum Length | Minimum Cable Category
---|---|---|---
10BASE-T | 10 Mbps | 100 meters | CAT-3
100BASE-TX | 100 Mbps | 100 meters | CAT-5 or higher
1000BASE-T | 1 Gbps | 100 meters | CAT-5 or higher
2.5GBASE-T | 2.5 Gbps | 100 meters | CAT-5e
5GBASE-T | 5 Gbps | 100 meters | CAT-6
10GBASE-T | 10 Gbps | 100 meters | CAT-6A
25GBASE-T | 25 Gbps | 30 meters | CAT-8
40GBASE-T | 40 Gbps | 30 meters | CAT-8
Many wireless devices are installed in ceilings or under raised floors. These spaces are referred to in building codes as plenums. It is usually forbidden to power devices installed in plenums through a standard electrical outlet, which is a key reason Power over Ethernet (PoE) was invented. Although PoE was initially based on vendor-specific implementations, the Institute of Electrical and Electronics Engineers (IEEE) has since ratified the following standards:
Each wireless standard has its own PHY rates, and increasingly, many of these are starting to look similar in nature. Within 802.11ax, many concepts were carried over from cellular technologies (OFDMA, Resource Units, etc.) to make Wi-Fi more efficient. We go into more detail about PHY rates and support later in this book; however, some key things to know are that data rates are:
As mentioned above, multiple electronic components and antennas are required for wireless communications. These typically fall into one of four categories: wireless antennas, wireless base stations, wireless clients, and wireless control systems.
We will cover antennas in much more depth in Chapters 4 and 5. Antennas are the physical devices that take the signal from the radio and transmit it over the air. APs use either internal or external antennas. The antenna has no manageable configuration of its own: it is a device made to meet the criteria required to emit or receive energy in a specific band. Antennas have many characteristics, but for the purposes of this chapter, you should remember that an antenna is a passive element: it can receive and transmit RF energy, and it may shape the way the RF energy is emitted due to its physical properties, but it cannot be configured beyond that.
The wireless base station is transmitting infrastructure that connects wireless clients to the larger network, whether via a repeater, a mesh, or a wired connection. In Wi-Fi, these are referred to as access points. In cellular, you may hear terms like nanocell and picocell. These and many others refer to different types of wireless base stations.

It is a good idea to understand the types of wireless infrastructure link technologies you will hear about as well. Wireless links can form point-to-point, mesh, or point-to-multipoint networks. In many cases, a single cell is one wireless base station connected to the network that services wireless clients. A point-to-point or point-to-multipoint network creates a network of base stations to either extend a link wirelessly (for example, between two buildings) or share some form of resource. A mesh network is the expanded version of a point-to-multipoint network: it extends wireless services to clients through a point-to-multipoint network that connects them to the larger network. Typically, this type of network consists of a root base station and one or more non-root, or mesh, wireless base stations. The clients connect to the mesh base station, and that base station backhauls traffic wirelessly to the root. This type of network is becoming popular in homes where the coverage of a single AP is not enough, but it is also heavily used in the enterprise, in industry, and in other installations.

Wireless mesh topologies: this term refers to the layout of a network. Common layouts, or architectures, used in wireless technologies are mesh networks and star networks. Full and partial mesh: common in Zigbee and 802.11 deployments, a mesh topology uses the notion of a web of devices that are all linked together. In the last few years, the increased use of BLE IoT devices has also brought forward the need for a BLE mesh topology, which is supported in standards beyond Bluetooth 4.0. Star: common in infrastructure Wi-Fi and BLE, the star topology's center, often called a "smart hub," allows devices to be connected and managed through a single point.

Wireless technologies can also be classified by their coverage area, from largest to smallest. A wireless wide area network (WWAN) is very large; think of it as the coverage offered by a mobile phone provider, extending from a city to a region or state. Examples of such technologies are LTE, GSM, and WiMAX. The mobile phone industry is the primary operator of such networks, which use licensed frequencies. Spectrum allocation is managed by local RF regulatory agencies (for example, the FCC, ETSI, or CRTC). The capabilities of these networks depend on the area and the technologies in use.
Wireless clients are devices with radios that connect to the wireless base stations. Some use Wi-Fi connections based on the 802.11 standard, and others are based on other technologies such as Bluetooth, BLE, Zigbee, LoRa, or Wi-Fi Direct. Wireless clients will be covered throughout this book, and they include any device that is not serving other devices: laptops, desktops, robots, wearables, or IoT devices. Regardless of the protocol they use to communicate, they share a few characteristics. Most use proprietary drivers and algorithms to dictate their behavior, and due to the cost of constant recalibration, most of these devices do not use calibrated wireless adapters. Though you may think of client devices as dependent on the wireless infrastructure, in reality the infrastructure must be designed around the clients. Should you be interested in learning more about wireless adapters in use for Wi-Fi purposes, you may take a look at the compilation made by Mike Albano at clients.mikealbano.com.

In the health vertical, BLE allows for wireless patient monitoring, which enhances patient mobility. It can also track assets, which are a huge expense in hospitals. Networks of BLE beacons (small, battery-operated transmitters) can provide useful information on traffic patterns in warehouses and stores. BLE is also behind keyless entry and engine ignition in cars. It can power lights on or off, a classic IoT scenario that can both benefit the environment and save businesses money. It can lock the door to your house. The possibilities for IoT devices are endless. Should your network support, or have to support, any such technologies, you should be aware of the additional dependence this places on the wireless network. Once the life of a patient depends on a medical device being located on time, it becomes very important to keep your network up to date, secure, and in the best shape possible. This is what brings us to the topic of technology watches.
In enterprise Wi-Fi networks, wireless controllers are common. These devices control and manage the wireless base stations, keeping configurations and code up to date; they also control inter-base-station communications. Not all wireless networks use separate control systems, as this function can run on the wireless base station in many cases. The two most common models for wireless networks are centralized and distributed control. A centralized wireless control system is a network in which a piece of equipment called a controller (or a group of controllers) manages the configuration of the entire network. Client connections are established as tunnels to the controller through the AP. An AP that gets its configuration from the WLC is called a thin AP. A distributed wireless control system features autonomous wireless base stations. In Wi-Fi terms, this is where you will find your "thick" AP, with full configuration and decision power, authentication, switching, and so on. In larger distributed environments, a wireless network management system (WNMS) is used to administer the network as a whole from a single pane of glass. Many vendors offer hybrids of the centralized topology to accommodate deployments with many remote sites that have Internet access yet need some form of data tunneling and disaster recovery in case the connection to the central WLC fails. Such wireless control systems, though mostly cloud-based, are making their way into wireless IoT networks in general, regardless of the wireless protocol in use. For example, Monnit devices can be controlled, managed, and upgraded through the iMonnit cloud service. These services do not act as traditional Wi-Fi controllers, but more like Wi-Fi cloud-managed services.
Network services from the LAN are essential for functioning wireless clients. A typical network connection involves a successful association to the wireless network, after which the client needs the correct network configuration to do anything on the network. For devices to be able to access the network, it is very common for deployments to use Dynamic Host Configuration Protocol (DHCP) servers to automatically grant IP addresses to clients. When IPv6 is used, autoconfiguration is often used for local IPv6 addresses. If the devices must communicate with the global IPv6 network, a global IPv6 address will be required as well; a DHCPv6 service may be used in either case. You can use DHCP services to provision domain name servers (DNS) to the end devices, which are used to resolve names, such as the ones you may type in a browser URL, to IP addresses. Last, but not least, you may require some form of Network Time Protocol (NTP) server. These services are common across almost all IP networks. As a CWISA, you are less likely to be involved in direct requirements engineering and network design, but you are very involved in network deployment, maintenance, and upgrades. Therefore, understanding the network services required by the IoT solution is essential.
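A few lines of Python can sanity-check these services from a client's point of view. The host names below are placeholders; substitute your own DNS name and NTP server.

```python
# Minimal sketch: sanity-checking core LAN services (DNS and NTP reachability)
# from a client. Hostnames are placeholders.
import socket

def check_dns(name="www.example.com"):
    try:
        return f"DNS OK: {name} -> {socket.gethostbyname(name)}"
    except OSError as err:
        return f"DNS FAILED for {name}: {err}"

def check_ntp(server="pool.ntp.org"):
    # Send a minimal SNTP request (48 bytes, version 3, client mode)
    # and wait for any reply.
    packet = b"\x1b" + 47 * b"\0"
    try:
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.settimeout(5)
            s.sendto(packet, (server, 123))
            s.recvfrom(48)
            return f"NTP OK: {server} responded"
    except OSError as err:
        return f"NTP FAILED for {server}: {err}"

print(check_dns())
print(check_ntp())
```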
Most wireless systems make use of some form of authentication. Without going into too many details, there are three ways in which wireless users can be authenticated. The first is open authentication, which allows anyone to connect. The second is a pre-shared key that both systems have configured. The third is EAP-based authentication. Most enterprise-grade Wi-Fi networks use EAP authentication based on a SIM, certificates, and/or usernames and passwords. Enterprise-grade Wi-Fi authentication often relies on the RADIUS standard, a non-proprietary standard available both in solutions sold by manufacturers and as open-source code. These systems are often referred to as AAA servers; AAA stands for authentication, authorization, and accounting. In addition to authentication, some wireless networks may make use of ACLs, DMZs, and VRFs. An ACL, or access control list, is a security measure that defines what types of traffic are allowed or denied. Network traffic is matched, sequentially, against a list of protocols, IP addresses, MAC addresses, ports, and other such criteria. When a match is found, the sequential process stops, and the traffic is either allowed or denied. A DMZ, or demilitarized zone, is a part of the network that is not protected like the internal network. A VRF is a virtual instance of a network, often used to isolate traffic that the network administrator wants to segregate, such as guest traffic. It is often preferable to use VRFs with a DMZ for users you wish to grant access to the Internet without allowing them to use your internal services.
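The sequential, first-match behavior of an ACL is easy to demonstrate in code. The rule set below is invented for illustration:

```python
# Minimal sketch of ACL evaluation: rules are checked in order, the first
# match wins, and an implicit deny applies if nothing matches.
RULES = [
    {"proto": "udp", "port": 123,  "action": "permit"},  # allow NTP
    {"proto": "tcp", "port": 23,   "action": "deny"},    # block Telnet
    {"proto": "tcp", "port": None, "action": "permit"},  # allow other TCP
]

def evaluate(proto: str, port: int) -> str:
    for rule in RULES:  # sequential evaluation
        if rule["proto"] == proto and rule["port"] in (None, port):
            return rule["action"]  # first match stops processing
    return "deny"  # implicit deny at the end of the list

print(evaluate("tcp", 23))   # deny
print(evaluate("tcp", 443))  # permit
print(evaluate("udp", 53))   # deny (implicit)
```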
Building a lab and testing the network is the best way to avoid problems in production environments. Depending on your use cases and the business use of the network, your lab requirements may vary. If your business can afford devices identical to those found on your infrastructure, installing them in the lab and running the same software version with a basic configuration fitting your installations is a time-saver. Not only will it allow you to quickly roll out spares that need only a quick scrub-configuration-reload in case of an outage, but it will also allow you to gain experience with your infrastructure without impacting clients, or to replicate problems in a non-production environment.

Should it be impossible for you to set up a lab with real gear, several virtual options exist. The most obvious, a virtual machine platform such as VMware, allows you to spin up an instance of just about any flavor of Windows or Linux, or of prepackaged VM appliances offered by manufacturers such as Cisco and Aruba. Systems such as RADIUS servers, DNS, and DHCP can be tested in this way. As for networking gear, GNS3 and EVE-NG are open-source options that allow you to create a lab environment simulating various devices from a plethora of manufacturers.

You should be aware that there are differences between network simulators and emulators. A simulator, such as EVE-NG, allows you to create a virtual topology composed of several devices that you will have to configure. These devices behave like real devices, but you will not be able to test traffic patterns, load, or other such traffic conditions. If you would like to create a proof of concept of new infrastructure or a modification of your network deployment, the first place to start is either a real lab scenario or a network simulator scenario. A network emulator, by contrast, allows you to reproduce specific conditions on your infrastructure in the lab. Emulating specific types of traffic or network conditions, such as a broadcast storm or a DDoS attack, becomes possible with an emulator, as do other traffic-generating scenarios such as those detailed in Common Vulnerabilities and Exposures (CVE) alerts and their related papers. Spirent, Ixia, and NetSim by Tetcos are examples of network emulators you may want to have in your lab. Any lab test should begin by creating a plan, which identifies:
Depending on the size of the organization and the scale of the network, many large-scale environments utilize a staging environment between the lab and production deployments. The purpose of a staging environment is to move a small segment of production traffic to it and monitor closely for problems before the new configuration goes into production. In large enterprises, service providers, and carrier networks, staging environments are required; smaller entities with on-site support staff sometimes roll things straight into production. Nowadays, a large percentage of telecommunications outages are attributable to human error, and many strategies are used to avoid such risk; among them, maintenance windows, testing procedures, and proper implementation practices are the most typical. Therefore, you should always carefully plan any change, patch, or code update before applying it to your production network; the lab-to-staging approach accomplishes this. It will make your maintenance windows more productive and reduce the risk of causing issues that would cost your business and impact productivity.
A systematic approach is recommended for implementing and supporting your wireless solutions. If you are a newcomer to the world of professional networking, you may not yet be aware of the impact "your layer" can have on every other aspect of running a business. Nowadays, with the number of technologies that are part of our lives, very few businesses can run without network services for very long. The phones, asset tracking, email, point-of-sale (POS) terminals, and everything that lives in the cloud depend on the links networks provide. Adopting a systematic methodology to carry out adds, moves, and changes, roll out updates, and perform system maintenance is very important. Documentation along the way is an essential part of the systematic approach. The first step is to ensure you have proper documentation for all your infrastructure. This documentation should include:
Getting in the habit of following security advisories and bugs is a practice that can save you valuable time and energy, especially in network administration roles. You should aim to spend some time every week on a technology watch that includes sites such as the following:
Three types of organizations guide the wireless industry: regulatory bodies, standards development organizations, and compatibility/certification groups. The Federal Communications Commission (FCC) and the European Telecommunications Standards Institute (ETSI) are examples of bodies that shape regulations in North America and Europe, respectively. The Institute of Electrical and Electronics Engineers (IEEE) and the Internet Engineering Task Force (IETF) are examples of standards development organizations. The Wi-Fi Alliance is a compatibility testing and certification group. Other wireless technologies have similar groups, and we will highlight these below. It is essential to understand what these organizations do, and it is also essential to understand how they work together. For example, consider the interdependency between the FCC and the IEEE, or the relationship between the Wi-Fi Alliance and the IEEE. The FCC sets the legal boundaries within which the IEEE standards may function, and the Wi-Fi Alliance tests equipment based on portions of IEEE standards and certifies it as interoperable. These three organizations provide regulation, standardization, and compatibility services for wireless Local Area Network (WLAN) technologies within North America. The benefits of these organizations to the consumer are clear and are depicted in Figure 1.5. When regulations are in place, such as power output limits, it is possible to implement local wireless networks with less interference from nearby networks. When standards are in place, like the IEEE 802.11 standard, it is possible to purchase devices that are compatible even though they come from different vendors. When certifications are in place to validate interoperability, consumers may buy products with confidence that devices sharing the same certifications should be interoperable, and fewer man-hours will be required for compatibility testing.
A regulatory domain is defined as a geographically bounded area that is controlled by a set of laws or policies. Within the United States, governing bodies exist at the city, county, state, and country level, forming a hierarchical regulatory domain system. In other countries, governments exist with similar hierarchies or with a single level of authority at the top, covering the country or a group of countries. In many cases, these governments have assigned the responsibility of managing communications to a specific organization that is responsible to the government. In the United States, this managing organization is the Federal Communications Commission. In the UK, it is the Office of Communications. In Australia, it is the Australian Communications and Media Authority. The following sections outline four such governing bodies and the roles they play in the wireless networking industry of their respective regulatory domains.
The Federal Communications Commission (FCC) was born out of the Communications Act of 1934. Charged with the regulation of interstate and international communications by radio, television, cable, satellite, and wire, the FCC has a large body of responsibility. The regulatory domain covered by the FCC includes all 50 of the United States as well as the District of Columbia and other U.S. possessions, like the Virgin Islands and Guam. In Canada, the Industry Canada (IC) organization certifies RF products for use. In the European Union, the CE mark, defined by the European Commission, indicates conformity with EU requirements and allows RF products to be sold there.
The Office of Communications (OfCom) is charged with ensuring optimal use of the electromagnetic spectrum for radio communications within the UK. OfCom provides documentation of, and forums for discussion of, valid frequency usage in radio communications. The regulations put forth by OfCom are based on standards developed by the European Telecommunications Standards Institute (ETSI). These two organizations work together in much the same way the FCC and IEEE do in the United States.
In Japan, the Ministry of Internal Affairs and Communications (MIC) is the governing body over radio communications. However, the MIC has appointed the Association of Radio Industries and Businesses (ARIB) to manage the efficient utilization of the radio spectrum. In practice, ARIB is responsible for regulating which frequencies can be used and factors such as power output levels.
The Australian Communications and Media Authority (ACMA) replaced the Australian Communications Authority in July of 2005 as the governing body over the regulatory domain of Australia for radio communications management. Like the FCC in the United States, the ACMA is charged with managing the electromagnetic spectrum to minimize interference. This is done by limiting output power in license-free frequencies, and by requiring licenses in some frequencies.
The International Telecommunications Union - Radiocommunication (ITU-R) is a Sector of the International Telecommunications Union (ITU). The ITU, after an extended history, was designated as a United Nations specialized agency on October 15, 1947. The constitution of the ITU has stated its purposes as:
The ITU-R, specifically, maintains a database of the frequency assignments worldwide and helps coordinate electromagnetic spectrum management through five administrative regions. These five regions are:
Each region has one or more local regulatory groups such as the FCC in Region A for the United States or the ACMA in Region E for Australia. Ultimately, the ITU-R provides the service of maintaining the Master International Frequency Register of 1,265,000 terrestrial frequency assignments.
In the end, regulatory agencies typically control wireless use in unlicensed spaces in the following important areas:
The Institute of Electrical and Electronics Engineers (IEEE) states its mission as being the world's leading professional association for the advancement of technology. It provides standards and technical guidance for more than just the wireless industry. In this section, we focus on the specific standards developed by the IEEE that impact and benefit wireless networking. These include wireless-specific standards as well as standards first implemented in the wired networking domain that are now utilized in the wireless networking domain. First, we provide a more detailed overview of the IEEE organization.
The IEEE is a global professional society with more than 423,000 members in 160 countries. The constitution of the IEEE defines the purpose of the organization as scientific and educational, directed toward the advancement of the theory and practice of electrical, electronics, communications and computer engineering, as well as computer science, the allied branches of engineering, and the related arts and sciences. Their mission is stated as promoting the engineering process of creating, developing, integrating, sharing, and applying knowledge about electro and information technologies and sciences for the benefit of humanity and the profession. Ultimately, the IEEE creates many standards for many niche disciplines within electronics and communications. In this book, the focus is on computer data networks and specifically wireless computer data networks. In this area, the IEEE has given us the 802 project and, specific to wireless, the IEEE 802.11 standard.
The Internet Engineering Task Force (IETF) develops standards for the Internet and Internet-related technologies. Their most famous standards are the core Internet protocols, including IPv4, IPv6, TCP, UDP, ICMP, and more. IETF standards are developed through a Request for Comments (RFC) process. The input to the process is more flexible than that of the IEEE, but the results have been exceptional over the years.
Important IETF standards related to IoT networks include:
According to 3GPP:
From Bluetooth SIG:
From CBRS Alliance:
From CTIA:
From GSMA:
From Wi-Fi Alliance:
From WBA:
According to the WiMAX Forum (http://wimaxforum.org/), they are:
WiMAX Forum Strategic Objectives
According to the CSA (http://www.csa-iot.org/), they are:
The LoRa Alliance is an open, nonprofit association that has become one of the largest and fastest-growing alliances in the technology sector since its inception in 2015. Its members closely collaborate and share experiences to promote and drive the success of the LoRaWAN standard as the leading open global standard for secure, carrier-grade IoT LPWAN connectivity. With the technical flexibility to address a broad range of IoT applications, both static and mobile, and a certification program to guarantee interoperability, LoRaWAN has already been deployed by major mobile network operators globally, with continuing wide expansion into 2022 and beyond.
The organization is focused on the enhancements and growth of the LoRa and LoRaWAN standards and protocols.
This chapter provided a broad overview of wireless networks and an introduction to the industry organizations that are shaping the future of wireless technologies. You were introduced to many topics that will be expanded upon throughout the remainder of this book. The next chapter will go deeper into the various wireless network types, such as WBAN, WLAN, WMAN, and more. It will also address various vertical markets and their use of wireless solutions.
Objectives Covered:
The Internet of Things (IoT) was added as a term to the Merriam-Webster dictionary back in 2017. To earn a dictionary entry, a term must have been used for quite some time, gained a widespread presence, and been written about in different forms from different sources. That IoT crossed this threshold shows the term is expected to be in use for a long time, and herein lies the starting point of the argument for defining what the Internet of Things means.
This chapter will define IoT and explain the many solutions available for its implementation. The solutions range from theoretical models to actual protocol implementations. While IoT has been addressed many times already throughout this book, this chapter will bring all of the various concepts together.
The IoT has gained popularity ever since the term was first coined (more about that in the following section), to the extent that it became one of the biggest buzzwords without necessarily being used with a consistent meaning. In addition, many sources discuss IoT from different perspectives, including its hardware, software, security, communication, use-cases, architecture, business outcomes, and forecasts, each showing IoT in a different light.
Merriam-Webster itself takes a practical approach, defining IoT as "something that's promising to make all kinds of tedious tasks go down more smoothly with information being sent to and received from household objects and devices — say, your bedroom fan or your toaster oven — using the Internet."
Dissecting the term, we can see that it has to do with connectivity and connected things, or connected objects (CO). But how does that make it different from the Internet itself? Is a personal computer connected to the Internet considered an IoT CO? What about mobile phones? IoT has certainly been around for a while and has grown fast enough to create this blur, which can make it difficult to define.
As put by B. Russell and D. Van Duren in Practical Internet of Things Security, the Internet of Things is far more than just mobile or computer connectivity. As this is a technical book, we can look at how the Institute of Electrical and Electronics Engineers (IEEE) and the International Telecommunications Union (ITU), both of which set technical communication standards, define IoT:
ITU Definition:
"A global infrastructure for the information society, enabling advanced services by interconnecting (physical and virtual) things based on existing and evolving, interoperable information and communication technologies."
IEEE Small Environment Definition:
"An IoT is a network that connects uniquely identifiable 'things' to the Internet. The 'things' have sensing/actuation and potential programmability capabilities. Through the exploitation of the unique identification and sensing, information about the 'thing' can be collected and the state of the 'thing' can be changed from anywhere, anytime, by anything."
IEEE Large Environment Definition:
"Internet of Things envisions a self-configuring, adaptive, complex network that interconnects things to the Internet through the use of standard communication protocols. The interconnected things have physical or virtual representation in the digital world, sensing/actuation capability, a programmability feature, and are uniquely identifiable. The representation contains information including the thing's identity, status, location, or any other business, social or privately relevant information. The things offer services, with or without human intervention, through the exploitation of unique identification, data capture and communication, and actuation capability. The service is exploited through the use of intelligent interfaces and is made available anywhere, anytime, and for anything taking security into consideration."
The ITU refines the definition of IoT further in its ITU Internet Reports 2005: The Internet of Things, describing it as a new dimension added to the world of ICT that evolves "anytime, any place connectivity for anyone" into "connectivity for anything." Connections will multiply and create an entirely new dynamic network of networks — an Internet of Things.
These definitions overlap and complement each other by including anything physical or logical that can be connected to other things over a diversely connected world. However, they miss a key factor: in some cases, IoT does not connect to the Internet at all.
Here's a case in point: you have one hundred sensors connected to a gateway using the 802.15.4 protocol and MQTT for sensor reading transmissions. A service subscribes to the sensor readings from all sensors and analyzes the data in real time, generating alerts and taking actions as required. The service then stores the data in long-term storage using a MongoDB database. Analysts can look at the historical data and generate reports for decision analysis. The sensors and the gateway are on the local network. The MQTT server is on the LAN, and so is the MongoDB database. The business applications run on a web server also located on the LAN. None of the services, applications, or devices connect to or require a connection to the Internet, and yet there is no question that the described solution is an IoT wireless sensor network providing IoT data for analysis and processing.
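To make the scenario concrete, the following is a minimal sketch of such a LAN-only collection service in Python. It assumes a local MQTT broker reachable at mqtt.local and a MongoDB instance at mongo.local; the hostnames, the sensors/+/reading topic, the JSON payload fields, and the alert threshold are all hypothetical illustrations rather than details from the scenario above.

    # Minimal LAN-only IoT collection service: subscribe to sensor readings
    # over MQTT and persist them to a local MongoDB instance. No Internet
    # connectivity is required. Hostnames, topic names, payload fields, and
    # the alert threshold are hypothetical examples.
    import json

    import paho.mqtt.client as mqtt  # pip install paho-mqtt
    from pymongo import MongoClient  # pip install pymongo

    ALERT_THRESHOLD = 60.0  # example: alert if temperature exceeds 60 C

    mongo = MongoClient("mongodb://mongo.local:27017")  # LAN-only database
    readings = mongo["iot"]["readings"]                 # long-term storage

    def on_connect(client, userdata, flags, rc):
        # Subscribe to all sensors once the broker connection is up.
        client.subscribe("sensors/+/reading")

    def on_message(client, userdata, msg):
        # Each payload is assumed to be JSON, e.g. {"temp_c": 21.5, "ts": ...}
        doc = json.loads(msg.payload)
        doc["topic"] = msg.topic
        readings.insert_one(doc)                    # store for later analysis
        if doc.get("temp_c", 0) > ALERT_THRESHOLD:  # real-time action
            print(f"ALERT: {msg.topic} reported {doc['temp_c']} C")

    client = mqtt.Client()
    client.on_connect = on_connect
    client.on_message = on_message
    client.connect("mqtt.local", 1883)  # broker on the LAN
    client.loop_forever()

Everything in this sketch stays on the local network, which is exactly the point: it is a complete IoT pipeline with no Internet dependency.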
Discussed and cited in multiple references, IoT is also referred to as:
Ultimately, CWNP has defined IoT based on historic use of the term, current use, and the practical way IoT is deployed, in the following way:
The Internet of Things (IoT) is the interconnection of things (physical and virtual, mobile and stationary) using connectivity protocols and data transfer protocols that allow for monitoring, sensing, actuation and interaction with and by the things at any time and in any location.
This is the official CWNP definition of IoT.
As you can see, the several definitions of the Internet of Things leave the term a bit vague. However, this is not the only indefinite aspect of the term! Even the statistics about the growth of IoT, or the total number of connected devices, vary between IoT reference books, research institutes, IT vendors, and evangelists. The rapid evolution of IoT may have contributed to this variation in reported growth over time. More likely, it is because different research organizations count different components as IoT.
To bring more precision to IoT, at least to the definition itself, we have to go back in history for a frame of reference, to see how IoT was shaped and how it, in turn, is helping shape the industry.
"The term IoT can most likely be attributed to Kevin Ashton in 1997 with his work at Procter and Gamble using RFID tags to manage supply chains. The work brought him to MIT in 1999, where he and a group of like-minded individuals started the Auto-ID Center research consortium."
IoT has evolved from a catchy label for an RFID system into a large ecosystem and industry, with a potential market impact estimated between $4 trillion and $11 trillion in 2015 reports. More recent reports put the market size of IoT at about $1.1 trillion by 2024, or $650 billion by 2026, again showing the difference in which solutions are counted as IoT. Yet another report predicts that the Industrial IoT (IIoT) market alone will reach $570 billion by 2024 and $1.74 trillion by 2030.
Clearly, someone is wrong on their predictions, but even the low-end predictions tell a large growth story.
Technology is acting as a catalyst, changing the way we do things to the extent that it affects the economy, society, businesses, and even individuals. We are experiencing shifts across all business models, including those in enterprise, municipal, industrial, and consumer markets alike.
Revolutions have occurred throughout history that triggered deep changes in economic systems and social structures. In a historical frame of reference, the industrial revolutions are very recent. The first industrial revolution spanned from around the middle of the 18th century for about a century; triggered by the construction of railroads and the invention of the steam engine, it ushered in mechanical production. The second industrial revolution, from the late 19th century into the early 20th century, made mass production possible, fostered by the advent of electricity and the assembly line. The third industrial revolution began in the 1960s and is usually called the computer or digital revolution because it was catalyzed by the development of semiconductors, mainframe computing (1960s), personal computing (1970s and 1980s), and the Internet (1990s).
Mindful of the various definitions, today we are at the beginning of a fourth industrial revolution, which began at the turn of the 21st century and builds on the digital revolution. It is characterized by a much more ubiquitous and mobile Internet; by smaller, cheaper, and more powerful sensors; and it is driven by AI and ML. Digital technology has fundamentally transitioned from being driven by its hardware, software, and networking cores to more sophisticated, integrated, market-driven platforms, and it is, as a result, transforming societies and the global economy in a break from the third industrial revolution. Industry 4.0 is a term coined to frame discussions of how this will revolutionize the organization of global value chains.
At the core of the fourth industrial revolution are smart connected machines and systems. This is our IoT building block, with the culmination of the technologies of all the previous industrial revolutions gearing us toward it. IoT thus serves as a building block of digitization, helping the product of the third industrial revolution, the Internet, move into its next phase.
In addition, the more recent buzz-phrase is Industry 5.0, which does not necessarily introduce new technology per se but changes the heart of technology. The point of Industry 5.0 is to implement technology for the conservation of resources, the reduction of energy consumption, and the improvement of safety and health for people. It takes artificial intelligence, IoT, networking, security, and other technologies and uses them for the long-term benefit of the world.
"Industry 4.0 is the complete transformation of the entire scope of industrial production through the fusion of internet and digital technology with traditional industry, being motivated by three major changes in the productive industrial world related to the immense amount of digitized information, exponential advancement of computer capacity, and innovation strategies (people, research, and technology)."
Within the historical frame of reference we started with, some key dates contributed to defining what IoT is. Many discussions and forums on the Internet argue that IoT has been developing for some time, following the third industrial revolution. Many milestones and dates mark significant pivot points in the history of IoT, fast-tracking hardware changes, software introductions, connectivity, and the birth of whole new platforms that, in turn, have bolstered the advancement of IoT.
Consider the following dates:
A more recent and general definition specifies the IoT as:
"A vision of a world in which billions of objects with embedded intelligence, communication means, and sensing and actuation capabilities will connect over IP networks."
This is all happening and will continue to evolve due to the interconnection of seemingly disjointed intranets with strong horizontal software capabilities. Evolution is a very important keyword here.
To come up with a simple definition, we can identify IoT as enabling the utilization and transmission of data from devices (sensors, controllers, actuators) that have been enabled with connectivity, that have constrained resources (size, processing, memory), and that require efficient utilization of those resources to communicate with other devices, humans, and systems, in order to enable informed decisions supporting business objectives.
And remember, the official CWNP definition:
The Internet of Things (IoT) is the interconnection of things (physical and virtual, mobile and stationary) using connectivity protocols and data transfer protocols that allow for monitoring, sensing, actuation and interaction with and by the things at any time and in any location.
Certainly, IoT devices may connect to services across the Internet, but they do not have to.
A 2019 paper titled The Internet of Things: Overview & Analysis by Dr. Sunil Taneja defines IoT strikingly without even mentioning the Internet in the first paragraph:
"The term IoT was first coined by Kevin Ashton in 1999. Internet of Things is a system of interrelated computing devices, mechanical and digital machines, objects, animals or people that are provided with Unique Identifiers and the ability to transfer data over a network without requiring human-to-human or human-to-computer interaction."
When things like household appliances are connected to a network, they can work together in cooperation to provide the ideal service as a whole, not as a collection of independently working devices.
The next paragraph notes that the Internet is one of the most powerful creations by human beings, and that with the advent of IoT, the Internet becomes more favorable to a smart life in every aspect.
So, yes, the Internet can benefit IoT and IoT can benefit the Internet, but much of IoT is just about interrelated computing devices communicating with each other on some network.
On a final note: IoT ≠ AI. Some IoT solutions utilize AI or machine learning (ML), but not all. Additionally, IoT does not equal Big Data, business analytics, decision support, or operations management. IoT is about connected things that do not receive their data through direct user input. Simple as that.
What you do with those connections will be driven by your value proposition. Some implement IoT just to do over-the-air firmware updates. Others use it to feed data into decision support systems. Others use it to control machinery, valves, temperatures, and more.
How you use IoT is different from what IoT is. IoT is just about connecting the things. Once connected, the options for use are tremendous.
Many other verticals and use cases exist for IoT. It is being utilized in different industries and for various use cases, and the goals vary between utilizers, ranging from optimizing operations, reducing risk, enhancing a service, developing a new product, and enhancing outputs, to improving personal care. These goals apply to enterprises and individuals alike. For example, in optimizing operations, an individual might leverage a home assistant platform to plan the day efficiently, while an industrial factory might pursue the same goal to reduce downtime and save costs. These are the most general goal categories, but we will discuss vertical-specific use-cases and their goals in the next sections.
We don't mean to add to the uncertainty of IoT given its broad definition. Many of the following verticals and their use-cases overlap in multiple ways. For example, a use-case for tracking school students on school buses could be classified under education and under transportation at the same time, and the tracking technologies used could be the same as or similar to those used to track industrial assets. In other cases, a specific vertical or industry category of use-cases can be discussed with a more specialized approach. For example, IoT in transportation can be split across fleet management, whose use-cases overlap with some of Industrial IoT, mass transit systems, and connected car use-cases. The connected car could fill a whole dedicated section of its own!
"More and more businesses are adopting and accepting industrial automation on a large scale, with the market for industrial robots expected to reach 73.5 billion US dollars by 2023."
Having tackled the definition and history of IoT, we have already shown how IoT is one of the major building blocks of digital transformation and the fourth industrial revolution. It is therefore natural to start with the first business vertical that employed IoT and sparked its expansion and advancement into other verticals.
Two important groups exist in industries: Information Technology (IT) and Operations Technology (OT). OT has traditionally dominated industrial environments, with hardware and software tools for physical monitoring that output data including, but not limited to, metrics, uptime, real-time monitoring data, system response, and system/environment safety. IT, on the other hand, is focused on technical services, data delivery, systems security, and logical segmentation of the industrial setup. OT traditionally used unique hardware and software not found in enterprise networks, while IT, within industry, has used the same hardware and software used in enterprise networks. IoT is changing much of this divide.
IoT has come a long way since it was a term coined to get attention to what then was an RFID integration into the supply chain. Advances in processing, memory, data storage, connectivity and more, have pushed OT and IT forward, but IoT has evolved as well to bridge OT and IT together.
IIoT utilizes ruggedized devices and systems that are usually constructed to operate in harsh environments. In IIoT, specifically in manufacturing and the supply chain, ruggedization serves the main goal of running systems with a high mean time between failures (MTBF). IoT in the industrial verticals communicates over a diverse range of wired and wireless protocols and is implemented with varying application layer communication protocols. Real-time, low-latency communications are essential for real-time decisions. Another characteristic of IIoT is that it is often deployed in siloed setups without connectivity to the Internet, with the core objective of specific operations such as running critical machine control feedback and monitoring.
The IIoT category covers many industries including automotive, bottling, food processing, oil, gas, manufacturing, mining, paper, petrochemical, pharmaceutical, power generation, power distribution, pulp, transportation, water treatment, and more. Many separate oil & gas from industrial networks, and oil & gas does have unique attributes, but for our purposes we consider them together, as both require robust hardware, stable software, and secure communications to ensure continued operations. They are, however, different enough that we also look at oil & gas separately in the next section.
This diversity means that there isn't a single go-to IoT solution for IIoT but a variety of solutions that could utilize different sensors, communications protocols, services, and applications. Each could require its own design approach, security standards, certification program, regulatory restrictions, health & safety standards, performance monitoring, and construction standards. In fact, you will come to see that some protocols are more common in manufacturing while others are more common in oil & gas or power distribution.
We can go deeper to classify IIoT in more specific categories, each with its own characteristics and requirements.
Manufacturing has driven a great amount of the industrial IoT use-cases. Most of us can envision robotic systems, assembly lines, manufacturing plants, and even manufacturing design and operation engineers working on their enhanced production plans. Different types of connected sensors and actuators are driving these systems with the general objectives of controlling costs and improving efficiency. Worker safety is also of increasing importance.
According to an IoT study run by the McKinsey Global Institute in 2015, the total economic impact of IoT in industrial worksites and factories in 2025 will be $1.3T-$4.6T. The top areas they identified include operations optimization, predictive maintenance, inventory optimization, and health and safety.
Economic changes have shifted industry models due to global competition, changing the primary focus toward innovation and improved business models. This change is primarily being led by digitization and IoT. With the production system no longer a competitive advantage, IoT data is being used to boost manufacturing performance.
Connectivity, speed, accessibility, and anchoring are the four main areas in which IIoT is expected to enhance production systems, according to a more recent study by McKinsey. The following descriptions are provided:
Oil & gas are considered some of the most critical resources in the world, not only as sources of energy that keep it running but also as feedstock for basic manufacturing materials. Manufacturing environments require energy that, today, is predominantly provided by the oil & gas industry.
The main focus of utilizing IoT in oil & gas companies is to control production (reduce cost, improve production efficiency/speed, better utilize facilities) while maintaining improved working conditions pertaining to health and safety in dangerous environments.
When we discuss oil & gas as an industry, it is not limited to specific areas like oil fields or processing plants. It usually covers all the locations in the value chain through which oil and gas are transformed from primary resources into final products. Oil & gas industrial locations extend from rigs for exploration and resource extraction, to offshore shipping, to factories for processing/refining, to pipelines for distribution, and on to other distribution/selling networks, including transportation.
Within the industry, the terms upstream, midstream, and downstream refer to different points from the natural source to the consumer (industrial or individual). Upstream is the original source and includes exploration and production of oil and gas. Midstream includes both the delivery pipeline from the sources and the refinement of the resources. Downstream includes consumer and wholesale sales and distribution to the eventual end use cases.
IIoT in oil & gas use-cases include:
For all of the above different characteristics and challenges of oil & gas locations, IoT is used to ensure:
Some IoT systems are deployed for use-cases where the human body is the main environment. These applications fall into two broad categories that adjust human habits: enhancing productivity and improving health. The contrast with other IoT applications is that we are not turning an oil & gas valve on or off; instead, information and reminders are provided so that insightful decisions can be made about exercise, the general state of health (sleep, heartbeat, movement), and other productivity- and health-related habits. Additionally, medicine may be administered automatically, implementing actuation.
This is accomplished through single devices in some cases, which does not technically implement a Wireless Body Area Network (WBAN). In other cases, multiple sensors and actuators are ingested, implanted, or attached that interconnect with each other and/or a central "hub" that communicates with the health network. The latter case is a WBAN and, effectively, makes the human a connected object.
Wearables cover any devices that come in contact with the human body (strapped or attached) and that collect the individual's state, communicate information, alerts, and triggers, or perform other functions on or around the individual. But IoT for humans doesn't stop at wearables.
IoT devices for human health can include implantables, ingestibles, and injectables, such as nanobots that can clear arteries or help detect early-stage cancer. Once these devices have cleared clinical trials and are properly certified/approved, we will see big adoption with significant impact on human health.
The largest source of value would be using IoT devices to monitor health and treat illness ($170 billion to $1.1 trillion per year). The value would arise from improving quality of life and extending healthy life spans for patients with chronic illnesses and reducing cost of treatment.
The second-largest source of value for humans would be improved wellness—using data generated by fitness bands or other wearables to track and modify diet and exercise routines.
The use-cases for human IoT devices would include:
Other wearable devices now extend to different sensor structures, such as those embedded into glasses, clothes, and wearable goggles for virtual reality (VR)/augmented reality (AR), and advancements in sensor hardware capabilities are enabling these wearables to serve attractive use-cases in education, entertainment, manufacturing, and health. Apple rolled out its electrocardiogram (ECG or EKG) feature at the beginning of 2019, and it has already effectively flagged symptoms in a number of users so they could seek preventive treatment.
The healthcare industry is often seen as a leader in the IoT space (sometimes called Health IoT (HIoT) or IoT Health (IoTH)) because of the use of wireless technology in health monitoring solutions. Twenty years ago, some hospitals were implementing mobile health carts. Today we have wearables, ingestibles, injectables, and more used in remote and local health monitoring.
The use-case areas include monitoring and managing illness and improving wellness, including preventive care. With added focus on optimizing spending and increasing profits, more attention is being given to increasing situational awareness surrounding the patient, hospital operations, asset management, predictive maintenance, and positive postoperative outcomes. In brief, these use-cases can be highlighted as:
Digitization has shifted the traditional brick-and-mortar scene of retail and shopping onto new frontiers. Online retailers are managing to deliver ever more enhanced "digital experiences" for shoppers, and traditional retailers are trying to catch up. Since shoppers seek out experiences, not just products, retailers are looking for new ways to engage customers while transforming operations and pursuing the advantages of digitization across the whole sales cycle, not just for the shoppers themselves.
McKinsey estimated back in 2015 that the total economic impact of IoT in retail would be $410B-$1.2T by 2025, making it one of the top spaces for IoT solutions. In 2021, the McKinsey report estimated that IoT in general could enable from $5.5 trillion to $12.6 trillion in value globally. They also updated their retail projection to $650 billion to $1.15 trillion by 2030, a bit of a slowdown for this sector, but still tremendous growth. Remember, this is the economic value of IoT, not the number of IoT devices.
IoT in retail is helping in different use-cases, including mobile engagement that brings the online shopping experience into the brick-and-mortar setup with features including:
The rate of technological development in education is significantly slower than in the wearables or consumer product industries because funding fluctuates. The low end of this fluctuation is evident in schools that try to utilize their infrastructure beyond its capitalization period and even beyond end-of-support dates. A primary objective of IoT in education is to address the technological challenges involved in learning and collaboration solutions. Long IT infrastructure lifecycles result in different solutions utilizing different protocols and software, which in turn challenges the IT staff supporting any school or university. IoT must overcome the challenges posed for data access, device management, networking, and security through interoperability and simplification, instead of adding complexity. IoT in education, for learning and collaboration objectives, can:
With the advent of the third and fourth industrial revolutions, we have strived to enhance the way we communicate and interact. More efficient means of transportation have also revolutionized communication, making in-person interaction more cost-effective and practical than ever. At the same time, the exponential growth of the means of transportation has brought with it many challenges around safety, traffic management, and efficiency, as well as environmental challenges. IoT is already helping overcome some of these challenges through traffic optimization, trip conditions, vehicle state, driver experience, and infrastructure management. Whether on the smaller scale of a connected vehicle/car or a massive one like inter-city or inter-country railroads, IoT brings the advantages of data collection, shares it across distances at any transportation-solution scale, and provides the tools to process that data, thereby optimizing the underlying transportation system.

The car reflects the advancement of the industrial revolutions in a single human invention, having shifted from mechanical to electrical over the past few decades. A car by itself is an island of data, collected from different systems and sensors within the car and shown to the user on its dashboard and digital screen(s). Wireless communications can enable the car to integrate within the IoT transport system so it can share valuable information with the driver, other vehicles, service providers (car dealerships and third-party insurance, safety, and security systems), and the transportation infrastructure. Some of the use-cases for connected vehicles with IoT are:
IoT enables offices, warehouses, arenas, and other buildings to become smart buildings. The underlying IT infrastructure can be integrated and used to connect IoT devices for different work functions including physical access, security, printing, scanning, space management, and environmental monitoring and management, among others. The objective of making the building smart is to enhance the experience of the end-users (employees and visitors) as well as line-of-business owners and decision-makers. Enhancing the experience is not just about providing better interactions; it is also about increasing productivity while decreasing the cost of resources. Use-cases for IoT around the workplace include:
With a large population's demand for cultivated products, including livestock, technology is also a major player in the enhancement, monitoring, and sustainability of many agricultural practices. Of all the natural resources and efforts that go into agriculture, water is the most valuable, and properly allocating water for raising livestock and for irrigating crops guarantees that water usage is optimized. IoT can help in this area as well as with other conditions including:
If we combine all the above use-cases from different verticals, we could create a larger city-wide ecosystem. Many governing authorities have had the vision of integrating IoT technologies into districts, towns, municipalities, cities and even countries with different goals towards achieving more efficient metropolitan management, economic development, sustainability, innovation, and citizen engagement.
Smart cities can combine use-cases from different verticals to achieve city-wide managed, monitored, and facilitated IoT use-cases such as:
These are just a few of the use-cases that might borrow from different verticals to extend to other areas, all with the objective of enhancing the quality of living for the people while making cities more sustainable.
The core of IoT is enabling devices, sensors, and things in general with connectivity to communicate relevant data to and from connected devices, systems, and platforms. At higher layers, it is about using this data for effective decisions, processing, and operations. To realize the benefits of IoT, a good architecture or model is key. This section will introduce the concepts of IoT models and architectures and the roles they play in successful IoT implementations.
Connectivity, commonly wireless, including Wi-Fi and other technologies, is a main building block of the IoT stack. But there are other components too, without which IoT would not be achieved. Different sources from literature, research, and vendor consortiums in IoT provide different depictions of IoT architecture reference models, which speak to the importance of the higher layers to IoT success. For example, the IoT World Forum (IoTWF) defined the reference model shown in Figure 9.4 in 2014.
We can look at a different representation from the IoT-A, which defines the key building blocks that create the foundations of IoT. Based on an experimental paradigm, it combines top-down reasoning about architectural and design principles and guidelines with simulation and prototyping, resulting in specific technical consequences of architectural design choices. The result is a reference model composed of sub-models that set the scope for the IoT design space and address the consequence-based technical guidelines and perspectives. Figure 9.5 shows what this model looks like.
Other reference models could be considered, from the Purdue Model for Control Hierarchy (covered further in the CWIIP Study and Reference Guide) to the Industrial Internet Reference Architecture (IIRA) by the Industrial Internet Consortium (IIC) (covered further in the CWIDP Study and Reference Guide), as well as others. While the models vary in their depiction, from high-level approaches to more detailed and interlinked ones like the IoT-A model above, the main purpose of a reference model is to establish common ground for focused development in an exponentially growing space. Technically, this means setting models that tackle scalability, interoperability & integration, communication, governance, standards compatibility, and more.
While different entities or sources give us different reference models, they all include the basic components, requirements, and answers to the challenges that IoT must overcome in order to operate. To simplify the representation, a 3-layer model will be used to represent the core IoT functional stack. Figure 9.6 illustrates this model. These components, along with wireless communication, are the Perception layer (the sensors and actuators), the Network layer (connectivity and data transport), and the Application layer (data processing, analysis, and decision support).
Since a Certified Wireless IoT Solutions Administrator has to be able to implement, administer, and troubleshoot technologies that rely heavily on wireless, the Network layer of IoT is our most significant focus, and it is where we discuss the underlying wireless communication technologies that a CWISA must know. These are covered next, as we dive deeper into the IoT architecture requirements, including hardware, connectivity, security, and applications.
The most familiar component, responsible for the sensing/perception layer of IoT, is the IoT hardware itself. Professionals and consumers alike are fairly knowledgeable about IoT hardware, especially in non-industrial use-cases. With different sources citing expected explosive growth of up to 50 billion IoT devices by 2025, we should note the hardware characteristics and requirements in IoT.
The basic components of a wireless IoT device include a communication module for network transmissions, a microcontroller for local processing, sensors/actuators, and a power source. These are illustrated in Figure 9.7.
The communication module enables a smart device to communicate over a wired, or more commonly, wireless connection. When using wireless, this module will include a radio with capabilities in the desired frequency band and the required supported protocols.
The microcontroller defines the function and behavior of the smart device. This usually contains two different types of memory, the Read-only Memory (ROM) and Random-Access Memory (RAM). The ROM stores the software code that defines the functions that the device carries out while the RAM is utilized for temporary variables and data that is processed by the software. IoT devices can be categorized across different classes to help differentiate between them, and this is usually done based on memory and processing capabilities.
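As an illustration of class-based categorization, IETF RFC 7228 sketches three classes of constrained devices by approximate RAM (data) and flash/ROM (code) size. The helper below encodes those rough figures; the boundary logic and labels are simplified assumptions for the example, not a normative classifier.

    # Rough constrained-device classes per IETF RFC 7228 (illustrative only).
    # Class 0: well below ~10 KiB RAM / ~100 KiB flash
    # Class 1: around   ~10 KiB RAM / ~100 KiB flash
    # Class 2: around   ~50 KiB RAM / ~250 KiB flash
    def classify(ram_kib: float, flash_kib: float) -> str:
        """Place a device into a rough RFC 7228 class (simplified boundaries)."""
        if ram_kib < 10 and flash_kib < 100:
            return "Class 0: too constrained to securely join the Internet directly"
        if ram_kib <= 10 or flash_kib <= 100:
            return "Class 1: can run constrained stacks such as CoAP"
        if ram_kib <= 50 or flash_kib <= 250:
            return "Class 2: can run most lightweight protocol stacks"
        return "Unconstrained: gateway/laptop class"

    print(classify(4, 48))     # -> Class 0
    print(classify(48, 240))   # -> Class 2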
The sensors/actuators give the smart object an interface to collect data and interact with the systems. It is this component that is the source of knowledge to allow for the intelligence that makes the thing smart.
Finally, the power source provides power for the electric/electronic operation of the smart object. Different IoT devices require different forms of input power. The most common power source is a battery, usually a Lithium cell, but there are other sources such as:
Most consumer IoT devices utilize rechargeable batteries or plug into a virtually limitless source of power. A smart bulb plugs into an AC power fitting, whereas a fitness tracker runs on a Lithium cell and is usually recharged every few days.
Some of the most common IoT hardware platforms available on the market today serve multiple purposes. The platforms can be IoT gateways, single-board computers (SBCs), and development kits. These platforms are designed so that IoT solutions can be built for different purposes like research and education, industrial testing and prototyping, DIY projects, and other general purposes. Some of the companies that provide these platforms are:
When selecting a platform, multiple factors can help in the decision. From a technical perspective, those could be:
From a business perspective, those could be:
It is also important to consider how you will update firmware or software in the IoT devices. Less-capable devices may require that you connect them to another system, such as a laptop, to update the firmware/software. More-capable systems may support Firmware over the Air (FoTA) updates. FoTA allows IoT devices and wireless sensors to be updated either manually or automatically through the radio interface. When selecting IoT devices, be sure to consider support for this feature.
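As a sketch of what a FoTA-style flow might look like from the device side, the following assumes the device can reach an update server over HTTP and that the server publishes a JSON manifest with version, url, and sha256 fields. The URL, the manifest format, and apply_firmware() are hypothetical; a real product would follow its vendor's documented mechanism and would typically also verify a cryptographic signature, not just a digest.

    # Hypothetical FoTA-style update check: poll a manifest, compare versions,
    # download the image, and verify its SHA-256 digest before flashing.
    import hashlib
    import json
    import urllib.request

    CURRENT_VERSION = "1.0.3"
    MANIFEST_URL = "http://updates.example.local/device-x/manifest.json"  # assumed

    def newer(a: str, b: str) -> bool:
        """True if dotted version a is newer than b (numeric, not string, compare)."""
        return tuple(map(int, a.split("."))) > tuple(map(int, b.split(".")))

    def apply_firmware(image: bytes) -> None:
        # Device-specific: write to the inactive slot and reboot into it.
        raise NotImplementedError

    def check_for_update() -> None:
        with urllib.request.urlopen(MANIFEST_URL) as resp:
            manifest = json.load(resp)  # {"version": ..., "url": ..., "sha256": ...}
        if not newer(manifest["version"], CURRENT_VERSION):
            return  # already up to date
        with urllib.request.urlopen(manifest["url"]) as resp:
            image = resp.read()
        if hashlib.sha256(image).hexdigest() != manifest["sha256"]:
            raise ValueError("firmware digest mismatch; refusing to flash")
        apply_firmware(image)  # only verified images reach the flash step

    check_for_update()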
The nature of IoT requires minimal hardware and power consumption with optimized communications. This poses some trade-offs for the hardware and software, and at the same time raises the risk of deploying a less secure network of smart objects.
Because of the small form factor of the hardware and the limitations it enforces, there can be constraints on computational power and memory. In addition, saving energy means relying on lightweight operations and minimizing communication between IoT nodes.
Security must be enforced at all layers, from the hardware layer to the application layer (from the Perception to the Application layer in IoT terms). The span from physical access at the IoT hardware to insecure web or app interfaces is a big range, and it presents an easy attack surface if not properly secured.
Key concerns include:
Without these, we risk:
Other examples of security threats:
Threats can also relate to the physical nature of smart IoT devices and their deployment locations: the need for physical accessibility, small form factors, provisioning methods, connectivity setup, and wireless communication.
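One concrete mitigation that cuts across several of these threat areas is transport security on the device's network connection. As a sketch, and reusing the hypothetical LAN broker from the earlier MQTT example, an MQTT client can be required to connect over TLS with a per-device certificate instead of in the clear; the certificate paths, hostname, and client ID below are illustrative assumptions.

    # Hardened variant of the earlier MQTT client: TLS to the broker plus
    # per-device certificate authentication. Paths and hostnames are examples;
    # 8883 is the conventional MQTT-over-TLS port.
    import paho.mqtt.client as mqtt  # pip install paho-mqtt

    client = mqtt.Client(client_id="sensor-0042")
    client.tls_set(
        ca_certs="/etc/iot/ca.pem",      # trust anchor for the broker certificate
        certfile="/etc/iot/device.pem",  # this device's certificate
        keyfile="/etc/iot/device.key",   # and its private key
    )
    client.tls_insecure_set(False)       # enforce broker hostname verification
    client.connect("mqtt.local", 8883)   # TLS port instead of cleartext 1883
    client.loop_start()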
The convergence of terminals and servers connected into a single global network is what we know as the Internet. In the same manner, the convergence of devices enabled with data and connectivity, integrating into the web of all devices and platforms, is how IoT is coming together. Since the Internet has paved the way for the most common protocols based on IP communications, the convergence of IoT is also happening on an IP-based architecture in many cases. Readers of this book are assumed to be familiar with the OSI model at this point as a guiding principle for communications. To dissect IoT communications into its different building blocks, or layers, and to focus more on the wireless aspect, we can map it to the OSI model.
From an IoT perspective, we are most interested in how wireless communication technologies for IoT operate. TCP/IP is the basis of Internet communication, so referring to an IP-based reference model of the Internet helps simplify the OSI representation further, as it is the more popular stack model. The central interest, the "Network" layer represented in the IoT model above, extends from the "Perception" layer, starting with hardware, sensors, and the physical layer, and connects up to the "Application" layer. The IEEE breaks down the lower layers of different wireless technologies to clearly describe interoperability. Accordingly, we can look at how the IoT "Network" layer relates to the lower layers of the IEEE models of wireless technologies, while its other layer components link to the Physical layer below and the Network/Internet and Application layers above.
Many wireless technologies are utilized in IoT, and each protocol has a stack with common features. At the lower layers, the PHY and MAC are standardized by neutral standards bodies (e.g., IEEE and ITU), whereas the upper layers are maintained by an industry group. Being based on neutral standards means the lower layers are readily available for developers to use, while the industry-group protocols must be licensed or paid for in order to be accessed, used, and certified for operation and interoperability by developers.
In many cases, it may be essential to implement Virtual LANs (VLANs) on the Ethernet side of the network to segregate IoT or wireless sensor traffic from the rest of the network. Often, IoT and sensor devices need only communicate directly with a cloud service. If this is the case, use VLANs to allow the IoT or sensor traffic to reach the Internet while preventing those devices from being used as a point of ingress to your network.
The wireless technologies discussed in this chapter are not necessarily IP-based. We are simply adopting the TCP/IP and OSI models of representing different communication layers to map the components of the several wireless technologies to a familiar reference. Most wireless personal area network (WPAN) protocols, like Bluetooth, Zigbee, and Z-Wave, do not inherently communicate over TCP/IP but have layered stacks similar to a true TCP/IP protocol. However, IP-based adaptations of a few of these protocols do exist (e.g., Zigbee IP, 6LoWPAN, and IP over Bluetooth).
Why do we have so many standards and protocols that different IoT devices and networks can use for communication? The reason is that different criteria can lead to the selection of a specific communication standard. We have already discussed how IoT hardware can be constrained to a specific form factor, power budget, chipset, and other defining capabilities. Wireless communication is one of these capabilities, and the same factors come into play when a specific communication technology is selected, which in turn affects the hardware itself. The most important factors to consider for wireless communications for IoT are:
Frequency:
The choice between licensed and unlicensed bands greatly affects the technology, complexity, and service guarantee of the IoT communications.
The unlicensed industrial, scientific, and medical (ISM) frequency bands are free to utilize but are regulated by national and regional authorities and bodies, which set device compliance for transmit power/gain, channel allocation, channel selection and hopping mechanisms, duty cycles, and dwell times. This makes the ISM bands easy to deploy in, but it usually restricts the technology to short- or medium-range wireless communication. At the same time, there is no quality-of-service guarantee for the frequency space, since it will be busy with interference from different standards and many other deployments. Wi-Fi, Bluetooth, and 802.15.4-based protocols are some examples of wireless technologies that can utilize the unlicensed bands.
The licensed bands are usually regulated for exclusivity, but to use them, a license must be provisioned and paid for in order to enable operations. This adds logistical and commercial overhead to any solution. WiMAX, cellular, and narrowband IoT (NB-IoT) are examples of technologies that utilize licensed bands.
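To make the duty-cycle restrictions mentioned above concrete, here is a small worked example. It assumes a 1% duty cycle (a figure used in parts of the European 868 MHz SRD band) and 50 ms of airtime per message; both numbers are assumptions chosen for illustration.

    # What a 1% regulatory duty cycle means for a sensor's message budget.
    DUTY_CYCLE = 0.01   # assumed: 1% of each hour may be spent transmitting
    AIRTIME_S = 0.050   # assumed: on-air time of a single message

    tx_budget_s = DUTY_CYCLE * 3600               # 36 seconds of airtime per hour
    msgs_per_hour = int(tx_budget_s / AIRTIME_S)

    print(f"TX budget: {tx_budget_s:.0f} s per hour")   # -> 36 s
    print(f"Max messages: {msgs_per_hour} per hour")    # -> 720

Designers must budget message rates and payload sizes against limits like these, which is one reason IoT protocols favor short, infrequent transmissions.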
Range:
The distance over which the IoT components must communicate greatly affects the choice of the underlying wireless technology. This scales from short range of a few meters with technologies like Bluetooth, through medium-range technologies extending to tens and hundreds of meters, such as 802.11 and 802.15.4, to long-range technologies such as cellular networks and LPWAN protocols extending for kilometers.
Power Consumption:
Regardless of the power source, there is a balance between mobility and a reliable power source. A continuous or perpetual power source means a wired power feed, such as AC, DC, or PoE. While this provides a significant benefit over a temporary power source such as batteries, which must be recharged or serviced, it ties down the mobility of any IoT node wherever that flexibility is needed. For battery-powered devices, this ties back to consumption and connectivity constraints, since lower-consumption technologies must be used.
Topology:
The communication topology also affects the choice of technology as well as the power source and consumption.
The above factors are considered for lower layer IoT wireless access protocols. The protocols that determine the majority of wireless communication technologies in IoT are:
We will next highlight technical details about the lower protocols and the industry standards that they support. Some of the factors for choosing a specific protocol or technologies outlined before will also be highlighted for each of these different standards.
Ericsson, along with IBM, Intel, Nokia, and Toshiba, formed the Bluetooth SIG to develop the Bluetooth standard, which was approved by the IEEE 802.15 committee in 2002 as a WPAN based on the Bluetooth protocol, dubbed 802.15.1-2002. In 2005, the IEEE amended the standard with additional improvements as 802.15.1-2005. Since 2005, the Bluetooth SIG has maintained the standard as well as the upper layers that serve different applications.
The lower layers of Bluetooth, corresponding to the PHY and MAC layers, are the radio, baseband, and link manager layers.
Bluetooth has evolved over multiple versions with different enhancements. Among these, Bluetooth Low Energy (BLE, or Bluetooth LE), introduced with Bluetooth 4.0, was important for IoT since it used less energy and could be used in a range of wireless sensors and IoT devices. The most recent version of Bluetooth as of the writing of this chapter is Bluetooth 5.1.
Operating Frequency:
Bluetooth operates only in the 2.4 GHz (2.402–2.480 GHz) ISM band. Bluetooth BR/EDR uses 79 frequency channels spaced 1 MHz apart, while Bluetooth LE uses 40 channels with 2 MHz spacing (3 advertising channels and 37 data channels). Bluetooth uses Frequency-Hopping Spread Spectrum (FHSS) across these defined frequency channels as its channel hop plan.
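As a quick illustration of the BLE channel plan just described, the short sketch below derives the 40 channel center frequencies and separates the three advertising channels from the 37 data channels:

```python
# BLE channel plan: 40 channels spaced 2 MHz apart starting at 2402 MHz,
# i.e., f_k = 2402 + 2k MHz for k = 0..39. RF channels 0, 12, and 39
# (2402, 2426, and 2480 MHz) carry advertising (logical channels 37-39);
# the remaining 37 channels carry data.
ble_centers_mhz = [2402 + 2 * k for k in range(40)]
advertising_mhz = {2402, 2426, 2480}
data_channels = [f for f in ble_centers_mhz if f not in advertising_mhz]
print(len(data_channels))  # 37
```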
Transceiver:
Bluetooth identifies different "classes" of Bluetooth transmitters. These classes dictate the limits on the transmit power. The transmit power for BLE ranges from 0.01mW to 100mW.
The Bluetooth standard specifies -70 dBm as the minimum reference receiver sensitivity. Receiver sensitivity is the minimum signal strength at the receiver at which the signal can still be decoded to a specified quality. At this minimum, the receiver should achieve a Bit Error Rate (BER) of 0.1%, or one error in 1,000 bits.
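Transmit power and sensitivity figures like these are converted between milliwatts and dBm with the standard logarithmic formula, as in this small sketch (the example values come from the BLE power range and reference sensitivity quoted above):

```python
import math

# Standard conversions between milliwatts and dBm:
#   P(dBm) = 10 * log10(P / 1 mW)
def mw_to_dbm(p_mw: float) -> float:
    return 10 * math.log10(p_mw)

def dbm_to_mw(p_dbm: float) -> float:
    return 10 ** (p_dbm / 10)

print(mw_to_dbm(0.01))   # -20.0 dBm (low end of the BLE power range)
print(mw_to_dbm(100))    #  20.0 dBm (high end of the BLE power range)
# At the -70 dBm reference sensitivity, a BER of 0.1% means on average
# one errored bit per 1,000 bits received.
```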
Range:
The range is specified as 1 to 100 meters, depending on the Bluetooth device class. It can vary, however, with signal strength, receiver sensitivity, noise, and whether the proper error/control functions and other enhancements or protocol versions are in use. For example, the LE Coded enhancement introduced in Bluetooth 5 adds forward error correction (FEC), which lowers the data rate but makes communication less susceptible to errors. This can extend the range up to 4 times, with tests showing reception of Bluetooth notifications on a mobile phone at a distance of 350 meters.
Originally created based on the lower layers of Z-Wave, ITU G.9959 is not a wireless IoT technology by itself, but a standard to guide the conformance of technologies. The International Telecommunication Union created the standard to allow interoperability among different hardware implementations of "short-range narrow-band digital radio communication transceivers."
Operating Frequency:
ITU G.9959 operates on sub-1 GHz bands. Different frequency ranges are allocated in different regulatory domains, which prevents devices from being used freely across regions and requires region-specific ITU G.9959 devices. ITU G.9959 can operate in the 900 MHz ISM band in the Americas and Australia, while it operates in the 800 MHz Short Range Devices (SRD) band in Europe. The advantages of operating in these lower frequency bands are avoiding the congested 2.4 GHz ISM band and better propagation characteristics.
Transceiver:
When calculating a link budget for proper ITU G.9959 operation, the standard defines three data-rate profiles (R1 at 9.6 kbps, R2 at 40 kbps, and R3 at 100 kbps) at the nominal power level with corresponding minimum receiver sensitivities, as long as operation stays within regulatory bounds.
Range:
Depending on the link budget calculations with the given noise floor and target data rates, technologies that utilize ITU G.9959, like Z-Wave, can reach a range of about 30 meters indoors, extending to 100 meters in outdoor deployments.
Similar to ITU G.9959, IEEE 802.15.4 is a standard upon which many communication protocols used in IoT are based. Throughout its lifetime, the standard has been updated, with the majority of additions covering channel planning, modulation schemes, new waveforms, and other features. The standard makes use of different frequency band allocations that make up its specified 27 channels, with the trade-offs for each of these bands discussed below.
Operating Frequency:
IEEE 802.15.4 can operate in the 2.4 GHz ISM band as well as the sub-1 GHz unlicensed bands. Operation in the 2.4 GHz band makes it possible for vendors to manufacture IoT devices with transceivers that can be used globally. This has the drawback of operating in a highly congested band, but the protocol allows choosing among up to 16 channels. Sub-1 GHz operation provides the same advantage as ITU G.9959, less congested channels, but is restricted by the frequency bands specified in different geographies.
Transceiver:
Receiver sensitivity depends on the modulation scheme and operating frequency band, as listed in the following table:
Band | Modulation | Receiver Sensitivity | Data rate |
---|---|---|---|
2.4GHz | OQPSK | -85 dBm | 250 kbps |
Sub-1GHz | BPSK | -92 dBm | 20 kbps |
Sub-1GHz | ASK | -85 dBm | 250 kbps |
Sub-1GHz | OQPSK | -85 dBm | 100 kbps |
Range:
Depending on the link budget calculations with the given noise floor and target data rates, technologies that utilize 802.15.4, like Zigbee, can reach ranges of roughly 10–100 meters.
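The range figures quoted for G.9959 and 802.15.4 both fall out of this kind of link budget arithmetic. The sketch below shows one common way to turn a link budget into a distance estimate using a log-distance path loss model; the path loss exponent of 2.7 is a rough indoor assumption, so treat the output as an order-of-magnitude estimate only:

```python
import math

def estimated_range_m(tx_dbm, rx_sens_dbm, freq_mhz, n=2.7):
    """Estimate range from a link budget using a log-distance path
    loss model: PL(d) = FSPL(1 m) + 10 * n * log10(d). The exponent
    n ~ 2.7 is a rough indoor assumption."""
    # Free-space path loss at a 1 m reference distance:
    # FSPL(dB) = 20*log10(d_m) + 20*log10(f_MHz) - 27.55, with d_m = 1.
    fspl_1m = 20 * math.log10(freq_mhz) - 27.55
    link_budget = tx_dbm - rx_sens_dbm
    return 10 ** ((link_budget - fspl_1m) / (10 * n))

# A 0 dBm 802.15.4 transmitter at 2.4 GHz with -85 dBm sensitivity:
print(round(estimated_range_m(0, -85, 2400)))  # ~46 m, within 10-100 m
```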
The role of the IEEE 802.11 standards is to replace the wired cabling that 802.3 uses to create a local area network (LAN). First established in 1997, 802.11 has since undergone many revisions and enhancements. 802.11 is commonly referred to as Wi-Fi, which is the certification maintained by the Wi-Fi Alliance (WFA). Although the terms are used interchangeably, Wi-Fi does not utilize all of the IEEE 802.11 specifications defined by its committees.
For example, on the PHY layer, 802.11 can operate over different frequencies including the 900 MHz, 2.4 GHz, 3.6 GHz, 4.9 GHz, 5 GHz, 5.9 GHz, and 60 GHz bands. The majority of WFA-certified Wi-Fi operates in the 2.4 GHz, 5 GHz, and 60 GHz bands. As outlined earlier, 802.11 operates at the lower PHY and MAC layers of the communication stack. Each new 802.11 standard/revision brings changes to the PHY and MAC layers, with new frequency spreading functions, modulation and coding schemes, rates, range, power consumption, topology capabilities, and other factors that determine the best 802.11 standard for each application.
More 802.11 and Wi-Fi details are covered in the CWNP Certified Wireless Network Administrator Official Study Guide.
With hundreds of companies backing the standard through the Connectivity Standards Alliance (CSA), this technology has been driven toward interoperability between different vendors as a major solution for interconnecting IoT devices. Although many people refer to Zigbee and 802.15.4 as the same thing, remember that Zigbee is a standard that makes use of a subset of the 802.15.4 specifications. This is analogous to the Wi-Fi Alliance certifying Wi-Fi based on the 802.11 protocols. The certification guarantees interoperability between devices manufactured by different vendors. However, the CSA goes well beyond what the Wi-Fi Alliance does, in that it defines the entire protocol stack.
Zigbee is tailored for sensors and devices with low bandwidth and power needs. You can usually find these devices in home automation and smart energy devices and systems.
While the lower layers of Zigbee are built on top of 802.15.4, Zigbee specifies the network and security layer and the application support layer, which tie the lower PHY and MAC layers to networking and to the upper application profile layers. These application profiles are pre-defined based on specific industry use cases, but vendors can also create their own customized profiles.
While the Zigbee Alliance has guaranteed interoperability between Zigbee devices from different vendors, the same has not been true with other IoT solutions. This gave birth to Zigbee IP to support open standards.
Zigbee IP is built on the same 802.15.4 lower layers (PHY and MAC), but it also incorporates IP, TCP/UDP, and other IETF standards in the upper network and transport layers. The layers specific to Zigbee IP sit only at the top of the protocol stack, for applications.
To provide integration and interoperation with any other 802.15.4-based IoT network based on open and current standards coming from the IETF (Internet Engineering Task Force), Zigbee IP supports 6LoWPAN (IPv6 over Low-Power Wireless Personal Area Networks) as an adaptation layer for fragmentation and header compression schemes.
"At the network layer, all Zigbee IP nodes support IPv6, ICMPv6, and 6LoWPAN Neighbor Discovery (ND), and utilize RPL (IPv6 Routing Protocol for Low-Power and Lossy Networks) for the routing of packets across the mesh network. Both TCP and UDP are also supported, to provide both connection-oriented and connectionless service."
All of these are aimed at providing lower-bandwidth, lower-power, and more cost-effective communications in IoT. Zigbee IP was initially designed explicitly by the CSA for the Smart Energy Profile 2.0 (SE 2.0) specification for smart metering and residential energy management systems, but it can be used in any other application that adopts it as a standards-based IoT stack.
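Because Zigbee IP rides on standard IPv6 with UDP and TCP, application code can use ordinary socket semantics. The fragment below is a minimal sketch of connectionless (UDP) delivery over IPv6 using a host operating system's socket API; it illustrates only the service model, not code that runs on Zigbee hardware, and the loopback address and port number are arbitrary:

```python
import socket

# Connectionless (UDP) delivery over an IPv6 stack, the style of
# transport Zigbee IP exposes to applications. Uses the host OS socket
# API purely as an illustration; no 802.15.4 hardware is involved.
sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
sock.sendto(b'{"temp_c": 21.5}', ("::1", 5683))  # arbitrary port
sock.close()
```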
Nest Labs, a subsidiary of Alphabet, the parent company of Google, has led the development of Thread with the aim of creating a secure, IP-based, low-power mesh networking protocol built on top of the 802.15.4 standard. The Thread Group was formed as an alliance between Nest and other industry partners to promote this protocol.
The Thread protocol stack is analogous to Zigbee IP as it is built on top of 802.15.4 and it follows the TCP/IP model. UDP fulfills the duties of the transport layer while the Internet layer is represented by IPv6 and 6LoWPAN.
For extensive information about Thread, you can download the specification document here: https://www.threadgroup.org/ThreadSpec
WirelessHART originated from the Highway Addressable Remote Transducer (HART) communication protocol aimed at industrial process measurement and control. It is a self-organizing wireless mesh network communication protocol that uses the IEEE 802.15.4 radio standard.
WirelessHART was designed by industry experts with security in mind. It is an open, multivendor, interoperable protocol that is secure out of the box, with security always on and no user configuration needed. This makes WirelessHART a simple, easy-to-use protocol designed primarily for industrial applications. You will find WirelessHART deployed mostly in Industrial IoT settings, delivering process data through a wireless field device network that comprises both OT and IT components, and leveraging HART to communicate both digital and analog data readings.
Because it is a wireless extension to HART, deploying networks that utilize WirelessHART is very easy, allowing deployers to extend industrial use cases to places that were either hard or too expensive to reach with wired equipment.
ZenSys created Z-Wave to support home automation products like lighting controls, thermostats, and garage door openers. After Z-Wave was brought to market in 2003, other companies joined ZenSys to form the Z-Wave Alliance.
The Z-Wave PHY layer focuses on low power consumption, battery saving, and propagation in indoor environments. Its MAC layer controls framing and carrier-sensing functions. These lower layers were standardized in 2012 by the ITU as the G.9959 standard.
The specifications for Z-Wave upper layers of operation for Transfer, Routing, and Application are provided by the Z-Wave Alliance.
802.11/Wi-Fi protocols face challenges for wireless communication in IoT because they can require more power, whether to connect many nodes or to achieve greater penetration, which makes them difficult for battery-operated nodes.
IEEE 802.11ah was published in 2017 as an "industrial Wi-Fi" access technology to answer those challenges. It is based on the 802.11 stack with the PHY layer adjusted to operate in the sub-1 GHz unlicensed frequency bands, while the MAC layer is optimized to support the new PHY specifications and a large number of endpoints (up to 8,192 per AP).
Region/Country | Frequency Band |
---|---|
Europe & Middle East | 863–868 MHz |
China | 314–316 MHz, 430–434 MHz, 470–510 MHz, 779–787 MHz |
Japan | 916.5–927.5 MHz |
North America & Other Asia-Pacific Regions | 902–928 MHz |
802.11ah is aimed at lower data rates, so while it utilizes OFDM like other 802.11 protocols, it can reach farther because of the lower rate requirements. The MAC is optimized with a shorter header. It also supports power saving through different control and contention mechanisms, including sectorization, where coverage areas are partitioned to limit collisions; null-data packets, which make management and control frame exchanges more efficient; the restricted access window (RAW), which reduces collisions; and target wake time, which reduces both power consumption and collisions.
As a result, 802.11ah offers a longer range than "traditional Wi-Fi" and provides excellent support for large numbers of low-power devices that need to send small bursts of data at lower speeds.
The Wi-Fi Alliance branded 802.11ah as Wi-Fi HaLow (pronounced "hey-low") and created a certification program to ensure interoperability between different vendors implementing this standard.
With the objectives of meeting IoT requirements by decreasing throughput and power consumption, while also decreasing the complexity and cost of equipment that relies on cellular technologies, the 3rd Generation Partnership Project (3GPP) set out to define a new category of LTE devices under the LTE-M 3GPP Work Item.
LTE-M went through different iterations, as well as proposals from different vendors and companies, including Ericsson, Nokia, Huawei, Alcatel-Lucent, Qualcomm, and Sigfox, along with other standardization bodies; these were finally consolidated into a single Narrowband IoT (NB-IoT) category of devices supporting Low Power Wide Area (LPWA) IoT.
"NB-IoT operates in half-duplex frequency-division duplexing (FDD) mode with a maximum data rate uplink of 60 kbps and downlink of 30 kbps. NB-IoT is defined with a link budget of 164 dB with the high link budget that should cater for better signal penetration in buildings while achieving battery life requirements."
Compared to all the previous technologies and certifications, NB-IoT is considered a long-range IoT communication technology.
"The LoRaWAN specification is a Low Power, Wide Area (LPWA) networking protocol designed to wirelessly connect battery operated 'things' to the Internet in regional, national or global networks, and targets key Internet of Things requirements such as bi-directional communication, end-to-end security, mobility and localization services.⁵⁶"
LoRaWAN defines the MAC and data-link layers of the OSI model and is maintained by the LoRa Alliance. It is distinct from the underlying LoRa modulation, which defines the physical (PHY) layer.
Although LoRa is the primary modulation method, Frequency-Shift Keying (FSK) is sometimes used as an alternative. LoRa operates in the sub-GHz ISM bands (Industrial, Scientific, and Medical) across different frequency ranges, depending on regulatory domains.
By combining frequency bands, spreading factors, and channel bandwidths (125, 250, or 500 kHz), LoRaWAN can achieve data rates up to 21,900 bps.
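That peak figure follows from the commonly cited LoRa bit-rate formula, shown in the sketch below; with a spreading factor of 7, a 500 kHz channel, and a 4/5 coding rate, it works out to roughly 21.9 kbps:

```python
# LoRa useful bit rate as a function of spreading factor (SF),
# bandwidth (BW), and coding rate (CR), per the commonly cited formula:
#   Rb = SF * (BW / 2**SF) * CR
def lora_bit_rate(sf: int, bw_hz: float, cr: float = 4 / 5) -> float:
    return sf * (bw_hz / 2 ** sf) * cr

# Fastest common combination: SF7 over a 500 kHz channel at CR 4/5.
print(lora_bit_rate(7, 500_000))  # 21875.0 bps, i.e., roughly 21.9 kbps
```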
Like NB-IoT, LoRaWAN is considered a long-range IoT technology. Sigfox is another example in this category and also offers global coverage.
All devices connected at the network edge in IoT setups use underlying wireless technologies. These edge devices (also called end devices) are either Connected Objects (COs) themselves or are physically connected to things to make them connected.
In IoT communications, data flows in various ways. A CO can send, receive, or relay data to other components in the system. For example, a node might tell another node or hub about a command received from a user (like adjusting a thermostat), or forward an alert triggered by detecting hazardous fumes.
Users can interact with these nodes by retrieving sensor data, inputting commands, or receiving alerts through apps, sound, or light indicators.
These interactions can be broken down into these bi-directional data flows:
These flows can be combined, such as:
In the context of wireless technologies, CO-to-CO is especially important as it represents lateral communication in the IoT stack, in contrast to vertical communication through stack layers.
Many IoT technologies support peer-to-peer, star, or mesh topologies. For example, Zigbee devices require a Full Function Device (FFD) to act as a coordinator in mesh networks. A hub is also needed when integrating Zigbee with Wi-Fi in smart home environments.
Many terms are floating around the world of connected devices—wearables, IoT end devices, M2M (machine-to-machine), WSN (wireless sensor networks), motes, badges, and more. Let’s focus on three: Wearables, IoT devices, and M2M.
Are all wearables IoT devices? Is M2M equal to IoT and vice versa?
It’s tempting to say that M2M and wearables are subsets of IoT, but it’s not always true. In its simplest form, IoT requires external connectivity—not necessarily to the Internet, but outside the local system.
M2M: Machines communicating with machines. If no external system collects or controls the data, it’s just M2M—not IoT. M2M existed long before IoT, and some systems remain purely M2M.
Wearables: From pedometers to smartwatches. If they only communicate locally with the wearer and don’t connect to a network, they aren’t IoT. Like M2M, many wearables predate IoT.
So, is a device IoT? Generally, if it connects to an external system that monitors or controls it, directly or indirectly, it’s IoT.
One book defines it like this:
"IoT could be defined as the interconnection of devices with embedded sensing, actuating, and communication capabilities. Data in IoT are collected, processed, coordinated, and communicated through embedded electronics, firmware, and communication technologies, protocols, and platforms."
Interestingly, the same book claims that all IoT devices use IP—especially IPv6. But there’s debate. Must IoT use IP?
My take: No. IoT doesn’t require IP, just some form of network connectivity to a central monitoring or control system. That said, most IoT data eventually travels via IP.
So while IP is nearly ubiquitous at some layer of the ecosystem, edge devices themselves don’t always use IP. And yet, those setups are still considered IoT.
Defining IoT involves examining how a device uses networks, what data it generates, and how it communicates.
Thankfully, CWNP doesn’t test on definitions—only on technical foundations. So don’t expect a question like:
“Does IoT require IP?”
Maybe someday, but for now? Nope.
— Tom
In this chapter, you learned about the details of IoT and brought together many of the foundational concepts first introduced in Chapter 1.
While the CWISA exam is not focused exclusively on IoT, it is clear that more and more devices today are falling under the IoT category. Therefore, having a solid understanding of these concepts is critical for any wireless solutions administrator.
You explored:
Many of these subjects will be revisited and expanded in greater detail in the chapters to come.
Understanding IoT is no longer optional—it's essential.
The focus of this chapter is on the general description of various network types and how they are used in different vertical markets. First, the network types will be described and then various vertical markets with their common challenges will be explored. These network types include WBANs, WPANs, WLANs, and WWANs.
A Wireless Body Area Network (WBAN) is sometimes referred to as a Body Area Network (BAN) or Body Sensor Network (BSN) and includes the Medical Body Area Network (MBAN) within its definition. These are the smallest of the networks we will focus on for this certification, aside from a brief note about Near Field Communication (NFC), the 10-centimeter wireless solution.
These wireless networks include devices within the body (implantable, injectable, and ingestible) and wearable electronics. WBAN devices usually communicate to a companion device like a phone, tablet, or laptop to provide a user interface or report into server-based systems on the LAN or Internet.
In May 2012, the FCC adopted a proposal to allocate 2360–2400 MHz (40 MHz of spectrum) for MBAN devices. The purpose of the approval was to provide flexibility for MBAN devices to measure, record, and transmit physiological or other patient information to the applications that process this data.
Devices that operate in the 2360–2390 MHz range must be within a healthcare facility (indoors) and must send specific data to a frequency coordinator (MedRadio programmer/control transmitter) if the facility qualifies under Section 95.1203 and intends to operate MBAN devices. If these devices lose communication with the frequency coordinator, they must shut down their radios.
Operations in the 2390–2400 MHz band are not subject to registration or coordination and may be used in all areas including residential. These devices may be utilized outside of medical facilities and wherever the individuals go, which is helpful for other use cases, like athletes.
Wearable and implant technologies usually operate on other frequencies. RFID technologies are commonly used for access and identification purposes and can be implanted into pets or humans. These RFID chips usually operate within the ISM bands.
Wearable technologies (e.g., FitBit, AppleWatch) are also being used by owners and medical teams to track the movement, health, and wellbeing of users. These items often utilize Bluetooth or other protocols to communicate with a companion device.
WBANs cover spaces of about 1 meter.
A Wireless Personal Area Network (WPAN) provides hands-free connectivity and communication within a confined range and with limited throughput capacity. WPANs are suitable for small-scale mesh-type wireless networks, such as those implemented with Zigbee technology.
RFID systems are often categorized as WPAN technologies due to their short communication range. Bluetooth is another key WPAN technology with widespread adoption. Common devices using Bluetooth include mice, headsets, and speakers, which are in daily use across the globe.
Bluetooth operates in the 2.4 GHz ISM band, which can interfere with WLAN technologies such as:
To mitigate interference, Bluetooth 1.2 and later devices implement adaptive frequency hopping, which significantly reduces or eliminates conflicts with WLANs. Today, the majority of Bluetooth devices support this feature.
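Conceptually, adaptive frequency hopping simply removes known-bad channels from the hop set. The sketch below is a simplified illustration of that idea, not the actual Bluetooth hop-selection algorithm; the assumed interference scenario marks the Bluetooth channels overlapping a Wi-Fi network on 2.4 GHz channel 1 as bad:

```python
import random

def afh_hop(channel_map, hops=10):
    """Pick pseudo-random hop channels from the 79 BR/EDR channels,
    skipping any channel marked bad in the map. A simplified
    illustration of adaptive frequency hopping, not the real
    Bluetooth hop-selection kernel."""
    good = [ch for ch, ok in enumerate(channel_map) if ok]
    return [random.choice(good) for _ in range(hops)]

# Mark Bluetooth channels 0-21 bad: they roughly overlap a 20 MHz-wide
# Wi-Fi network on 2.4 GHz channel 1 (an assumed interference scenario).
channel_map = [ch > 21 for ch in range(79)]
print(afh_hop(channel_map))
```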
WPANs typically cover a range of about 10 meters.
WLANs are the primary focus of the CWNA certification and the entire certification track leading up to CWNE. However, it is still a critical component for the CWISA as many IoT devices connect using Wi-Fi.
WLANs are designed to cover homes, office buildings, or campus environments. They provide:
Mobility is provided because the user can move around within the coverage area of the access point, or even multiple access points, while still maintaining connectivity.
Nomadic ability—the ability to move from place to place and use the network, although active communications do not take place while moving—is provided because you can power on a wireless client device from any location within a coverage area and use it for a temporary period of time as a fixed location device. It is a given that unwired fixed connectivity must exist if the nomadic ability is provided.
WLANs play three primary roles in today's enterprise organizations:
Access Role
In the access role, the wireless network is used to provide wireless clients with access to wired resources. The access point remains fixed while the clients may move. The access point is usually connected to an Ethernet network where other resources, such as file servers, printers, and remote network connections, reside. In this role, the access point provides access to the wireless medium first and then, when necessary, provides bridging to the wired medium or other wireless networks (such as in a mesh network implementation).
Figure 2.1 illustrates the access role of a WLAN.
Distribution Role
In the distribution role, illustrated in Figure 2.2, wireless bridges provide a backhaul connection between disconnected wired networks. In this case, each network is connected to the Ethernet port of a wireless bridge, and the wireless bridges communicate with each other using the 802.11 standard. Once these connections are made, network traffic can be passed across the bridge link so that the two previously disconnected networks may act like one.
Core Role
The final role is the core role. In the core role, the WLAN is the network. This may be suitable for small networks built on-the-fly, such as those built at construction sites or in disaster areas; however, the limited data throughput will prohibit the WLAN from being the core of the network in a large enterprise installation. Future technologies may change this, but for now, WLAN technologies play the access and distribution roles most often.
WLANs cover a building or campus.
WMANs (Wireless Metropolitan Area Networks) differ from WLANs in that they are not usually implemented by the organization that wishes to use the network. Instead, they are generally implemented by a service provider, and then access to the network is leased by each subscribing organization. However, unlike wireless Wide Area Networks, this does not have to be the case. For example, 802.16-compliant hardware could be purchased, and frequency licenses could be acquired to implement a private WMAN, but the expense is usually prohibitive.
WiMAX is a commonly referenced WMAN technology. WiMAX is based on the IEEE 802.16 standard and provides expected throughput of approximately 130 Mbps in the latest specifications. In addition to the throughput speeds, WiMAX incorporates QoS mechanisms that help to provide greater throughput for all users and important applications using the network.
Private LTE networks are starting to appear using MulteFire and CBRS technologies. MulteFire operates mostly within the 5 GHz band in the FCC region; however, for IoT, it leverages 800–900 MHz and 2.4 GHz for long-range connectivity. Some deployments are on 1.9 GHz, also known as sXGP in Japan. At the time of this writing, CBRS is still in the approval process with the FCC; it operates at 3.5 GHz.
WMANs cover a city.
Wide Area Networks (WANs) are usually used to connect Local Area Networks (LANs) together. If the LANs are separated by a large distance, WAN technologies may be employed to connect them. These technologies include Frame Relay, analog dial-up lines, cable, DSL, ISDN, and others. What they have traditionally had in common is a physical wire connected to some device that is connected to some other device (usually across a leased line) that is eventually connected to the remote LAN.
The wireless WAN (WWAN) is completely different because there is no wire needed from your local LAN to the backbone network or from the backbone network to your remote LAN. Wireless connections are made from each of your LANs to the backbone network. WWANs cover a region.
Examples of WWAN technologies include Free Space Optics, licensed and unlicensed radio, and hybrids of the two. For WAN links that span hundreds of miles, you may need a service provider such as AT&T microwave. For shorter links of a few miles, you may be able to license frequency bands or use unlicensed technology to create the links.
The key differentiator of WWAN technologies from WLAN, WAN, and WMAN is that the WWAN link aggregates multiple communication channels together (multiplexing) and passes them across a single WAN link.
In summary, a WBAN usually covers about 1 meter. A WPAN usually covers about 10 meters. A WLAN covers a building or campus. A WMAN covers a city or an area around a city. A WWAN covers a region or spans the globe.
A Wireless Sensor Network (WSN) is a term used for a group of dedicated sensors that monitor and record their environment. These devices typically process the data with a central server that is part of the collection network. WSNs are utilized for monitoring sounds, temperature, moisture, wind, soil conditions, and more.
Wireless sensor networks range in size from a few sensors to hundreds or even thousands of nodes. Each node has its own wireless radio and electronics used to sense the environment it is monitoring. In many cases, these sensors are low power and operate on a battery; however, they may be connected to a dedicated power source.
WSNs use a variety of communication methods to communicate among each other and with the central server. Many of these are wireless mesh technologies operating in different bands. These mesh networks may be multi-hop or single-hop, just like WLANs. The most common standards in use for WSNs are Thread, Zigbee, Z-Wave, and LoRa-based LPWAN. Thread and Zigbee operate within the 2.4 GHz ISM band at data rates up to 250 kbps. Z-Wave operates at 915 MHz in the US and 868 MHz in the EU at lower data rates of around 50 kbps.
The Internet of Things (IoT) is a term widely used for connected devices that do not utilize a traditional user interface, like a laptop, smartphone, or tablet. These devices can be found in industry, wearables, healthcare, retail, education, transportation, smart buildings, agriculture, and smart cities. These devices are changing the use cases and requirements for wireless network deployments; however, the impact they are having is massive, and they need to be accounted for.
IoT devices range in size from tiny circuit boards to large vehicles. These devices use a range of standards for communication among each other and with the network. For Internet communications, IoT devices utilize IPv4 and/or IPv6 and connect to some form of directory server or cloud server so that the user of the service can manage them.
Some of the standard IoT protocols utilized for wireless communications are as follows.
Short-range wireless:
Medium-range wireless:
Long-range wireless:
The growth of IoT devices in industry continues to drive new network and wireless demands. In many cases, worker safety is increased by allowing workers to control devices remotely. This is creating a demand for wireless connectivity in locations that previously did not need connectivity. Manufacturing plants are purpose-built for creating products, and the materials used can create challenges for wireless frequencies. Many short-range wireless communications are used in manufacturing, such as NFC, RFID, Wi-Fi, and Bluetooth.
Connected Vehicles is a term used for vehicles that are connected to the Internet, including cars, trucks, and SUVs on the road. Communications to the network may require a data plan from a carrier to utilize the cellular (GSM) network. Safety applications in the US use a dedicated short-range communications (DSRC) radio that operates in the 5.9 GHz band for low-latency communications. Some connected vehicles also offer WLAN services within the vehicle itself.
The communications with the vehicle to others are often referred to with a V2x nomenclature (Source: Wikipedia):
Connected vehicle technologies focus on reducing congestion on the road, improving safety, and lowering greenhouse emissions. NHTSA estimates that a connected vehicle safety application that helps drivers safely negotiate intersections could help prevent 41 to 55 percent of intersection crashes. Another connected vehicle safety application that helps drivers make left turns at intersections could help prevent 36 to 62 percent of left-turn crashes, according to NHTSA.
To properly administer the wireless network, you need to understand the use cases for wireless technologies and how they continue to evolve. When 802.11 was created, the use cases were limited as the technology was still very young and expensive to deploy. Over the years wireless technology has become more cost-effective and easier to deploy. Most smartphones are more powerful than desktop computers were when 802.11 was released.
Residential environments were among the first adopters of wireless technologies. Over the past decade, wireless in the home has evolved from a nice-to-have into a necessity. Connecting to the internet has never been easier than it is with wireless technologies, so much so that everything in the home now has uses for that connectivity. Consumers started with laptop computers, then moved to smartphones and tablets for direct interaction with Internet and cloud services.
Technology has continued to evolve from basic internet access into audio, video, and virtual reality. Wireless devices throughout most homes are utilized as speakers for music from both local and internet sources, many of which respond to audible commands for content. These devices can be programmed wirelessly to play at scheduled times as well.
One-way and two-way video solutions continue to become more prevalent. One-way video systems include doorbells, security cameras, and nanny cams. Some of these send data to a central server in the home, others to the cloud wirelessly. Two-way video systems allow for wireless communication to people at a gate, outside the front door, or even around the world.
The IoT has continued to bring wireless to more and more devices throughout the home, making a truly automated home possible: light bulbs whose brightness or color can be changed by wireless command, and power outlets that can be turned on and off or even scheduled. Thermostats use wireless links to remote sensors to adjust the temperature in the home, and can even analyze electricity usage and save money by adjusting when people are not home. Autonomous vacuum cleaners now use wireless technology to map floor plans and can be told when to start or how long to run. Irrigation systems can come equipped with Wi-Fi for remote management as well.
Garage doors and home security systems (alarms) can be managed remotely from your phone. Some refrigerators have options to connect wirelessly to notify you when you are running low on things...the list goes on and on, and these are in everyday consumer homes.
More technically advanced homes have digital displays that change what is shown based on who is in the room, TVs can recommend content based on who is in the room, alerts can be created for the sick or elderly if movement in the home is not detected for some time. These are just a few use cases today, and they continue to evolve, and more uses are created daily.
Many modern cars come with wireless communications as well, bringing the need for wireless in the garage and outside the home to another level. As auto-pilot and driver-assist technologies evolve, these cars connect to the internet to download updates in the interest of driver safety.
Some of the challenges of operating a network in a residential environment include:
Wireless technologies have been adopted in retail en masse since the early 2000s. Inventory management, location awareness, connecting with customers, and point of sale are just a few of the ways wireless is making the retail experience better. Designing and administering a wireless network in a retail environment requires an understanding of the use cases and the environment around the location(s).
Tracking and knowing where your inventory is located is one of the most important things in retail. The entire goal is to get product into the hands of the customers. Wireless technologies make this easier in retail environments. Wireless is used to track inventory as it moves into the store, onto the shelves and as it is sold to consumers. This can be done using barcode scanners or RFID tags.
Wireless also enhances the retailer's ability to interact with the customer in more meaningful ways. Consumers bring their own wireless devices into retail environments. The interactions can now happen via applications on these consumer devices. The retailer can utilize this application to push advertisements, sales, coupons and more to the users. These devices can also be utilized to understand the location in the store, so if a customer is in the area around a product, the application may push a discount code for that item. The retailers can also utilize the location information, and the path customers take through the stores to optimize placement of products.
Retail locations are increasingly enabling staff to bring the checkout capabilities to the customer directly and increasing the quality of the interactions by leveraging wireless technologies for point of sale, receipt generation, etc. Consumers are even using wireless technologies to pay from their smartphones, instead of having to bring a wallet with them everywhere.
Restaurants have similar use cases for wireless technologies as other retail environments, such as bringing drive-through interactions outside during busier times. They have been utilizing public wireless technology for a while now to increase dwell time and the likelihood that consumers will purchase more food and beverages. Many sit-down restaurants are even putting kiosks at the table for games, entertainment, ordering, and even payment to turn tables over at the customer's pace.
Operating a network in a retail environment in unlicensed spectrum requires an understanding of everything within your location and what may be interfering from locations surrounding your location. Some of the challenges in retail include:
Wireless technologies have been adopted in education over the past decade. Utilizing technology has enabled schools to provide new ways to connect with students while preparing them for the digital world.
Education has extremely different use cases from retail or other environments. Students are already using wireless devices at home and transitioning this familiarity into the learning tools can really help teachers connect with students. Teachers are able to share their screen directly with the entire class, or one to one when reviewing concepts. Seamlessly transitioning from lectures to individual assignments allows a more content-rich environment than what a textbook alone could provide. Each student having their own device means they have the option for personalized content and lesson plans from the teachers as well. These are just some of the classroom uses.
Wireless technology also allows students and teachers to be connected directly to the Internet, allowing the use of cloud services and research from anywhere on campus. This allows the students to engage in educational activities from anywhere life brings them.
Some of the challenges of operating a network in an educational environment include:
In college and university environments, wireless technologies include everything found in K-12, with added elements of enterprise and home networks. At some universities, sports arenas, football stadiums, and hospitals are also part of the campus and its wireless network.
Every year when college students come into student housing, they bring with them every device type imaginable, as they tend to be early adopters of technology: the latest gaming consoles, lighting, speakers, and more. These devices worked at home, so naturally the expectation is that they will work on the college campus as well. This can be a challenge for network administrators, as students are bringing home devices onto an enterprise-grade network built for a high density of devices, and these environments tend to have far more interference than residential neighborhoods.
When hospitals are part of the campus, they bring all of the HIPAA and hospital requirements with them. Stadiums, arenas, and public areas bring their own requirements as well. Both are covered in their respective sections; however, they are also large parts of the campuses they belong to.
Some of the challenges of operating a network in higher education environments include:
Agriculture historically has not been thought of as having a use for wireless (since when do plants or cows need wireless?). Yet even in agriculture, wireless technologies are becoming more and more valuable. Enabling network communications around the property using point-to-point or cellular technologies allows devices to talk to each other. For crops, rain or soil sensors can be placed to know when the irrigation system needs to run, where it needs to run, and when it should stop. This environmental data can be used to save on water usage or optimize growing conditions. Soil sensors will also let farmers know when the soil needs to be enriched with fertilizer.
Wireless technologies are utilized in tractors for automation. John Deere released a tractor that can navigate fields without a driver steering it. This reduces the possibility of error and allows farmers to focus on other important tasks while the tractor plows the field or harvests the crops.
Wireless cameras are often used to monitor livestock activities and locations on farms; however, this is not the only wireless system in place. RFID tags can be used on horses, cows, or other livestock to track their locations throughout the day, which field they are in, and their movements.
Drones are also commonly used as a means to put eyes over large areas quickly. This helps the farmer know where to spend time on a given day.
Some of the challenges of operating a network in agricultural environments include:
Wireless technologies have enabled communications in cities like never before. Cities are leveraging these technologies for utilities, public safety, automation, and public access. Cities are taking advantage of public/private partnerships for deployments with carriers, service providers, and in some cases building out their own networks.
Smart city applications include demand-based road tolling, pollution monitoring, and even city-wide municipal Wi-Fi in some cases.
When traveling around, you may notice people are using wireless technologies to stay in communication with friends and family. Wireless technologies in urban areas are used for nearly everything, as running cables can be expensive and in some cases not possible. In addition to connecting consumer devices, public safety is one of the most common use cases. Public safety has evolved from connecting the police and fire department radios into providing laptops to the police with direct access to the tools to validate identities and even look up individuals for outstanding violations or safety concerns. Cameras can be connected for real-time monitoring as well. (The world of RoboCop is becoming a reality.)
Utility companies are able to install meters that communicate back wirelessly, either by utilizing Wi-Fi in the home, on the pole, or cellular, to eliminate the need for a manual reading every month. In some markets, the power companies will offer incentives to businesses at peak times to consume less electricity. The company sees demand on the grid and will wirelessly trigger an alert to participating entities to reduce consumption. Many of these locations then use wireless technology (Zigbee) to tell the thermostats to go into power save mode if they can. All seamless to the individuals on the property and without human involvement.
Cameras are utilized to monitor traffic conditions and traffic flows. Traffic flows can also be monitored wirelessly by counting the number of devices passing a specific location. This enables cities to time traffic lights more effectively and to plan public transportation routes, schedules, and methods of transport.
Some of the challenges of operating a network in urban environments include:
Hospitals have wireless workstations and tablets similar to office environments, plus many that are unique to them. Nurses require the power of desktop computers in a mobile platform, often with other systems attached, so engineers have created custom-built workstations on wheeled carts for this purpose. Communication within hospitals is absolutely critical, and almost all of it is wireless: voice communications using VoIP phones or badges, and secure text messages between the care teams.
Medical records are, in many cases, digital and accessed from wireless devices. This enables the care team to quickly pull up a patient's medical history to ensure a proper care plan is in place.
Health care has continued to use wireless technology to collect information and get it to those who need it. Patient monitoring systems can directly notify hospital staff if a patient's vital signs change, whether the patient is in the room or mobile. This enables the care team to know how to respond in a critical event and increases the likelihood of saving a life.
In some cases, doctors can work remotely. Tests can be run, and the results can be transmitted to the patient's doctor in another location or even around the world. Doctors can utilize two-way video technologies with patients to communicate the results of these tests, diagnosis and recovery plans. Most of this was not even possible a few years ago; however, today wireless and smartphones have enabled this revolution.
Wireless is also utilized by the patient's family and guests while waiting within the hospital, often bringing technology from home or the office into the hospital to facilitate work or entertainment while the patient is treated, asleep, or cared for.
Some of the challenges of operating a network in medical environments include:
Office buildings are getting smarter, increasing automation, and measuring their performance. Wireless connectivity, sensors, and devices are deployed throughout many buildings, and ones that have pervasive coverage are known as "Smart Buildings."
The sensors in these smart buildings collect data from various points and communicate it back to a database, where an analytics system processes the information to automate actions. These systems include lighting, HVAC, electrical, cameras, and much more. This enables the building to provide services only while the people who need them are present, which can save property owners money.
Security systems in smart buildings are wireless as well. NFC, RFID, and Wi-Fi are commonly used to grant access into the building or specific areas within the building. They can also be used to track the location of people within the building, which is helpful in the event of an emergency to let first responders know how many people are in the building and where they are.
Some of the challenges of operating a network in office environments include:
Wireless technologies have been adopted in hospitality en masse since the early 2000s. It started with guest access and then transitioned into more use cases. Designing and administering a wireless network in hospitality requires an understanding of the use cases and the environment around the location(s).
Hotels have been on the bridge between home and office buildings for a while, bringing technology from both environments into these locations for safety and convenience. The wireless experience within a hotel starts when you walk into the building with Wi-Fi being available throughout the building. This Internet access allows guests to connect back to their home, office, or to any other resources they may desire.
Hotels utilize wireless for the door locks within the rooms. In many cases, you can use an app on your phone to bypass the front desk entirely and go straight to your room, saving you time. Once you are in the room, wireless can be used to put content from your device onto the TV, to print documents, or even to charge your phone with wireless power (Qi).
The hotels themselves use wireless location services to track luggage carts. Wireless communication is also used to know when housekeeping has finished a room or when supplies are needed. The same wireless communication system can be utilized by guest services to dispatch the closest person to assist a guest in need. Check-in kiosks can also be moved around for both conference and hotel check-in purposes.
Some of the challenges of operating a network in hospitality environments include:
Industry is extremely diverse, and wireless is used across most of it. In manufacturing, machines continue to automate manual tasks. Wireless is used to control many of these machines, and wireless sensors monitor the machines' environmental conditions and let engineers know when maintenance may be required, signaled by a change in how the machines sound, their heat output, or the fact that they have stopped working.
Automated carts and forklifts are used in many cases to move materials around the plant. These can be equipped with wireless cameras so that someone in a central location can monitor activities and stop the devices if they malfunction. In some environments, wireless control of vehicles can be combined with the cameras to keep workers out of dangerous areas; this is extremely helpful in mining operations.
Wireless tracking tags have multiple uses. They can be used to track inventory through a warehouse or manufacturing plant, enabling the business to know how much of any given item it has on hand and when supplies need to be refilled or are overstocked. Location can also serve worker safety: for example, if a mine collapses, tags for each worker will quickly let the safety team know how many people, and who, were working in a given area.
Some of the challenges of operating a network in industrial environments include:
Stadiums have historically been a problem area for wireless technologies. When 20,000 to 100,000 people show up in an area where they are not present every day, it stresses the network's capabilities. Stadiums added DAS (Distributed Antenna Systems) to help offset this problem through the early 2000s. With the rise of smartphones, data demands rapidly increased, and these venues brought in Wi-Fi, BLE, and other wireless systems to complement the DAS.
These locations tend to offer free Wi-Fi to end users to better support in-venue applications and league-sponsored content. In-venue applications range from marketing for sponsors and wayfinding (maps to help you find where you are going) to instant replay and highlights from the event. Food and drink can be purchased and brought directly to your seat without the need to stand in line; this can be done from your mobile device or by an employee of the venue.
In-building communications are also wireless, allowing security staff to stay in contact with each other. This allows information to flow to all areas within the building within milliseconds instead of requiring people to run from location to location.
Some of the challenges of operating a network in stadium environments include:
In this chapter, you learned about the different wireless network types or categories and the planning and administration tasks common in various vertical markets. You explored WBANs, WPANs, WLANs, WMANs, and WWANs. Next, you explored the common vertical markets where IoT solutions may be implemented with wireless technologies and the challenges faced in each. This information, along with that presented in the preceding two chapters, provides a strong foundation as you move forward to the planning of wireless IoT solutions in Chapter 4.
As engineers, we love to just jump right in and start designing or building something when presented with a problem to solve. In school, the professors give the students well-defined problems that usually are limited in scope and have one right answer, in order to teach particular concepts. Alas, real-world projects are never that neat and tidy.
Most of the time, the problems are more complex, not very well defined up front, suffer from scope creep during (or even after) the project, and specific conditions such as budget and cabling access may make the theoretically "right" answer an impractical one.
The reality of system design for complex systems is that there is never one "right" solution. However, there will inevitably be "better" and "worse" solutions. But how does one evaluate the quality of design options and solutions? Fortunately, as much as design remains something of an art, there is also a science for selecting better vs. worse solutions. The science of complex system design, to arrive at the "best" solution or at least a "better" one, is to approach complex systems using a methodical process that guides one's thinking both qualitatively and quantitatively.
Once a system design engineer can frame the design problem properly, he or she can properly process and interpret the information provided by the customer, stakeholders, equipment vendors, and their own organization. The solution administrator can identify the right questions to ask when information is missing, understand the ramifications when the scope is changed, and do all the other tasks required to create a functional system that meets the needs of the stakeholders.
A system design engineer likely will not have detailed expert-level knowledge of all the subsystems and components that comprise an overall system. However, a system design engineer will need to understand these systems sufficiently to understand how they interact, relying upon subject matter experts (internal engineers or external equipment vendors) for specific details. A system design engineer is therefore analogous to an orchestra conductor; the conductor need not necessarily know how to play any of the instruments, though does need to know the types of sounds that each instrument creates and how those instruments work with each other to create a cohesive and tonal musical work.
Over the last several years, there has been a lot of hype about the IoT, which is projected to be an array of billions of individual networked appliances. Despite a significant amount of initial irrational exuberance, this is a field that is clearly growing and for which there are a lot of companies investing a lot of money to propose a diverse array of use cases and applications. We have moved from the early irrational excitement phase to the more balanced rational excitement phase with matured models, methods, and solutions for IoT use cases. Some use cases that have been implemented for IoT applications include home monitoring and automation, commercial asset monitoring, public safety, industrial factory, and warehouse monitoring and automation, city-wide network access and surveillance, etc.
Integrating these types of devices with new or existing wireless networks is one of the core challenges for a CWISA.
IoT essentially describes mechanisms and devices that consist of a combination of sensors and actuators that communicate over and/or are controlled by a wired or wireless network, as opposed to simply operating autonomously. In this context, a sensor is a device that measures something in the environment, such as a video camera, microphone, thermometer, barometer, motion detector, etc. An actuator is a device that does something to the environment, such as a speaker, a flashing alarm light, a gate opening mechanism, etc. One or more sensor and/or actuator functions are often bundled into the same physical device to perform a given task. A thermostat is a good example of this, as a thermostat performs both the function of measuring the temperature of the environment and the function of activating or deactivating the HVAC system to adjust and maintain the temperature to the desired setpoint. A "smart thermostat" is a thermostat that also communicates with a server on the Internet, which can be accessed from a web browser or smartphone app so that the desired temperature setpoint can be manipulated from a remote location, and not just physically at the thermostat itself.
Connecting a series of sensors and actuators to either a private network or the Internet clearly creates numerous potential use cases that were previously impossible or impractical. Sensors and actuators no longer need to be physically co-located or directly "wired" to each other in order to work collectively. Furthermore, the algorithmic logic (i.e. the software) of how to use sensor data to determine how to adjust the actuators need not be physically located with the sensors and actuators themselves, giving developers the freedom to dynamically refine their algorithms from a central "cloud" location without ever having to touch the physical location.
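A smart thermostat's control logic can be sketched in a few lines. The hypothetical example below implements simple hysteresis control, the kind of sensor-to-actuator logic that, as described above, could run locally or on a remote server; the function name and setpoint values are invented for illustration:

```python
# A minimal hysteresis control loop for a "smart thermostat", sketching
# how sensor readings drive an actuator. The names and setpoint are
# hypothetical; in an IoT deployment this logic could just as easily run
# on a cloud server receiving the readings over the network.
SETPOINT_C = 21.0
DEADBAND_C = 0.5  # hysteresis band prevents rapid HVAC cycling

def next_heating_state(current_temp_c: float, heating_on: bool) -> bool:
    if current_temp_c < SETPOINT_C - DEADBAND_C:
        return True                # too cold: switch heating on
    if current_temp_c > SETPOINT_C + DEADBAND_C:
        return False               # warm enough: switch heating off
    return heating_on              # inside the deadband: hold state

print(next_heating_state(20.2, heating_on=False))  # True
```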
Alas, many IoT offerings have been solutions in search of a problem, and many more have made wild and grandiose marketing claims that cannot (at least yet) be supported by the technology. There are also a whole new set of potential problems that IoT solutions introduce, especially in the realms of security and privacy, as early adopters of these technologies have unfortunately learned the hard way.
The IoT offerings that will ultimately prove successful are those that understand the requirements and constraints of an actual use case and have selected the most appropriate design parameters that address those requirements and constraints.
This is, naturally, a chicken and egg problem, as new technologies open up new use cases that were never conceived of before. Prior to the launch of the iPhone in 2007, only fiction writers envisioned having pocket-sized devices for accessing the combined knowledge of 6000 years of human civilization. Could we have envisioned watching TV on such devices, or using such a device to write a chapter in a book about them? Marketing hype should therefore not be ignored, as it opens up a new world of potential applications, getting people to start imagining and envisioning new possibilities. Nonetheless, the marketing claims should be screened with a very critical eye, so that when a new use case is identified and a project is launched, realistic expectations can be set and achieved.
A CWISA, therefore, needs to start with identifying requirements and constraints or reviewing the requirements and constraints that have been provided. In this context, the requirements dictate what the system has to do in order to work properly, while the constraints dictate what the system has to work around in order to meet the requirements. Thus, requirements define the needed functionality, and constraints limit the viable choices of potential design solutions. The difference between requirements and constraints is often subtle, and the stakeholders inevitably fail to distinguish these and therefore present both requirements and constraints simultaneously. Nonetheless, it is important to get these properly distinguished up front. Getting requirements and constraints properly identified and categorized at the outset will enable the ability for both identifying and evaluating the quality of alternative design options, as well as having a mechanism to properly manage scope creep.
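One lightweight way to enforce this discipline is to record every stakeholder need with an explicit classification so that the two categories can be reviewed and traced separately; the sketch below assumes a simple, hypothetical bookkeeping structure:

```python
# Hypothetical sketch: tracking stakeholder needs as requirements vs. constraints.
from dataclasses import dataclass

@dataclass
class Need:
    source: str  # which stakeholder raised it
    text: str    # the need as stated
    kind: str    # "requirement" (what the system must do) or
                 # "constraint" (what the system must work around)

needs = [
    Need("System Owner", "Provide Wi-Fi coverage throughout the property", "requirement"),
    Need("System Owner", "Total spend must stay within the approved budget", "constraint"),
    Need("System Operator", "All APs manageable from a central console", "requirement"),
    Need("Regulator", "Transmit power limited to the regulatory maximum", "constraint"),
]

requirements = [n for n in needs if n.kind == "requirement"]
constraints = [n for n in needs if n.kind == "constraint"]
print(len(requirements), "requirements;", len(constraints), "constraints")
```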
Once the requirements and constraints are appropriately characterized, then begins the process of identifying the design parameters, which are how the system meets the requirements and constraints. There are typically multiple options, so the options need to be evaluated in a systematic way to determine which solutions will provide a more robust (i.e. "better") design. A more robust design can more easily accommodate changes (i.e. scope creep), which is an inevitable part of the process.
At the CWISA level, you need not understand the complete details of requirements engineering as you are tasked only with identifying and complying with system requirements and constraints. This includes understanding common requirements and constraints and knowing who the stakeholders are so that you can verify any needed information from them. Then you must select appropriate wireless IoT solutions based on these requirements and constraints. In some cases, the solution selection will be done for you with a complete set of design documents describing the entire target installation. In others, you will be left only with requirements and constraints and must select the best solution.
The basics of requirements engineering are discussed throughout the remainder of this chapter, but more depth can be found in the CWIDP and CWIIP exam learning materials.
For wireless systems, there are a set of requirements and constraints that are encountered on virtually every project. A CWISA must be attuned to seeking out these specific requirements and constraints to ensure they are properly captured up front.
Security
Any wireless system will have some type of users that are being serviced. These users may be human (e.g., Wi-Fi client devices) or machine (e.g., surveillance cameras, IoT sensors, gate openers, panic buttons, etc.), depending on the application. More complex applications may have multiple types of human and/or machine users. It is therefore essential to define the access control methodology of how each type of user shall be authenticated as being "valid", as well as the client isolation policies of how associated users of each type are allowed to interact, or not interact, with other users on the same network. The access control and client isolation policies are commonly distinct for different classes of users (e.g., guests, staff, in-house IoT devices, external consumer appliances, etc.).
Security is always about defense-in-depth: security needs to be considered at every point in the design. This potentially means having multiple and redundant layers of security in different sections of your system, so that a breach of one line of defense doesn't compromise the whole system. Nonetheless, there is an inevitable tradeoff between security and ease-of-use. The more secure a wired and/or wireless network is made, the harder it becomes for a user to connect to the network and perform its intended tasks. Depending on the application and the sensitivity of the data, more or less security may be required. For certain types of data, especially financial (e.g., FINRA and PCI-DSS; note that PCI-DSS is an industry standard rather than a regulation, and it may impose security constraints on a retail payment card processing network) and health care (e.g., HIPAA), there are government and industry security standards that must be conformed to and that are periodically audited for compliance.
Access control on a wireless network is usually implemented in one of the following manners:
Open access: This method requires no credentials, so any client device can connect to the network. This method is generally not advisable, as there is no control over what client devices can connect. It is uncommon for use in production IoT solutions. Additionally, the wireless traffic may be unencrypted, so messages can be intercepted in the air, or the connection may be subject to man-in-the-middle (MitM) attacks. Generally, open access is only used for guest access in hospitality Wi-Fi or a hotspot Wi-Fi environment, where the intention is to encourage client devices to access the network. It is not appropriate for wireless IoT solution implementations.
Personal (Password/Passphrase): This method requires the connecting client device to possess a particular password or passphrase. This may be pre-configured into the device. The password/passphrase is used as a seed to set up symmetric encryption between the client device and the wireless access point. For IoT, it is more commonly used with Wi-Fi than any other protocol. In Wi-Fi, this is generally implemented with WPA2-Personal, which uses a passphrase along with the MAC addresses of the access point and client device to establish a unique symmetric 128-bit AES encryption key between the Wi-Fi access point and the client device. This methodology is often useful for IoT appliances but has some intrinsic weaknesses. If the association traffic between the access point and the Wi-Fi client is intercepted, it is possible to derive the encryption key for that client device, as well as potentially to recover the passphrase via a dictionary attack. Additionally, passphrases are often accessible to human users and thus may be shared intentionally or unintentionally.
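To illustrate how the passphrase acts as a seed rather than as the encryption key itself: in WPA2-Personal, the passphrase and SSID are first stretched into a 256-bit pairwise master key (PMK) using PBKDF2, and the per-client session key is then derived from that PMK, the MAC addresses, and nonces during the four-way handshake. The sketch below shows only the PMK derivation; the passphrase and SSID values are made up:

```python
# WPA2-Personal PMK derivation: PBKDF2-HMAC-SHA1 over the passphrase,
# salted with the SSID, 4096 iterations, 256-bit output. The per-session
# key (PTK) is derived later in the four-way handshake from this PMK
# plus both MAC addresses and random nonces (not shown).
import hashlib

def wpa2_pmk(passphrase: str, ssid: str) -> bytes:
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(),
                               4096, dklen=32)

print(wpa2_pmk("correct horse battery staple", "HotelGuestWiFi").hex())
```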
Enterprise (Authentication Server): This method requires the connecting client device to authenticate to a separate authentication server before being allowed on the network. In Wi-Fi, this is generally implemented with WPA2-Enterprise, which uses digital certificates or other variations of pre-loaded credentials on the client device (supplicant) to allow an authentication server to authorize the device to access the network through the access point (authenticator) and establish a unique symmetric 128-bit AES encryption key between the Wi-Fi access point and the client device. While this method provides excellent security, it can be laborious to set up and may be impractical in networks with large numbers of guest devices. Additionally, many IoT appliances currently do not support this method of authentication. However, IoT devices can support certificates for authentication as well as other enterprise-level authentication methods, depending on the protocol in use.
Application Layer Security: In many cases, IoT devices also rely on Application Layer security. In these cases, access is authenticated to the application/service and data is encrypted at the Application Layer before it traverses the network stack and is transmitted on the network. This means that, even if the lower layers do not implement encryption, the transmitted data is still unreadable.
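As a minimal sketch of this idea (using the third-party Python cryptography package; the payload, key handling, and names are illustrative only), the application encrypts its data before the payload ever reaches the lower layers of the stack:

```python
# Application-layer encryption sketch: the payload is encrypted before it
# is handed to the transport/network layers, so even an unencrypted
# wireless link only ever carries ciphertext.
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # in practice, provisioned securely to both ends
cipher = Fernet(key)

reading = b'{"sensor": "temp-12", "value_c": 21.7}'
token = cipher.encrypt(reading)  # what actually goes over the air

# The receiving application authenticates and decrypts the payload.
assert cipher.decrypt(token) == reading
```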
When the network needs to support different classes of users or devices simultaneously, it is common to segment the one physical network into multiple virtual local area networks (VLANs). VLANs are typically (though not always) blocked from interacting with each other, so as to keep the different classes of users isolated from each other. Within each VLAN, the access control and client isolation policies will be different. For Wi-Fi connections, a separate SSID for each VLAN is often used, though some systems support dynamic VLANs where one SSID is used but the client device is placed on a particular VLAN based on feedback from the authentication server.
As an example, a hotel Wi-Fi network commonly places guests, staff, and back-of-house IoT devices on separate VLANs, each with its own access control and isolation policies.
Other IoT networks may not inherently support VLANs, but VLANs are still frequently implemented on the uplink of the gateway. This allows for the use of multiple gateways, and each collection of devices connected with each gateway can be segmented into different VLANs on the remainder of the network.
One of the critical sub-requirements of network usage is to understand how much bandwidth is required, and what maximum level of service, commonly known as the service level agreement (SLA), is to be provided to each client device. The amount of bandwidth will generally be critical to the actual performance of the system, as well as to the perceived performance of the wireless, since the wireless portion of the system inevitably gets blamed whenever there are performance issues, even when the root cause of the problem lies elsewhere. Internet bandwidth is also usually the highest ongoing operational expense (OPEX); thus, it is critical to be in the Goldilocks zone: enough bandwidth available for the application with reasonable margin, but not so much that it unnecessarily drives up OPEX.
In any telecommunications system, there is always one part of the system that is the slowest, dictating the overall throughput capacity. This is known as the bottleneck of the system. Since budget is virtually always a constraint, the bottleneck should be the most expensive part of the system. For wireless systems, the most expensive component is typically Internet bandwidth to the system, due to its high ongoing operational costs. Ironically, Internet bandwidth is also one of the easiest items to upgrade after the system is deployed, as continually increasing demand means that service providers are generally investing in their own infrastructure over time to both increase overall capacity and decrease subscription costs. Internet bandwidth provided to the system should therefore be designed to be the bottleneck: the overall internal wireless capacity should always exceed the Internet bandwidth capacity.
If the application is to transmit small amounts of data from IoT sensors once every several hours, only minimal bandwidth is required. Conversely, if the application is Wi-Fi in a student housing environment, where individual residents are likely to be streaming different 4K videos to multiple devices simultaneously, investment in adequate bandwidth is critical. Of course, we are beginning to see high-resolution streaming video from IoT-class devices as well, where the increased resolution aids computer vision processing.
The first step, therefore, is to determine the appropriate SLA per user. As will be seen later, this topic generally will be a critical topic to discuss with various stakeholders and becomes a sub-requirement of both usage (functional requirement 1 (FR1)) and capacity (FR3). For machine clients (e.g., IoT sensors, video surveillance cameras, etc.), the amount of bandwidth per client and the expected quantity of simultaneous clients on the network should be straightforward to quantify. For human users (e.g., Wi-Fi for guest access), this becomes more subtle and complex, as the requirement has components that are both quantitative (e.g., bandwidth required for Netflix streaming an HD movie to a client device) and qualitative (e.g., the network is perceived to be fast enough by guest users). Many hospitality environments have tried to monetize this with tiered service plans, where customers are offered a free service at a low SLA but can purchase a higher SLA optionally. In practice, very few customers ever purchase the higher-level SLA, yet will complain bitterly if the "free" SLA is not good enough.
Another complication is that the required SLA is likely to increase over time during the life of the wireless system, as applications like 4K and 8K video streaming to client devices become more prevalent. For typical guest access applications, 2-3 Mbps per user (both upstream and downstream) is minimal, with 5-10 Mbps per user being typical. For student housing, specifications of 20-30 Mbps per user are not unusual. These thresholds are inevitably going to increase over time.
For IoT, in most cases, things are very different. The required throughput per device is typically measured in bytes per second (Bps) or kilobits per second (Kbps). Some devices require no more than 8-12 Bps, while others may require 1-2 Kbps. However, they may also require this throughput to be available at any moment, at all moments, or at fixed time intervals. All of this must be known to effectively determine capacity requirements.
Additionally, some requirements, like bandwidth, are likely to change during the life of the wireless system. These need to be understood and identified up front such that the system as deployed is capable of meeting those evolving needs with no or relatively minimal changes.
Once the SLA is determined, the amount of bandwidth to be supplied to the system needs to be quantified. In early telephony, engineers realized that not everybody is on a phone call simultaneously, and thus capacity could be shared across the subscriber base. Hence, an oversubscription ratio is defined to quantify how much real capacity is needed to provide service.
For modern networking with human-controlled end devices (laptops, tablets, mobile phones, etc.), the same concept applies; statistically, not every client will consume its maximum SLA simultaneously. Accordingly, the required total bandwidth needed to adequately meet the needs of the use case can be shared, and thus is significantly less than simply multiplying the number of users by the SLA per user. Granted, this is a single "fudge factor" based on what is truly a fairly complex statistical analysis, but it turns out to be a reasonably accurate guideline. For Wi-Fi in residential apartments and hotels, typically a 25:1 or 20:1 oversubscription ratio is used.
As an example, let's take a hotel where up to 500 simultaneous client devices are expected during peak usage times, and the property wants to provide a maximum SLA of 10 Mbps/device. With a 20:1 oversubscription ratio, the required bandwidth to the property is:
500 users × 10 Mbps per user / 20 = 250 Mbps
For Wi-Fi in student housing, which is significantly more bandwidth-demanding, a 10:1 ratio is more appropriate. Conversely, Wi-Fi in an assisted living facility usually does not get a lot of usage, so a 30:1 or even 40:1 oversubscription ratio is appropriate.
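This sizing arithmetic reduces to a one-line formula. The sketch below simply applies the rule-of-thumb ratios quoted above to the same 500-device, 10 Mbps example:

```python
# Bandwidth sizing with oversubscription: required = users * SLA / ratio.
def required_bandwidth_mbps(peak_users: int, sla_mbps: float,
                            oversub_ratio: float) -> float:
    return peak_users * sla_mbps / oversub_ratio

print(required_bandwidth_mbps(500, 10, 20))  # hotel:           250.0 Mbps
print(required_bandwidth_mbps(500, 10, 10))  # student housing: 500.0 Mbps
print(required_bandwidth_mbps(500, 10, 40))  # assisted living: 125.0 Mbps
```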
A student housing property and an assisted living property are often architecturally similar, so consider comparing two such properties with the same number of residents, same SLA, and even the same number and layout of APs. Nonetheless, the student housing property requires approximately 3x - 4x the amount of bandwidth as an analogous assisted living property, simply due to the differences in how the network shall be used.
Constraints in terms of both the potential bandwidth available at the system location as well as the operating expenditure (OPEX) budget for the bandwidth will influence the results of this calculation. In the example above, it was determined that 250 Mbps was required to provide a 10 Mbps SLA to 500 simultaneous users. However, if the service provider can only get a 100/10 cable circuit for this property, the SLA of 10 Mbps is not realistically achievable.
One can sometimes shave a little from the oversubscription ratio assumption (e.g., use 25:1 vs. 20:1), yet the SLA would still need to be lowered to at most 5 Mbps downstream and 1 Mbps upstream. For a hotel, this might be an acceptable compromise. For a student housing property, this would likely lead to unhappy residents. In the latter case, a more expensive bandwidth alternative would be needed, such as using two to three cable circuits with a WAN load balancer or a fiber connection from a different service provider, both of which may add significant capital expenditure (CAPEX) and OPEX costs.
The requirement for what speed to provide per user will need to be balanced against the budget constraints of your customer.
Remember that the throughput the user experiences will be based on the airtime available to that user (or device) on the wireless medium. Therefore, total airtime utilization required by the various devices and applications becomes the most important factor related to capacity in the channel.
Now that we've discussed oversubscription to provide the throughput needed for human-controlled end devices, let's discuss it in relation to machines. When it comes to IoT devices, it's actually easier to determine the needed bandwidth with more accuracy because machines are, well, not human. Humans act sporadically, almost randomly, and we are attempting to determine the bandwidth required for the variable nature of human actions. Machines act predictably because everything is a pure algorithm. That is, we can know exactly how often a machine will communicate and, for its various communication types, what size the communication will be. This reality does not remove the need for capacity planning, but it does bring more exactness to it.
Are there exceptions to this? Definitely. The IoT devices that are monitoring human activities will have the same variance of communications (or more) as human-controlled end devices. For example, a motion sensor detects motion in the area and will be triggered by animal, human, or human-controlled machine movement in the area (with the exception of fully autonomous robotics with direction finding and obstacle avoidance algorithms). Additionally, location tracking IoT solutions can be impacted by the same variable human element. In other words, if an IoT device transmits only when an event occurs and the event is not predictable, the same variability that is there in human-controlled end devices will be there for those IoT devices.
Given that many IoT deployments will include both fixed communication devices and variable communication devices, the network planner can calculate the capacity requirements for each category separately because they will be very different in relation to potential oversubscription. There is less need for variability (with the exception of future growth and accounting for periodic interference and retransmissions, if implemented) with the fixed communication devices, and the network planner can perform heavier oversubscription calculations for the variable communication devices. Performing these calculations separately and building a network that accommodates both can reduce CAPEX/OPEX costs while still providing sufficient bandwidth and even allowing for future growth.
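A hedged sketch of that two-category calculation follows; every device count, rate, ratio, and margin below is an invented placeholder, and the point is only that the deterministic and event-driven populations are sized separately and then summed:

```python
# IoT capacity split: fixed-interval devices are deterministic, so they are
# summed at face value; event-driven devices share capacity statistically,
# so an oversubscription ratio is applied to that category only.
fixed_devices = [(200, 0.002)]   # (count, Mbps each), e.g., periodic sensors
variable_devices = [(50, 0.5)]   # (count, peak Mbps each), e.g., motion-triggered

oversub_ratio = 5                # applied only to the event-driven category
growth_margin = 1.25             # headroom for growth, interference, retries

fixed_mbps = sum(count * rate for count, rate in fixed_devices)
variable_mbps = sum(count * rate for count, rate in variable_devices) / oversub_ratio

total_mbps = (fixed_mbps + variable_mbps) * growth_margin
print(round(total_mbps, 2), "Mbps of backhaul capacity required")
```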
How to Perform Capacity Planning for an Uncertain Future? There are many techniques that can be used for capacity planning; here are a few:
Business Forecasting: By evaluating the plans and events (both external and internal) of various departments in the organization, you can predict changes that may be requested for your IoT network. You can weight these predictions based on likelihood and determine the likelihood based on project status information, such as whether the project has been greenlighted, budgeted for, etc.
Trend Analysis: For existing IoT networks, you can perform trend analysis. This process involves evaluating historical growth to predict future growth and can be accurate within 20-30 percent if enough historical data is available. The accuracy can be increased by combining it with business forecasting. However, this is not very useful for greenfield IoT implementations with no historical data.
Statistical Forecasting: This method takes trend analysis and increases its accuracy by factoring in external events, seasonality, acquisitions and mergers, and many other likely future events. Statistical forecasting combines trend analysis and business forecasting using statistical models for prediction, making it more accurate than an informal combination of the two.
Traffic Analysis/Modeling: For existing networks, you can perform traffic analysis to determine the actual traffic traversing the network and, therefore, the portion of capacity consumed. This information can be used to predict capacity requirements for upgrades and additions. Traffic modeling is different; it involves simulating network traffic based on known parameters such as the data rate of the links, payload sizes, MAC and PHY overhead of the protocol, and transmissions per time interval. Tools such as MATLAB or dedicated network modeling software like OPNET can be used. Alternatively, you can perform the calculations in a spreadsheet with a formula like the following (computed per device and then aggregated):
transmission duration = ((payload size + MAC header size + PHY header size) × 8) / data rate, where the sizes are in bytes and the data rate is in bits per second
For example, with a transmission using 802.15.4 OQPSK modulation having a 250 Kbps data rate, a 50 byte payload, a 20 byte MAC header, and an 8 byte PHY header, the calculation is:
transmission duration = (50 + 20 + 8) × 8 / 250,000 = 0.002496 seconds ≈ 2.5 milliseconds
This means that you could have roughly 200 devices sending such a transmission every second and still leave 50 percent of capacity for network management communications and contention overhead (wait times before transmitting). The 200 devices would consume approximately 0.499 seconds for each round of transmissions.
In reality, a lot of time can be lost to silence with contention algorithms in such a small window, but this illustrates the basic point.
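The airtime arithmetic above is straightforward to script. The sketch below reuses the frame sizes from the example and, as just noted, ignores the airtime lost to contention and acknowledgments:

```python
# Airtime estimate for a fixed-size 802.15.4 frame at 250 kbps.
def tx_duration_s(payload_bytes: int, mac_hdr_bytes: int,
                  phy_hdr_bytes: int, rate_bps: int) -> float:
    return (payload_bytes + mac_hdr_bytes + phy_hdr_bytes) * 8 / rate_bps

d = tx_duration_s(50, 20, 8, 250_000)
print(f"{d * 1000:.3f} ms per frame")  # ~2.496 ms

# How many once-per-second senders still leave half the airtime free?
devices = int(0.5 / d)
print(devices, "devices consume about", round(devices * d, 3), "s per second")
# ~200 devices consuming ~0.499 s, leaving ~50% for management and contention
```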
What-If Scenarios: This thinking tool can be used to identify likely scenarios that could play out in your IoT network. It can be used in combination with traffic modeling to predict the capacity demands depending on different scenarios.
Expert Judgment: The final method we will mention involves experts, those who've been there and done that before. Their experience can be invaluable in predicting the capacity requirements for greenfield IoT deployments. Based on their experiences, they will bring many additional what-if scenarios to the table and have an almost subconscious understanding of how interference and adds/removes will impact the capacity requirements in a given scenario.
With these techniques in mind, you will be better prepared to tackle capacity planning for your wireless IoT solutions.
While we cover wireless IoT system design completely in the CWIDP learning materials, at the level of the CWISA, you should be aware of the basic concepts that go into the design. This section will provide an overview of the process.
The general process for generating a successful design runs from identifying the use case and assessing its feasibility, through capturing stakeholder needs and sorting them into requirements and constraints, to selecting and evaluating design parameters. Each of these steps is described below.
The use case itself is usually a high-level conceptual vision, driven by an organizational need that is not being fulfilled. A particular solution may or may not be achievable, especially once constraints enter the discussion, but that doesn't change the need for it. If no solution is available off-the-shelf, the benefit of today's IoT is that a custom solution can often be built from components, standardized, and then rolled out.
Often, the barrier to such applications is either too much cost and/or the lack of appropriate technology. In wireless projects, the barriers are often related to the costs of infrastructure backhaul wiring. For instance, if the need is to create a city-wide surveillance network, the cost in terms of money, time, and manpower of ripping up streets and sidewalks to run cable or fiber to each location is likely prohibitive. Wireless technologies eliminate the need for much of that data cabling, but the cost of wireless backhaul may still be significantly higher than the cost of the cameras themselves. As technology progresses in terms of both higher capabilities and reduced costs, such wireless applications become more cost-effective.
Once the overall need of a use case is identified, the typical first steps are to perform feasibility analysis, using a very rough set of initial requirements and constraints and evaluating these against a shortlist of potential technology solutions and their costs. This enables the generation of a rough estimate of costs, time, and manpower. The goal of this analysis is to determine whether the project is even feasible. This is typically done as a budgeting exercise, though it may also involve performing some initial technology evaluation and working with equipment vendors to understand current and potential capabilities and limitations.
If the high-level use case seems feasible, the next step is to gather the complete set of needs of all the potential stakeholders, with the eventual goal of converting such needs into requirements and constraints. This process is much harder than it seems, yet it is also the most important. The total set of stakeholders is not always obvious at the outset, but even when all the stakeholders are identified, do not expect them to be able to fully identify, understand, or articulate their own needs. Even if you are not the network designer, knowing who the stakeholders are is important. This knowledge allows you to communicate with the appropriate individuals or groups when clarifications or modifications are required.
Nonetheless, up-front investment in capturing the needs of the stakeholders is critical to the eventual success or failure of the project. One of the biggest sources of scope creep results from a failure to identify all the stakeholders up front, or a failure to adequately capture all their needs. Stakeholders that are late to the table are the least likely to have their needs addressed, and thus are most likely to be dissatisfied with the result. Furthermore, attempting to satisfy such needs late in the design cycle, which imposes new or altered requirements and/or constraints, can often compromise how well other requirements are satisfied and thus the success or failure of the endeavor overall.
To further complicate matters, stakeholders will have different and potentially conflicting needs. It is the job of the system designer to sort this out, potentially necessitating a negotiation process to get different parties to compromise on their conflicting needs. It is important to identify the stakeholders, but it is also important to prioritize them and keep them involved.
The stakeholders will obviously be unique for every project, though stakeholders generally span the following functional roles. Depending on the scale of the project and the organization, there may be multiple individuals responsible for a particular role, or conversely, a single individual may be responsible for multiple roles.
System Owner: This is the entity (person or organization) that owns the wireless system, or at least the property in which it will be installed and operated, such as a building owner. For the example of a city-wide surveillance network, this is the municipal government. For a hotel Wi-Fi network, this is the hotel management company. For an IoT deployment, this may be the facilities manager, the site manager, the IT group, the OT group, or a corporate manager. It is important to identify the system owner as they have the authority required to make decisions in most cases. The system owner is responsible for funding the initial and ongoing investment in the system. Accordingly, the needs of this stakeholder are largely driven by budget limitations, aesthetic concerns, and branding/customer-facing portions of the system.
System Operator: This role encompasses the persons or organizations responsible for installing, operating, and maintaining the wireless system. This may be supervised by an IT or OT department of the owner's organization or outsourced to one or multiple external service providers. For large projects, there may be several organizations or departments involved in different phases, including external contractors. In the example of a city-wide surveillance network, the physical installation and physical maintenance may be performed by the Department of Public Works, whereas the day-to-day monitoring and management of the system may be performed by the Police Department. The needs of these stakeholders will generally be operational, focusing on the ease of installation and system turn-up, ease of monitoring and managing the wireless system activity from either a central location or from multiple decentralized locations, ease of maintenance, and system uptime and availability. There may also be constraints imposed on the choice of equipment vendors to simplify integration with existing systems.
Integrated Systems: These are the machines and/or systems with or around which the wireless system needs to work. For co-located wireless systems, this requires frequency coordination to ensure that the wireless systems don't interfere with each other. For surveillance applications, the integrated systems are the cameras, and there will be needs related to the image quality (number of pixels, frame rate, compression, etc.) and the number of cameras at each location, which will ultimately drive the necessary throughput requirements at those locations. For a network of IoT devices, this will drive needs based on the types, quantity, and location of the devices using the wireless network as backhaul infrastructure. Aside from video surveillance cameras, most IoT devices generally do not need much bandwidth, but a high density of devices may influence the number of gateways needed. Furthermore, a Wi-Fi network may need to be established only for the integrated systems; one may not need a Wi-Fi access point in a boiler room for human user access, but the sensors and actuators to regulate the building machinery will certainly require wireless access, possibly with some 802.15.4-based protocol or another protocol. The needs here are primarily going to be centered around functionality, such as throughput, capability to pull data from sensors and/or push data to actuators, availability, security, and privacy.
Clients / Subscribers: These are the people who are using the system, i.e., the people for whom the use case applies. In a building Wi-Fi network, these are the users who are connecting their devices to the Wi-Fi. For city-wide surveillance, this is the public being observed. The needs here will be centered around functionality, including throughput, usability, availability, security, and privacy.
Indirect Organizational Entities: These are the people or organizations that don't have a direct interest in the system yet still have some level of involvement, especially during the procurement and installation phase. Examples of this could include the accounting and marketing departments of the owner, the building manager, etc. These stakeholders are likely to impose project constraints but may also present additional functionality needs (e.g., coverage areas not otherwise identified) or even opportunities (e.g., an unused/underused ring of dark fiber in the building/municipal area left over from a separate project).
Government or Regulatory Authorities: Wireless systems must meet regulatory constraints on the frequencies used and on transmit power levels. There may also be local, state, or federal laws regarding specific applications, especially related to privacy in video and audio surveillance use cases.
It is usually necessary to interview the stakeholders to solicit their inputs, either in person, by phone, or through email or a web survey. It may be impractical or cost-prohibitive to speak directly to all stakeholders, but in-person or video/voice calls are always better than email or an online survey. Such conversations are often revealing, as it is easier to get a stakeholder to verbalize otherwise unstated, controversial, or "delicate" needs in a conversation than in a written email, even when the person knows the interview is being recorded. One common example of a "delicate" need is whether coverage is necessary in the bathrooms. (Bathrooms are typically high-use areas for smartphones, but rarely does anyone acknowledge this.)
It is important when soliciting needs from the various stakeholders to assess their relative importance and elasticity. Importance is a measure of which functions are absolutely essential for success, versus functions that are "nice to have" if a lot of extra effort or cost is not involved. Different stakeholders are likely to have different perspectives on this. Elasticity is a measure of how much flexibility there is in satisfying particular needs. For example, in some circumstances, the maximum cost budget is fixed, whereas other times the budget is a target value, and there may be some willingness to increase it, especially in return for adding some "nice to have" functionality.
Even if the stakeholders don't get everything they are asking for, it is important for the stakeholders to feel heard and understood. There will be much wider buy-in to the final design solution if everyone has had the opportunity to provide their input and perspective.
Once the needs are captured from the various stakeholders, they need to be sorted into requirements and constraints. The needs identified by the stakeholders will ultimately be used to judge whether the system as implemented is successful. However, needs are usually qualitative items that rarely can be directly designed to.
Recall that requirements dictate what the system must do in order to work properly, while constraints dictate what the system must work around in order to meet the requirements. In order to properly characterize a stakeholder need as either a requirement or a constraint, define the requirements to be intrinsically independent of each other, so as to capture all of the core functionality the system has to deliver, without (yet) considering how those requirements will be satisfied and what may constrain your ability to satisfy them. A use case will generally dictate a list of multiple independent requirements.
For more complex systems, it commonly makes sense to structure requirements hierarchically. This does not violate independence, though it does allow for the requirements to be grouped logically. For example, in a smart building project, air quality requirements could all be in a category labeled Air Quality Requirements with sub-elements labeled CO2 Detection Requirement, Oxygen Level Requirement, Air Circulation Requirement, Air Filtration Requirement, etc.
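A minimal sketch of such a hierarchy, reusing the smart building labels above (the data structure itself is hypothetical, chosen only to show the grouping):

```python
# Hypothetical sketch: grouping independent requirements hierarchically.
from dataclasses import dataclass, field

@dataclass
class Requirement:
    name: str
    children: list["Requirement"] = field(default_factory=list)

air_quality = Requirement("Air Quality Requirements", [
    Requirement("CO2 Detection Requirement"),
    Requirement("Oxygen Level Requirement"),
    Requirement("Air Circulation Requirement"),
    Requirement("Air Filtration Requirement"),
])

def walk(req: Requirement, depth: int = 0) -> None:
    print("  " * depth + req.name)  # indent to show the hierarchy
    for child in req.children:
        walk(child, depth + 1)

walk(air_quality)
```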
Unlike requirements, constraints can be highly interdependent with each other, and inevitably impact one or more requirements. Constraints ultimately limit the design options available to be selected and implemented, and thus ultimately drive how well the design can satisfy the requirements.
A system is overconstrained if one or more constraints either directly conflict with each other or conflict with one or more requirements. For example, a system is overconstrained if there is a requirement to provide video surveillance of a particular region, but local government regulation makes such surveillance illegal. A system is also overconstrained if no viable solutions can be devised; for example, the budget on a project may be so small and inelastic that no equipment vendor can meet the cost targets while satisfying the other performance requirements. It is best to identify such overconstraints as early as possible in the process. Resolving them means circling back with the relevant stakeholders to negotiate a compromise: removing, or at least relaxing, the affected requirements or constraints such that a design solution becomes possible.
Constraints are generally unique to each project but often fall into the categories of either physical or organizational. They may also be categorized as internal or external, with internal constraints being self-imposed and external constraints being other-imposed, such as with government regulations.
Physical constraints are those dictated by the environment. For wireless systems, such constraints generally include the following:
Interior and exterior building materials: When radio frequency (RF) signals interact with objects in the environment (e.g., walls, buildings, furniture, people, etc.), they are subject to several different physical effects that collectively attenuate the signal. Most importantly, RF propagates differently through different building materials, and lower frequency signals generally tend to propagate better through building materials than higher frequency signals, enabling lower frequency signals to transmit farther. Additionally, building materials like metal are far more reflective than wood or drywall, which will influence where the RF signal propagates. The building materials, in combination with the layout of the property, will serve to dictate how many wireless transmitters and receivers are necessary.
Lack of Cable Path Availability Within Buildings: It is much easier and less expensive to pull power and/or Ethernet cabling in new construction before the walls are installed vs. an existing facility, and easier to pull cables in environments with drop ceilings vs. hard ceilings. In existing buildings with hard ceilings, one is often limited to pulling cable down hallways in cable chases (i.e. crown molding at the top of the wall near the ceiling), and sometimes even that isn't possible. If telephone closets aren't stacked on adjacent floors, one may have to core drill through stairwells or other rooms to get cabling where it needs to go. Cable paths, or lack thereof, can significantly limit where wireless transmitters can be installed, and thus the resulting signal coverage. Being limited to hallways can also significantly limit certain applications, such as Wi-Fi and other short-range IoT protocols, due to decreased in-unit/office/room coverage and higher co-channel interference. In extreme cases, the application may not be doable, as the lack of cable path availability means that wireless transmitters cannot be placed where they are needed for an application. For example, indoor positioning via Wi-Fi cannot be performed accurately if APs can only be installed in the hallways since trilateration algorithms would not be able to determine which side of the hallway a signal is coming from. In such an over-constrained case, it may be necessary to relax the positioning accuracy requirement to the general area of a floor, and not a specific room, because of the inability to put wireless sensors in the right locations. It may also be necessary to change the design approach from a Wi-Fi-based system to a BLE-based system using battery-powered sensors that use the Wi-Fi as backhaul, as these would provide positioning accuracy but increase maintenance costs and operations since batteries would need to be periodically changed.
Lack of Cable Path Availability Between Buildings: It is common to need to interconnect multiple buildings for networking or surveillance applications, but it can be cost-prohibitive and/or logistically impractical to rip up roadways and sidewalks to run Ethernet or fiber cable. This is especially true if the buildings are not all on the same private property, such as being across a public street from each other. This type of constraint can usually be worked around by adding in point-to-(multi)point links, though this itself adds expense and channels need to be more carefully managed if co-locating PTP/PTMP links and Wi-Fi APs, as strictly speaking, these are co-located wireless systems serving different functions, even though they may be part of the same overall local area network (LAN).
Overlapping Wireless Systems: This is a frequent constraint in Wi-Fi networks but is applicable to multiple wireless systems on the same frequency band that overlap with each other. It is important to distinguish between co-located systems and neighboring systems, as well as who owns and controls the overlapping wireless systems. Co-located wireless systems are two systems that intentionally cover the same area. A common example would be a Wi-Fi network for client access and a Zigbee network for IoT or Bluetooth beacons for indoor positioning or any of the other common 802.15.4 protocols that run in 2.4 GHz. For co-located wireless systems, it is best to try to coordinate the channels used (e.g., limiting Zigbee to specific channels at the edge of the Wi-Fi range). Such frequency coordination can be done if the system owner actually owns all of the co-located systems, as this gives the system designer complete control over all of the operating channels. However, if a third party (non-stakeholder) owns and controls that co-located system, that can be devastating to satisfying your key system requirements. Neighboring systems are intended to cover adjacent physical areas, but due to the physics of RF propagation, there is usually some unintentional signal overlap. A very common example of this is two neighboring Wi-Fi systems in adjacent apartments/office suites/buildings, and usually these are owned and controlled by separate entities. In this case, there isn't much that can be done - one can try to do frequency coordination, but with unlicensed spectrum there is no legal recourse if both parties are following the regulatory rules and neither party is intentionally jamming the other. Auto-channel is usually recommended for such scenarios, but auto-channel has its own limitations and can compromise the quality of the rest of your own wireless network. Sometimes, you need to design your network to your own needs and hope your neighbors adapt to you, as opposed to you adapting to your neighbors.
Organizational constraints are those dictated by the customer or other stakeholders, based on their organizational needs. Common types of organizational constraints are as follows:
Budget: All projects have some constraint on the amount of money that can be spent, though the level of elasticity in the budget can vary widely by vertical market, by customer, and even for specific projects. Generally, the lower the overall budget available and the more inflexible the elasticity of that budget, the more restricted the potential design options and the more likely that some requirements will not be adequately satisfied. Unrealistic expectations on budgets will overconstrain a design, and particular requirements or other constraints may need to be relaxed or eliminated if the budget cannot be compromised upon. Note that many customers will have specific constraints on budget related to capital expenditures (CAPEX) vs. operational expenditures (OPEX). Some organizations will have funds allocated for a particular project for the initial install (high CAPEX) but will not want to make substantial changes to their ongoing operational budget (low OPEX). Such projects generally need equipment that doesn't require ongoing licensing or support contracts during operation or at least will need to pay for such fees in advance. The analogy here is purchasing a new car with all cash up front (i.e., no financing). Other organizations may be structured where they cannot provide all of the funds for equipment up front (low CAPEX) but would prefer to spread out their costs over time (high OPEX). In these situations, the equipment is often leased to the customer, either permanently (e.g., an automobile lease) or for a certain time frame (e.g. an automobile purchase with a car loan from the dealer paid off over a series of years).
Schedule: Like money, there is usually never enough time, though schedules can also vary widely in their elasticity. When deploying a wireless system for a particular event, such as temporary Wi-Fi for a concert, the schedule by definition cannot be compromised. The schedule can emerge as a devastating constraint late in a project. For example, in new construction, a wireless system generally cannot be installed until the building is nearly complete, both for equipment security as well as practical limitations; if you are installing Bluetooth beacons on walls, the walls have to be physically in place before the beacons are installed. Weeks or even months of construction delays are all too common, which can significantly compress the installation time allotted to a wireless system in a rush to get the building open to tenants on time. This can lead to installation shortcuts (e.g., skipping Ethernet cable testing, not properly verifying wireless system configuration settings, etc.), which can easily lead to degraded system performance.
Aesthetics: Except for hardcore wireless engineers, nobody likes to see antennas. Some high-end hospitality or apartment complexes, as well as historical museums and landmarks, don't even want to see access points or IoT appliances at all and will insist that they are placed in the walls or ceilings. In addition to making installation and serviceability more difficult, this can impede functionality as additional structure now exists between the APs and the client devices that was not accounted for in the design. There are some creative ways around aesthetics issues, such as painting or even skinning access points to make them virtually invisible. Projects with significant aesthetics constraints are usually willing to spend additional funds to meet such constraints. Conversely, lower-end residential and commercial environments, as well as most industrial environments, are likely not to impose any aesthetic constraints.
Equipment Security: This is a limitation on where wireless system components need to be physically located to protect the hardware assets themselves. On some properties, potential theft of equipment can be a problem, so the equipment needs to be either locked down, hidden, or located out of reach. Damage protection may also be a significant issue, either accidental (e.g., a stray basketball in a gymnasium) or intentional (e.g., concerns about people shooting at equipment). Equipment placed in harsh environments (e.g., extreme heat or cold, saltwater beaches, dirty industrial facilities, etc.) may also require protection from the environment, as well as temperature regulation from fans, heaters, or coolers. Some vendors have equipment in hardened enclosures, or equipment can be placed in locked and/or hardened cases, though usually this incurs additional expense and installation effort. Even for properties where physical equipment security is not a significant concern, it is always a good idea for the supporting equipment in racks in the main distribution frame and intermediate distribution frames (e.g., switches, routers, servers, etc.) to be placed in locked rooms or rack cabinets with limited access, so as to minimize the risk of a system outage.
Use of Particular Vendors: Most equipment vendors want to enforce brand loyalty, and do this by making their product offerings sticky, to make it more challenging to switch to a competitor. In wireless applications, this is often done with a combination of sales tools such as cloud-based AP controllers, training, volume purchasing incentives, licensing, and, most importantly, interpersonal relationships. If a service provider is heavily invested in a particular vendor's technology and has already deployed it elsewhere for several other projects, they are generally not going to switch to a new vendor. Sometimes, the customer will be the one to dictate the allowed systems to be used; it is extremely common in hospitality Wi-Fi for hotel brands to dictate the particular AP vendors and models that their franchisees can deploy for "brand consistency." Alas, not all properties and projects are created equal, and different equipment vendors tend to have different niche specialties, so a vendor that works well in one type of vertical or environment may not be the optimal choice for others with respect to performance and/or price.
Accessibility: Many properties may have restricted or secure areas that are only accessible at certain times, with advance notice, and/or with an escort. Such constraints generally complicate installation and servicing, which can put pressure on scheduling constraints and/or availability and uptime requirements. That said, be careful to avoid the temptation to let an inconvenience justify a compromise to core functionality and performance. In hospitality and other multi-dwelling unit Wi-Fi deployments, accessibility constraints are usually cited as justification for putting access points in hallways instead of in the units, even though hallway deployments generally compromise the coverage and co-channel interference of the Wi-Fi, and thus its ultimate performance. Most such properties would consider overall tenant satisfaction with the Wi-Fi network far more important than the inconvenience of giving a tenant advance notice before entering the unit to conduct a repair; indeed, the tenant would most likely welcome the technician in right away to fix the Wi-Fi. If this is properly explained up front, the system owner and the system users are quite likely to relax the accessibility constraints and/or the uptime requirements.
Unique Property Constraints: Some specific verticals or geographic locations may impose unique constraints. Unique property constraints are generally manageable but can require some out-of-the-box creativeness or additional design effort to create a customized solution vs. using off-the-shelf components. As an illustrative example, deploying a wireless system in a hazardous waste processing facility can be especially challenging, since all electronics need to be in explosion-proof enclosures to protect the facility itself. This constraint goes well beyond normal outdoor ingress protection ratings and typically necessitates that all active electronic components are sealed in thick steel boxes, which makes it hard to propagate wireless signals. Such a constraint may necessitate a custom design where external antennas are mounted to the outside of the box with appropriately sealed antenna connectors and perhaps specialized shielded and sealed cabling.
Inevitably, most projects suffer from some level of scope creep, where the requirements and/or constraints are changed at some point during, or even after the completion of, the project. Scope creep usually occurs because the full set of stakeholder needs was not properly identified and captured up front. However, sometimes needs genuinely change: budgets are cut; certain hardware isn't available or doesn't perform as expected, requiring a workaround elsewhere; or project scopes are enhanced, such as adding more cameras to a wireless surveillance network, necessitating more bandwidth. For example, in surveillance projects, additional cameras are almost always introduced late in the design cycle, sometimes even after the wireless backhaul network is fully deployed and operational. Areas of coverage are easily missed in the early stages, or, once the system is implemented, stakeholders surface who want coverage that was not previously communicated. It's part of the process.
Scope creep will occur, and therefore it needs to be managed in the design process. There are three basic techniques to design for scope creep, namely (1) include excess margin, (2) minimize complexity, and (3) maintain functional independence. Note that some of these techniques may not be possible, or may be of only limited applicability, given the specific requirements and constraints of a project.
Design Margin: Never design to the limits of your hardware. In a wireless link, it is usually best practice to design for only 60% - 70% of the rated capacity. Links may not be perfectly aligned, may be subject to external interference, and may be forced to carry additional data load, for example, because more client devices connect than were specified. There may also be surprises, such as building materials in the environment not being what was expected. Designing with margin will inevitably lead to additional access points, but this is well worth doing up front if there is elasticity in the budget. Similarly, on the wired side of the network, never consume every port in a switch, but always leave about 20-25% of your ports unused - this allows for adding devices later, accommodating bad switch ports, etc. Note that there is a risk of over-designing the system in pursuit of design margin, which should be avoided. From a capacity standpoint, this may mean designing for excess capacity that the system will never use, which naturally drives up both cost and system complexity. There is always a fine line to balance between designing with sufficient margin vs. over-designing the system.
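These two rules of thumb are easy to encode as simple planning checks; the percentages below are the guidelines just quoted, not hard limits:

```python
# Design-margin rules of thumb expressed as planning checks.
import math

def planned_link_load_mbps(rated_mbps: float, margin: float = 0.65) -> float:
    """Design for only 60-70% of rated wireless capacity; 65% used here."""
    return rated_mbps * margin

def ports_to_populate(total_ports: int, reserve: float = 0.22) -> int:
    """Leave roughly 20-25% of switch ports unused for growth and failures."""
    return math.floor(total_ports * (1 - reserve))

print(planned_link_load_mbps(1000))  # plan around ~650 Mbps of a 1 Gbps link
print(ports_to_populate(48))         # populate at most 37 of 48 ports
```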
Minimize Complexity: While harder problems will naturally necessitate more complex solutions, it is generally advisable to minimize complexity wherever possible. The greater the complexity, the harder it is to install, operate, and maintain, and the less robust it is to changes when (not if) the scope creeps. Many vendors are beginning to offer solutions leveraging machine learning and artificial intelligence to attempt to manage the complexity of modern wireless deployments. While these algorithms are necessary in certain specific use cases and can provide enhancements for both operations and diagnostics, they add their own additional layer of complexity to the system and may lead to unexpected behavior.
Minimizing complexity usually can be achieved with the following techniques:
Minimize the number of components: In general, the fewer pieces of hardware you have, the easier a system is to install, operate, and maintain. Note, this may need to be balanced against the need to have additional components for margin as well as redundant equipment if high availability / short mean-time-to-repair (MTTR) is a critical requirement.
Standardize the design: While a wireless system is always customized to its environment, complexity can be significantly reduced if standard components (e.g., APs, switches, routers, sensors, etc.) are used and standard configurations are implemented, such as a standardized IP address and VLAN scheme used across multiple projects, even if not every VLAN / IP range is used on specific projects. The more that multiple similar projects are configured to look like each other, the simpler a deployment will be to implement and the easier it will be to troubleshoot. Note that some projects may impose constraints on using particular equipment vendors and/or conforming to a particular IP addressing and VLAN scheme, in which case the design must conform to those constraints.
Avoid excess features: In general, higher-end enterprise hardware for wired and wireless network components (e.g., APs, switches, routers, etc.) has additional features and subsystems that may not be used on your projects. Examples from Wi-Fi networking include wireless intrusion detection and/or prevention systems (WIDS/WIPS), layer 7 firewalls for stateful packet inspection, integrated antenna beam steering, artificial intelligence (AI) / machine learning (ML), etc. These features add complexity in both initial setup and troubleshooting, especially if they are enabled when they are not intended to be. Additionally, the cost of these features is embedded in the price of the hardware, whether you use them or not. Such features exist for particular use cases and applications; if such hardware and/or software features are applied to satisfy particular requirements or constraints, by all means deploy them. If not, however, consider going with simpler systems that don't provide those features, to save on both cost and complexity.
Require Installers and Maintainers to Produce Professional Installations: There are several best practices and guidelines on wiring techniques to keep the main distribution frame (MDF) and intermediate distribution frames (IDFs) neat and tidy. This includes using appropriate lengths of patch cables (i.e., so long cables do not clutter the installation), using patch panels and patch cords instead of directly running the wired feeds into a network switch, color coding patch cables by application (voice, video, data, etc.), bundling cables in cable trays, using zip ties or velcro ties to bundle patch cables, etc. This adds some extra time, effort, and thus cost to the original installation, but will save significant effort and money later on during troubleshooting, as well as avoiding mistakes during system maintenance or when (not if) scope creep requires adding additional components to the network.
Maximize Functional Independence: The requirements of the system are defined independently from each other. The design parameters selected to satisfy the requirements and constraints will dictate whether or not that independence is maintained. The more that independence is maintained in the design, the easier it becomes to accommodate scope creep, as the change in a particular functional requirement will only impact one aspect of the design, and not ripple into the design of how other requirements are satisfied.
Once the requirements and constraints are fully captured, then design parameters can be generated and evaluated. The specific design parameters need to be matched to the specific requirements and constraints. Design parameters dictate how the requirements are going to be satisfied. There are always choices to be made in satisfying the functional requirements, even in the presence of constraints. As stated above, there is no one "right" answer, but there will be several better and worse alternatives, so a systematic method is necessary in order to evaluate different design alternatives and select the best options of all the available choices. Several methodologies have been proposed that vary in both structure and approach, but all such methods generally serve to maximize independence and minimize complexity. One such method, known as axiomatic design, is presented below as it is simple to understand and apply effectively in the design of wireless systems.
Functional requirements (FRs) are independent of each other, by definition. Ideally, each functional requirement should have one, and only one, corresponding design parameter (DP), and that DP should only influence its corresponding FR. This ideal case is known as an uncoupled design. As design requirements get more intricate and more constrained, this is usually difficult, if not impossible, to achieve in practice. A coupled design is the case where all of the DPs impact all of the FRs. This is the situation that needs to be avoided, as the change to any single FR or constraint (e.g., from scope creep), and thus to its corresponding DP, impacts all of the other FRs. The change to the DP requires additional changes to other DPs to compensate, which then ripple back to the original FR. Such a design requires iteration and thus becomes very difficult to optimize. More importantly, the design is very fragile, and will, therefore, have difficulty accommodating even minor changes in the requirements.
Fortunately, appropriate choices of DP can limit the amount of coupling, and it is frequently possible to select and limit DPs in scope to provide a decoupled design so that the DPs can be changed in a particular sequence to provide the FRs without requiring further iteration.
In an ideal design, the number of FRs equals the number of DPs. An insufficient design exists when there are more FRs than DPs, as it is impossible to independently satisfy all the FRs because there is an insufficient number of DPs to do so. Conversely, a redundant design exists when there are more DPs than FRs. In this case, one or more of the additional DPs can generally be held fixed, or tweaked within minor ranges, to allow independent or sequential manipulation of the other DPs. (The Principles of Design, N. P. Suh, 1990)
A simple illustration of these principles is the design of a water faucet. The two fundamental FRs for a water faucet are as follows:
FR1: Control the flow rate of the water.
FR2: Control the temperature of the water.
Water is generally supplied to a faucet via two pipes, one supplying hot water and the other supplying cold water. Accordingly, the seemingly simplest solution, as shown in Figure 3.1, is to have two faucet valves, one to control the volume of hot water flow and one to control the volume of cold water flow. The hot water and cold water valves are therefore the two DPs. However, anyone who has ever used such a faucet knows intrinsically that this is a coupled design - you cannot adjust either valve by itself without affecting both water flow rate and water temperature. Getting the desired temperature and flow rate, therefore, requires iterating the position of both valves.
Contrast this with a faucet with a mixer tap valve, as shown in Figure 3.2, such that moving the lever vertically controls the flow rate by drawing in water from both pipes equally, while moving the lever horizontally controls the temperature by deliberately changing the valve size (i.e., inlet areas) of each pipe, thus altering the ratio of hot water to cold water. Both parameters can now be controlled independently. A mixer tap valve generally assumes that the pressure and size of the hot and cold water feeds, and thus their volumetric flow rates, are equivalent. What would happen, however, if one of the feeds had a disproportionately higher flow rate, such as a system constraint where the cold-water pipe provides cold water at a higher pressure, and thus a higher flow rate than the hot water pipe?
In this scenario, adjusting the vertical lever of the mixer tap valve impacts both the flow rate and the temperature, as the volumetric flow rates from each supply pipe are not equal. However, adjusting the horizontal lever of the mixer tap valve still only impacts the temperature (i.e., the ratio of hot to cold water) and not the flow rate. By adjusting these two parameters in sequence, i.e., first the vertical lever for flow rate and then the horizontal lever for temperature, both parameters can be adjusted without further iteration.
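The value of this sequencing is easy to demonstrate numerically. The following Python sketch models the mixer tap abstractly (the functions are illustrative stand-ins, not a physical plumbing model): flow depends only on the vertical lever, temperature depends on both levers, so solving in sequence requires no iteration:

```python
# Decoupled (triangular) design: FR1 depends on DP1 only; FR2 depends on DP1 and DP2.
def flow(v):               # FR1: flow rate, set by the vertical lever (DP1)
    return 10.0 * v        # assumed 10 L/min at fully open

def temperature(v, h):     # FR2: temperature, influenced by both levers
    return 10 + 40 * h + 5 * v  # assumed mix model with unequal supply pressures

target_flow, target_temp = 6.0, 38.0

v = target_flow / 10.0                 # step 1: invert flow(v) to satisfy FR1
h = (target_temp - 10 - 5 * v) / 40.0  # step 2: invert temperature(v, h) with v now fixed

print(flow(v), temperature(v, h))  # 6.0 38.0 - both targets hit, no iteration
```

In the coupled two-valve design, by contrast, both FRs depend on both valves, so neither step can be inverted in isolation and the user must iterate.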
This technique can readily be extended to larger systems, such as the high-level functional requirements that apply to a typical wireless network, whether it is Wi-Fi, Bluetooth, Zigbee, Z-Wave, or even cellular. The typical functional requirements for a wireless network are as follows:
FR1: Connect human and/or machine client devices and applications to a wireless network. This requirement dictates understanding the intended use case(s), so it encompasses what types of client devices will connect to the network and what types of applications those devices will be running. For a Wi-Fi network, this may consist of smartphones, tablets, and laptops in an environment doing email, web browsing, video streaming, etc. For a wireless in-building positioning network, this could consist of Bluetooth beacons sending signals to smartphone apps. This requirement also encompasses how clients will be authorized and if multiple types of client devices need to access the same network, such as guests connecting to an open Wi-Fi network via a captive portal, IoT devices and/or cameras connecting to a secured network using WPA2 Personal, and staff devices connecting to the network with WPA2 Enterprise. This requirement will dictate the number of SSIDs that need to be broadcast, as well as the security, access control, and bandwidth requirements of each SSID. The bandwidth required per device type is also included here.
FR2: Provide adequate wireless signal coverage to all areas of the facility. This requirement dictates understanding where client devices will be accessing the wireless network in the facility and the appropriate signal strengths to ensure at least minimal acceptable levels of performance.
FR3: Provide adequate capacity. As more and more devices connect to wireless networks, this requirement reflects the need to provide enough capacity to handle all the simultaneous devices connected to the network. This will be satisfied both in terms of the technology of the APs (e.g., for Wi-Fi this would consist of the choice of 802.11n vs. 802.11ac vs. 802.11ax) and the quantity of APs, as more APs are required to handle high client density environments beyond simple signal coverage. Note that this is intentionally separated from the client devices and applications; if three SSIDs are required on a Wi-Fi network because of the different client device types and applications, those three SSIDs are still required independently of whether there are 30 client devices, 300 client devices, 3,000 client devices, or 30,000 client devices connecting to the network simultaneously. Naturally, different SSIDs may require accommodating different quantities of client devices.
FR4: Manage the network. This requirement dictates the need to monitor and maintain the network, and potentially to make changes to the network over time. This may dictate the need for a network management system, a vendor's cloud controller, a custom framework integrating APIs from multiple sources to get disparate systems to intercommunicate properly, or simply configuring everything in standalone mode and only looking at it when something breaks. This mechanism will differ depending on whether it is an internal IT team vs. an external vendor maintaining the network, whether multiple vendor systems need to be integrated, etc.
FR5: Integrate with the backhaul infrastructure. There are no wireless networks without wires, and the quality of the wireless network is only as good as the quality of the wired network infrastructure that it relies upon. Thus, this requirement encompasses the need for all the cabling infrastructure, wireless PTP/PTMP backhaul links, network switches, and routers necessary to establish communication both through and outside the network. This requirement may also require integration with a data analytics engine to capture data and process it to accommodate the use case, such as for a wireless in-building location network.
Note that these requirements have been defined independently of each other. The design parameters selected may (and usually will) break that independence. Design coupling occurs when a particular design parameter influences multiple requirements.
The degree to which the design parameters allow you to satisfy these requirements independently will ultimately dictate how well one can accommodate changes to particular requirements (i.e., scope creep) without sabotaging the overall functionality of the system.
In terms of constraints, budget is extremely common, though other constraints such as aesthetics, cable paths, co-located RF networks, etc., need to be considered.
The corresponding design parameters for a generic Wi-Fi network are as follows:
DP1: Access Point Model(s).
This dictates the choice of a particular AP vendor and the model(s) of access point, which determine the technological capabilities needed to satisfy client device types and applications (FR1) as well as environmental and mounting needs. Depending on the coverage area requirements (FR2), multiple compatible models may be selected, such as indoor access points vs. outdoor access points, or models with external antenna ports to accommodate directional antennas. If there is a constraint to using a particular vendor, either due to budgetary constraints or constraints related to existing knowledge and remote management infrastructure, the available model choices and features may be limited.
DP2: Access Point Locations.
This dictates where the APs are placed throughout the facility to satisfy both coverage (FR2) and capacity (FR3) requirements. Constraints on aesthetics or where cables can physically be run may limit where the APs are located and ultimately affect the quality of coverage in specific areas.
DP3: Access Point Channels.
This dictates the channel settings on each radio frequency band that the AP operates on. APs on the same or overlapping channels in neighboring areas can result in interference, which significantly impacts throughput performance (FR1) and capacity (FR3). Frequency coordination is essential, especially when accommodating other wireless systems in the environment on the same radio frequencies. For example, if a Zigbee system for IoT applications is co-located, channel limitations must be managed to prevent interference.
DP4: Access Point Transmit Power.
This dictates the transmit power settings on each radio frequency band. The transmit power affects the coverage area (FR2). In high-density environments (FR3), transmit power may need to be lowered to create smaller coverage cells and accommodate more APs within a specific area, minimizing self-interference and improving capacity.
DP5: Network Management System.
This dictates the system used to monitor and manage the network. Many AP vendors have cloud-based or on-premise controllers for remote monitoring and configuration. If the management system is already chosen (either by internal IT teams or a third-party vendor), the choice of AP vendor and model may be constrained to ensure integration with the management system.
DP6: Wired Network Infrastructure.
This dictates the backbone infrastructure that supports the wireless network and provides backhaul from APs to the Internet, servers, or other endpoints. This includes cabling, network switches, routers, and wireless PTP/PTMP bridge links. Budgetary and physical cabling constraints may limit the effectiveness of the wired infrastructure.
This example shows the design coupling between functional requirements (FRs) and design parameters (DPs). The design parameters will often influence multiple requirements, and balancing these elements is crucial to meeting the system's needs while navigating constraints effectively.
There are certain choices we can make in the design parameters above to ensure that we result in a decoupled design. Often, these choices are characterized by vendors and other engineers as best practices in deployments. Best practices are usually techniques that are empirically learned over time and recommended because they seem to work in many applications. In actuality, best practices work because they serve to maximize the independence (i.e., reduce the overall coupling) in a design, making deployments simpler and easier to manage.
In Wi-Fi, it's considered best practice to put the APs in rooms instead of hallways, and to stagger their position on neighboring floors. Why? We know that the client devices (FR1) tend to have significantly weaker transmitters than the access points (DP1). Accordingly, the AP is generally shouting whereas the client device is whispering. Optimal performance is therefore achievable by placing the APs (DP2) as close as possible to the clients with the minimum number of physical obstructions (e.g., walls).
With respect to other APs, we want to discourage overlapping communications due to channel conflicts (DP3), so the APs should be placed as far apart as possible with as many intermediate obstructions as possible. In practical terms, this means staggering the position of APs both horizontally and vertically. APs should be placed in rooms on alternating sides of the hallway and staggered from floor to floor. By positioning APs so that they have minimal AP-to-AP interference, the impact of the AP position (DP2) on ultimate performance is minimized if not eliminated.
For channel settings (DP3), a static staggered channel pattern on both the 2.4 GHz band and 5 GHz band is recommended, based on the AP positions (DP2). Additionally, unless throughput requirements (FR1) dictate otherwise, it is generally best to use the smallest channel sizes available so as to maximize the number of independent channels. This allows the largest amount of space and the largest amount of internal building structure between APs that repeat the same channel. For high-density deployments (FR3) with dual-band client devices (FR1), it may be necessary to disable 2.4 GHz radios to minimize co-channel interference, such that high-density capacity is handled on the 5 GHz band and the 2.4 GHz band is primarily used for lower-capacity coverage.
The transmit power settings (DP4) impact the effective range at which an AP can be heard by a client device (though not the range at which a client device can be heard by the AP). Furthermore, the laws of physics (effectively a constraint) dictate that 5 GHz does not travel as far as 2.4 GHz, especially through walls and other structures. To simplify AP location (DP2) and channelization patterns (DP3), it is common best practice to set a fixed transmit power on all APs, which makes the coverage area roughly the same and allows the APs to be located (DP2) in an evenly spaced manner, which also allows the channel pattern (DP3) to be simpler to formulate. Furthermore, a fixed offset of 6-9 dB between the 2.4 GHz and 5 GHz bands ensures a roughly equivalent coverage area (DP2) on both bands. If particular areas require more or less coverage, due to building layout and structure, transmit power can be tweaked on individual APs.
This is illustrated in the following example of a Wi-Fi design for a multi-level hotel. In Figure 3.3, the APs are located in the hallways, and while staggered in position from floor to floor, they still result in coverage problems in particular guest rooms (FR2), as well as high co-channel interference between APs, despite having proper channel (DP3) and transmission power (DP4) settings. By contrast, Figure 3.4 shows the result of placing the same number of APs in alternating rooms on each side of the hallway. Coverage in the guest rooms (FR2) is dramatically improved, and co-channel interference between APs, while not eliminated because of the limited channel choices on the 2.4 GHz band, is drastically reduced (DP3).
Thus, we can decouple our design matrix by using the following best practices in sequence:
Set a fixed transmit power (DP4) on all the APs on both the 2.4 GHz and 5 GHz band to ensure an equal coverage area (DP2) from all APs on all bands. This resolves the redundant design problem by selecting one DP to have a fixed value.
Select the APs (DP1) for the application. If using APs with internal antennas, which is common to meet basic aesthetics constraints, the antenna pattern and gain is fixed based on the AP vendor and model. If using APs with external antenna ports, the antennas must be selected at this step.
Select the AP locations (DP2). This can be done starting with predictive modeling as shown in the figures above, and/or reinforced with passive site surveys to understand how the AP model selected will propagate through the walls in the environment. Placing the APs within rooms instead of hallways also simplifies channelization (DP3) and minimizes the potential for co-channel interference.
Select the AP channels (DP3). The antenna (DP1), locations (DP2), and transmit power (DP4) are now established, so a channelization pattern can now be created to minimize the potential for co-channel interference.
Select the AP controller (DP5). The AP vendor and model selected will often dictate what control options are available. Some vendors may offer multiple methods of control (e.g., standalone, local controller, cloud controller), which can be selected based on the use case (FR1), the capacity (FR3), and the monitoring and management needs (FR4).
Establish the wireless infrastructure (DP6). The locations of necessary MDF and IDF closets, horizontal cable runs to APs (DP2), any wireless point-to-point/point-to-multipoint wireless connections, and other infrastructure needs get established here. Some AP vendors may also allow for control of switches and routers, which may dictate specific hardware choices to be compatible with the monitoring and management requirements (FR4).
By following these best practices, in sequence, the design can now be decoupled, as shown below:
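One way to visualize this decoupling is as a design matrix, with the DPs ordered in the sequence above. The following Python sketch encodes our illustrative reading of which DP influences which FR (not a definitive mapping from the source) and checks that the matrix is lower-triangular, which is the signature of a decoupled design. DP4 (transmit power) was fixed in step one, so it no longer appears as a free variable:

```python
frs = ["FR1 use case", "FR2 coverage", "FR3 capacity", "FR4 management", "FR5 backhaul"]
dps = ["DP1 AP model", "DP2 locations", "DP3 channels", "DP5 mgmt system", "DP6 wired infra"]

# couples[i][j] = True if DP j influences FR i (illustrative reading of the text)
couples = [
    [True,  False, False, False, False],  # FR1: AP model sets client/application capabilities
    [True,  True,  False, False, False],  # FR2: model and placement set coverage
    [True,  True,  True,  False, False],  # FR3: capacity adds the channel plan
    [True,  False, False, True,  False],  # FR4: AP vendor constrains the management system
    [False, True,  False, False, True ],  # FR5: AP locations drive the cabling and wired infra
]

def is_decoupled(matrix):
    # Lower-triangular: no FR depends on a DP later in the chosen sequence.
    return all(not matrix[i][j] for i in range(len(matrix))
               for j in range(len(matrix[i])) if j > i)

for fr, row in zip(frs, couples):
    print(f"{fr:<15} " + " ".join("X" if c else "." for c in row))
print("decoupled:", is_decoupled(couples))  # True for this ordering
```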
This same process can be used to evaluate alternatives in a systematic way. For example, in the case of selecting AP locations (DP2), placing the APs in the hallways will create much self-interference between neighboring APs, which will degrade the ability to satisfy the use case (FR1). This clearly creates a coupling term between the AP locations (DP2) and the use case (FR1), which we want to avoid. Putting the APs in the rooms minimizes or eliminates the co-channel interference, and thus removes (or at least minimizes) that coupling term. Similarly, selecting channels poorly, or allowing an AP with an insufficient algorithm to do it for you (DP3), will potentially create self-interference and thus impose a coupling term between channel setting (DP3) and the use case (FR1).
Thus, when evaluating design alternatives, the CWISA needs to evaluate all the requirements in sequence and understand what the potential impact of that design choice is on the other requirements. The larger the impact on the other requirements, the more coupled and thus more fragile the design.
When designing a wireless system, it is essential to fully understand your requirements and constraints before diving into the generation of a system design. This means understanding the use case and engaging with all stakeholders to collect their needs. Once the needs are collected, they need to be sorted into requirements (i.e., what the system needs to do) and constraints (i.e., what the system needs to work around). After the requirements and constraints are quantified, each design parameter can be selected and evaluated for its ability to satisfy its own intended requirement and the constraints while minimizing its impact on the other requirements. Going through this process systematically will provide a design that is more robust to scope creep, as it will more readily accommodate changes to requirements and constraints as the project progresses.
In today's connected world, there is a multitude of wireless technologies that connect us and our systems. Think about all the "wireless" systems that you use daily: Wi-Fi, Bluetooth, cellular networks, GPS, perhaps AM radio if you want to listen to some sports updates or political commentary. Also, let's not forget about television, whether it's being delivered via satellite or terrestrial (ground-based) antennas. You might even have a set of handheld radios that you use for work or recreation.
All these systems serve wildly different purposes, including pinpointing your location on the globe, browsing online auctions, or just simple voice communication with a friend or coworker who is a couple of kilometers away. So, we've only named a handful of systems with which everyday people interact. What about all the radio systems that are used by government, military, emergency responders, corporations, the aerospace industry, and the other countless verticals that use wireless technologies? The number of radio systems in use today is staggering - the list provided above puts but the tiniest, most inconspicuous scratch on what humans do with radio technology daily.
Hopefully, we've made our point: there's a lot of wireless stuff out there, and although they're all completely different types of systems, they all have something in common: they all utilize radio waves to transmit information. In their most basic form, all the systems that we mentioned above, including Wi-Fi, Bluetooth, GPS, AM and FM radio, broadcast and satellite television, and even handheld radios, use the same basic concepts to wirelessly move information. As a result, all the most fundamental concepts of how they work are the same. Don't misunderstand us: each technology has a tremendous amount of complexity in the layers above the physical aspects of radio waves, but they are the same in their most basic form.
All radio systems are subjected to the same physical limitations and share behaviors when interacting with objects in the physical realm. That said, there are many variables such as frequency, which wildly changes the behavior of radio waves on a sliding scale. If this all sounds complicated, it is, but don't worry. The purpose of this chapter is to lay the groundwork of how radio waves work, define important terminology around radio communications, and cover some of the different methods used to move information via radio.
Later in this chapter, we'll cover some of the modulation schemes that wireless technologies use to convey information, such as FM, Amplitude Shift Keying, and frequency hopping. To fully understand how data is modulated, it's important to understand what a radio frequency (RF) wave is, and what its characteristics and variables are. You don't need to know everything about the physics behind electromagnetic waves, but this guide will serve as a starting point.
Waves
The first thing we must define is a wave. A wave, in the realm of physics, can be defined as a motion traveling through matter or space. Note that the wave is not necessarily a movement of matter, but it is a motion—such as oscillation—traveling through matter or space (non-matter). To visualize this, think of the waves in the ocean bobbing up and down. Now imagine a beach ball placed on top of the waves: as the waves pass by, the ball moves up and down (vertically), but the ball won't move with the waves (horizontally). If you investigate even more closely, you'll notice that the water doesn't travel with the waves, either. Instead, the waves are passing through the water.
Similarly, an electromagnetic wave is an oscillation traveling through space. A specific range of electromagnetic waves (defined by wavelength and frequency, which we'll get to shortly) is used for radio communications; these are known as radio frequency waves, with radio frequency often being shortened to RF. RF systems rely on the phenomenon of electromagnetic waves to wirelessly transmit information.
Waves of all types are often represented using a sine wave, which represents the complete cycle of a radio wave: starting at zero, rising to a positive peak, returning to zero, falling to a negative peak, and then returning to zero again, where the whole cycle repeats, as seen in Figure 4.1.
Frequency
Frequency refers to the number of wave cycles that occur in a given window of time.
Measured in one-second intervals, a frequency of 1 kilohertz (kHz) would represent 1,000 cycles of the wave in one second. To remember this, keep in mind that a wave cycles frequently, and how frequently it cycles determines its frequency.
Table 4.1 shows the relationship between hertz, kilohertz, megahertz, and gigahertz.
Unit | Description |
---|---|
1 Hertz (Hz) | 1 cycle per second |
1 Kilohertz (kHz) | 1,000 cycles per second |
1 Megahertz (MHz) | 1,000,000 cycles per second (one million) |
1 Gigahertz (GHz) | 1,000,000,000 cycles per second (one billion) |
Because all electromagnetic waves, including radio waves, move at the speed of light, the frequency is related to the wavelength: higher frequencies have shorter wavelengths, and lower frequencies have longer wavelengths. For example, an AM radio station at 670 kHz has a lower frequency than an AM station at 1400 kHz, and therefore a longer wavelength. Figure 4.2 shows two sine waves at differing frequencies.
The concept of frequency exists not only in RF engineering, but sound engineering as well. Figure 4.3 shows a piano keyboard and the sound frequencies to which the keys are tuned. While radio waves and sound waves are not the same phenomena, they do share characteristics such as amplitude, frequency, and wavelength (more on these terms shortly). Because of the similarities between sound waves and radio waves, sound waves make a great starting point for understanding radio waves. Looking at the piano keyboard again in Figure 4.3, you can see that the keys to the left produce sound waves at a frequency as low as 27 Hz. A "middle C" key plays a frequency of 261 Hz, and the key to the right produces a frequency of 3516 Hz — a much higher frequency than the 27 Hz we started at! Remember that these aren't on the electromagnetic spectrum, instead being vibrating air in the audio spectrum, but the audio spectrum still creates sine waves in varying frequencies, so the piano works well for illustrating how radio waves can be transmitted and received at differing frequencies.
Wavelength
The wavelength of a radio frequency (RF) wave is calculated as the distance between two adjacent identical points on the wave. Figure 4.4 shows a standard sine wave. Note that Point A and Point B mark two identical points on the wave; the distance between them is defined as the wavelength. Notice that you can mark any two identical, recurring points on the wave, but the wavelength is frequently measured as the distance from one crest of the wave to the next. Wavelength is a very important factor in wireless communications, as it dictates optimum antenna lengths for specific frequencies and determines how the RF wave will interact with the environment that it is in. For example, an RF wave is more likely to reflect when it strikes an object that is larger than the wavelength, and it will be more likely to scatter if the object is smaller than the wavelength. We will discuss reflections and scattering more later in this chapter. The wavelength at any given frequency is related to the speed of light. If you know the frequency, you can calculate the wavelength. Inversely, if you know the wavelength, you can calculate the frequency, since the speed of the wave is constant, being roughly the speed of light.
When the frequency is known, you can calculate the wavelength in meters, where λ (lambda) is the wavelength, and f is the frequency in hertz:
λ = 299,792,458 / f
Therefore, 2.45 GHz (converted into Hz) would have a wavelength that is calculated with the following formula:
λ = 299,792,458 / 2,450,000,000 = 0.123
The result is 0.123 meters, which is approximately 12.3 centimeters, or 4.8 inches. So we know that a 2.45 GHz radio wave has a wavelength of 4.8 inches.
Alternatively, if you know the wavelength, you can calculate the frequency by rearranging the same equation. Here are the formal equations for reference:
λ = c / f and f = c / λ
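For readers who prefer to experiment, here is a small Python sketch of these two conversions; the helper names are our own:

```python
C = 299_792_458  # speed of light in metres per second

def wavelength_m(freq_hz):
    """lambda = c / f"""
    return C / freq_hz

def frequency_hz(wavelength_m_value):
    """f = c / lambda"""
    return C / wavelength_m_value

print(wavelength_m(2.45e9))        # ~0.122 m (about 12.2 cm) for 2.45 GHz
print(frequency_hz(0.123) / 1e9)   # ~2.44 GHz back from a 0.123 m wavelength
```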
Amplitude
To describe amplitude, let's continue to lean on examples in the realm of sound. You've probably interacted with a sound amplifier of some kind, whether an amplifier in a car audio system, an amplifier for music equipment (like a guitar amp), or even an amplifier for a vinyl record player. Put simply, an audio amplifier takes an audio signal and makes it louder. In other words, the audio signal is amplified. You could also say that the loudness or amplitude of the audio signal is increased.
With both audio and radio frequency, amplitude defines how loud a sound is, or how strong a radio frequency signal is. A signal with low amplitude is weak, and a signal with high amplitude is strong. On a sine wave, amplitude is represented with the height of the wave (and this is true for both sound and radio frequency). Figure 4.5 shows two sine waves, one with low amplitude, and one with high amplitude.
Coming back to the sound analogy: higher amplitude (louder) sounds can be heard from much further away than lower amplitude (quieter) sounds. When trying to eavesdrop on a conversation that is far away, you'll notice that it can be very difficult to differentiate between the conversation (signal) and the background noise. The same is true for radio frequency communications: a radio receiver will have an easier time understanding high-amplitude signals than a low-amplitude or quieter signal. At some point, the signal will become lost in the noise. We'll discuss this concept later in this chapter when we discuss RF Noise and Noise Floors.
Phase
Unlike frequency, wavelength, and amplitude, the phase is not a characteristic of a single RF wave but is instead a comparison between two RF waves. Think back to our first discussions about sine waves, where we noted that a single wavelength could start at zero, transition to full positive, go back to zero, transition to full negative, and go back to zero again. This full cycle of a wave can be mapped to degrees on a circle, such as 0°, 90°, 180°, 270°, and 360° as the cycle completes. Think of the circle as representing a sine wave that is standing still, and not propagating through space. You can see both the circular and wave representations of this in Figure 4.6.
Now that you understand the different phases of a wavelength, let's discuss how two sine waves can be compared. If two radio waves arrive at a receiver perfectly aligned with each other, then they're said to be in-phase, since their phases match. In Figure 4.7, you can see two sine waves that are in-phase with each other. When two radio waves arrive in-phase at the receiver, they will have the effect of combining and increasing the received amplitude of the signal.
Next, you can see another radio wave that is shifted one-quarter of a cycle from the initial wave. This wave is 90 degrees out-of-phase with the initial wave. Finally, you can see the sine wave that is 180 degrees out-of-phase with the original. This has an especially destructive effect on the incoming signal: when two sine waves are received 180 degrees out-of-phase, they cancel each other out, the receiver cannot discern a sine pattern, and no signal can be derived from the carrier wave.
Traditionally, out-of-phase signals were an especially destructive phenomenon for wireless communications. Two signals arriving 180 degrees out-of-phase were especially harmful, since the signal would effectively cancel itself out. However, relatively recent developments in wireless technology now harness phase to increase the amount of data transmission that can be performed. We'll discuss this later in this chapter, specifically in the section about Phase Shift Keying.
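The combining effect of phase can be quantified. For two equal-amplitude sine waves, the peak of their sum is 2·cos(phase/2) times one wave's peak; the Python sketch below (our illustration, using that standard trigonometric identity) shows the in-phase, 90-degree, and 180-degree cases discussed above:

```python
import math

def combined_amplitude(phase_deg):
    # Peak amplitude of sin(x) + sin(x + phase), relative to one wave's peak of 1.0
    return abs(2 * math.cos(math.radians(phase_deg) / 2))

for phase in (0, 90, 180):
    print(phase, "degrees ->", round(combined_amplitude(phase), 2), "x amplitude")
# 0 degrees -> 2.0 x (in-phase: the waves reinforce)
# 90 degrees -> 1.41 x (partial reinforcement)
# 180 degrees -> 0.0 x (complete cancellation)
```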
As RF waves propagate through space, they will encounter a variety of different solids, gases, liquids, and also space with none of the above. All these physical objects (or in the case of space, non-objects) will affect RF waves in different ways, in the same way that light would react. In fact, much as we leaned on examples from the world of sound, we can now rely heavily on behaviors that we see in light to understand how RF is affected by the environment that it is propagating in.
Amplification
Before we look at how physical objects can interact with and affect RF, it's important for us to have a quick discussion about amplification. The analogies about visible spectrum (light) will have to wait a moment, as we need to compare the similarities between audio and RF once again as we cover Amplification.
Amplification is an increase in the amplitude of an RF signal. You probably remember when we discussed audio amplifiers for use in car stereo systems or guitars. These amplifiers do precisely what you'd think they do: they take a low-amplitude incoming signal and, with an external power source and the appropriate electronics, export a higher-amplitude outgoing signal. Audio amplifiers and radio amplifiers are identical in this regard, but remember, any noise in the audio stream will also be amplified. The same is true for radio amplifiers.
Amplification, where an external power source is involved, is known as Active Gain. There is another type of gain called Passive Gain, which increases amplitude with no external power. Passive gain is usually accomplished with an antenna that provides more focus. An omnidirectional antenna is a type of antenna that radiates RF energy evenly in all directions (with some limitations due to the structure of the antenna). In this scenario, the energy is very "spread out."
A directional antenna can provide passive gain by radiating energy in a more specific direction, and thus not radiating RF energy in all directions. Think of the radiation pattern of an antenna as a ball of clay. With an omnidirectional antenna, the ball of clay is perfectly round, representing an antenna that radiates omnidirectionally.
If the ball of clay was to be flattened out as much as possible, it now represents a higher-gain antenna - one that radiates RF energy out horizontally, but less vertically, providing passive gain. The same amount of clay has been used to create the shape, but the shape of the clay is very different.
Attenuation
Attenuation is what occurs when an RF signal's amplitude is reduced. Attenuation usually occurs after the RF signal has been transmitted and is passing through objects that it encounters as it propagates. Essentially, attenuation is the technical term for blocking or reducing signal strength. A common attenuator for radio signals is the wall of a building. For high-frequency signals, like those in the 5 GHz and 6 GHz frequency bands, attenuation happens very quickly as the signal passes through walls, refrigerators, or shelves full of books. Figure 4.8 shows an example of an RF wave experiencing attenuation as it passes through an object (such as a wall). Lower-frequency signals experience less attenuation than high-frequency signals. For example, have you ever noticed the impressive range of AM radio? It's not uncommon to listen in on a station that is up to 160 kilometers (roughly 100 miles) away. Part of this is due to the lower frequency, and thus the longer wavelengths, of AM radio. AM radio experiences far less attenuation from buildings, terrain, and atmosphere than Wi-Fi does, which explains why you can listen to AM radio in the desert in the middle of nowhere, but Wi-Fi barely works in your backyard.
Free Space Path Loss
Free space path loss (FSPL), sometimes simply called free-space loss (FSL) or just path loss, is a weakening of the RF signal due to a broadening of the wavefront. The broadening of the wavefront is known as signal dispersion. Consider the concentric circles in Figure 4.9 as representing an RF signal propagating out from an omnidirectional antenna (which we discussed briefly earlier in this chapter). Notice how the wavefront becomes larger as the wave moves out from the antenna. The broadening of the wavefront causes a loss in amplitude of the signal at any specific point in space because the energy is spread over a larger area. Therefore, the signal is weaker at point B than it is at point A.
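This spreading loss can be quantified with the standard free-space path loss formula, FSPL(dB) = 20·log10(4πdf / c), which is not derived in this text but follows directly from the spherical spreading described above. A quick Python sketch:

```python
import math

C = 299_792_458  # speed of light, m/s

def fspl_db(distance_m, freq_hz):
    # Free-space path loss in dB for a distance in metres and a frequency in Hz
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / C)

print(round(fspl_db(10, 2.45e9), 1))  # ~60.2 dB at 10 m, 2.45 GHz
print(round(fspl_db(20, 2.45e9), 1))  # ~66.3 dB at 20 m - doubling distance adds ~6 dB
```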
Absorption
Microwave ovens use the 2.4 GHz frequency range to heat food. While Wi-Fi devices (which work in the same frequency band) have output levels of around 30 milliwatts (mW), a microwave oven usually has an output power between 700 and 1400 watts (W). What does this have to do with wireless engineering? The microwave oven works because RF waves are absorbed by materials that have moisture (molecular electric dipoles) in them. The absorption converts the RF wave energy into heat energy and therefore heats your food. As a result, if you've ever used a microwave oven, you've experienced absorption first-hand! Another place where you can experience absorption is in your closet, especially if it is a walk-in closet. Clothes on hangers in closets do a fantastic job of absorbing sound. Next time you go into a closet, note how "dead" the closet sounds. Closets are a great place to record voice-over work, or even record your first hit album! Back to RF: Liquids are especially absorptive, so expect water tanks, terrain, or even large groups of people to absorb radio frequency signals significantly. Fortunately, most RF systems that we use and interact with don't concentrate power as a microwave oven does, so there's no danger of being heated up a measurable amount by RF absorption. What does measurably warm a person up is going outside for some sunshine or putting a couple more pieces of wood on the fire.
Reflection
When an RF signal bounces off a smooth, non-absorptive surface, the signal sharply changes direction in a process known as reflection. Reflection is probably the easiest RF behavior to understand because we see it frequently in our everyday lives. You can shine a light on a mirror at an angle and see that it reflects off that mirror in relation to the angle. When you look in the mirror, you are experiencing the concept of visible spectrum reflection, which is essentially the same as RF reflection.
Figure 4.11 illustrates this concept. As you can see, the light waves, which are electromagnetic waves and so behave much like RF signals, first reflect off the object and travel toward the mirror. Next, the light waves reflect off the mirror and travel toward your eye. Finally, your eye acts as a focusing device and brings the light waves together at the back of the eye, giving you the sense of sight. The critical thing to note is that what you are "seeing" is the light reflected off the object onto the mirror, and off the mirror into your eyes. The ability to see objects all around us is driven by the reflective properties of materials and the light waves striking them.
RF signals also reflect off objects that are smooth and larger than the waves that carry the signals. Earlier, it was noted that the wavelength impacts the behavior of the RF wave as it propagates through space. Reflection is an example of the relationship between the wavelength and the space through which the wave travels. If space were truly empty, there would be no reflection; but since all the space we operate in (Earth and its atmosphere, at least, for now) contains physical matter, some absorption, reflection, refraction, and scattering are to be expected.
Typically, the object that causes reflection will be smooth and larger than the wavelength. For example, waves that interact with Wi-Fi radios in the 2.4 GHz band are about 13 centimeters in wavelength. As such, it follows that smooth objects greater than 13 centimeters in size will have a propensity to cause reflections for 2.4 GHz Wi-Fi, or any other RF activity occurring in the 2.4 GHz frequency band.
Refraction
Refraction occurs when an RF signal changes speed and is bent while moving through media of different densities. Different mediums, such as drywall, wood, or plastic, will have different refraction indices. The refraction index helps in determining how much refraction will occur.
Let's go back to the light analogy for a moment. If you wear glasses, you are wearing a refraction device. The lens refracts or bends the light, to make up for the imperfect lens in your eye. The glasses help you see clearly again because the lack of focus in the eye is corrected by the refraction caused by the lens in the glasses.
Figure 4.13 shows an RF signal being refracted. As you can see, when refraction occurs with RF signals, some of the signal is reflected, and some is refracted as it passes through the medium. Of course, as with all mediums, some of the signal will be absorbed as well.
Usually, significant refractions don't occur in indoor-only wireless systems. Instead, they're more common in outdoor systems, especially site-to-site links using 5 GHz, 6 GHz, and higher-frequency bands like 24 GHz. Site-to-site or long-distance wireless links typically use directional antennas with a narrow beam of focus, and as that narrow beam of RF passes through different atmospheric conditions, such as changes in air pressure or varying amounts of water vapor in the air, it can be bent away from its intended path.
The issue here is simple: if the RF signal changes from the intended direction as it's traveling from the transmitter to the receiver, the receiver may not be able to detect and process the signal. The result can be a broken connection or an increase in error rates if the refraction is temporary or sporadic due to fluctuations in the weather around the area of the link.
An excellent experiment can be easily performed that demonstrates the concept of refraction. Take a large clear bowl filled with water. Now, place a spoon (or another piece of flatware) into the water at an angle and look through the transparent side of the bowl at the spoon. What did the spoon do? Well, nothing other than entering the water, but what did it appear to do? It appears to bend. This illusion is because the light waves are traveling slower in the water medium, and this causes refraction of the light waves. It's not the spoon that's bending—it's the light that's bending because it's the light that you see.
Diffraction
Diffraction is defined as a change in the direction or intensity of a wave as it passes by the edge of an obstacle. As seen in Figure 4.14, this can cause the signal's direction to change, and it can also result in areas of RF shadow. Instead of bending as it passes into or out of a medium, like refraction, diffraction describes what happens as the wave travels around an obstacle. Diffraction occurs because the RF signal slows down as it encounters the obstacle, and this causes the wavefront to change direction.
Consider the analogy of a rock dropped into a pool and the ripples it creates. Think of the ripples as analogous to RF signals. Now, imagine there is a stick being held upright in the water. When the ripples encounter the stick, they will bend around it, since they cannot pass through it. A larger stick has a more significant visible impact on the ripples, and a smaller stick has a lesser impact. Diffraction is often caused by buildings, small hills, and other larger objects in the path of the propagating RF signal.
The RF shadow caused by diffraction can result in areas without proper RF coverage. If you are in an RF shadow area, you will not be able to receive communications from the wireless network. An example of this phenomenon indoors is an elevator shaft. Often, when the access point is on one side of the elevator, and the client is on the opposite side, the signal will be insufficient for communications in that location.
Many times, RF shadow problems can be resolved with very slight adjustments in the location of the antennas used on the access point or wireless router, or by installing additional access points. For example, if you install access points in the areas on both sides of the elevator shaft, one access point can serve one side, and the other can serve the remaining side.
Scattering
Scattering happens when an RF signal strikes an uneven surface (a surface with inhomogeneities — there's a word you can use around your family to sound smart), causing the signal to be scattered into multiple resulting signals, each less significant than the original, rather than being absorbed or cleanly reflected. Another way to define scattering is to say that it is simply multiple reflections. Figure 4.15 illustrates this.
Scattering can happen in a minor, almost undetectable way when an RF signal passes through a medium that contains small particles. These small particles cause scattering. Smog is an example of such a medium. The more frequent and more impacting occurrence is that caused when RF signals encounter things like rocky terrain, leafy trees, or chain link fencing. Rain and dust can cause scattering as well.
One of the most important aspects of working with wireless systems is measuring RF signal strength, whether measuring a change in power or power at absolute levels. If you begin to investigate signal strength measurements, you might find them to be confusing. In this section, we'll investigate absolute RF signal measurements such as the watt, milliwatt, and microwatt, as well as ways of measuring changes in power, such as decibels.
Watt
The watt (W) is a basic unit of power equal to one joule per second. It is named after James Watt, an 18th-century Scottish inventor who improved the steam engine, among other endeavors. One watt is equal to one ampere of current flowing at one volt.
Think of a water hose with a spray nozzle attached. You can adjust the spray nozzle to allow for different rates of flow. The flow rate is comparable to amperes in an electrical system. The water hose also has a certain level of water pressure — regardless of the amount that is flowing through the nozzle. The pressure is like the voltage in an electrical system. If you apply more pressure or you allow more flow with the same pressure, either way, you will end up with more water flowing out of the nozzle. In the same way, increased voltage or increased amperes will result in increased wattage since the watt is the combination of amperes and volts.
In wireless systems, outdoor links often use power levels measured in watts at the transmitter. In indoor wireless systems, the watt is too powerful, so many indoor systems and consumer electronics transmit in milliwatts of power instead of watts of power.
Milliwatts and Microwatts
Most wireless systems do not need a tremendous amount of power to transmit a signal over an acceptable distance. For example, you can see a 7-watt light bulb from more than 83 kilometers (50 miles) away on a clear night with line of sight. Remember, visible light is another portion of the same electromagnetic spectrum, so this should give you an idea of just how far away an electromagnetic signal can be detected.
For this reason, many systems "step down" from the watt and use a measurement of power that is 1/1000th of a watt, known as a milliwatt. 1 watt (W), then, would be 1,000 milliwatts (mW).
A good example of common devices that work in the milliwatt range is Bluetooth devices. Bluetooth devices can implement different classes of transmit power, depending on their intended use, as shown in Table 4.2.
Class | Max Transmit Power | Expected Range | Used For |
---|---|---|---|
Class 1 | 100 mW | 100 meters (328 feet) | Industrial devices |
Class 2 | 2.5 mW | 10 meters (33 feet) | Most headphones and headsets |
Class 3 | 1 mW | Fewer than 10 meters | Very low-power devices |
While a milliwatt represents one-thousandth of a watt, a microwatt (µW) is one-millionth of a watt. It represents an incredibly small amount of RF power.
Decibels
Now that we've established watts, milliwatts, and microwatts as units of RF power, why would we want to use any other units to measure signal strength and RF power? Let's take a look at a few received signal strengths that you might observe in the realm of Wi-Fi and indoor IoT networks in the 2.4, 5, and 6 GHz frequency bands:
0.0001 mW
0.000001 mW
0.000000001 mW
All the values above represent completely normal signal strengths that your wireless devices work with every day. However, as an engineer, keeping track of so many decimal places is very difficult. Imagine asking someone over the phone, "What is your signal strength?" and hearing, "It's point zero, zero, zero, zero, zero, zero, zero, zero, one milliwatts of signal." That's very difficult for people to work with, isn't it?
This is where decibels come in. Now, before we explain what decibels are and how they work, let's look at the exact same signal strength values again:
-40 dBm
-60 dBm
-90 dBm
Even without knowing how decibels work, those signal strength differences should be much easier to read and relay to other people. Milliwatts are very precise, but decibels are great for representing big changes in signal strength while being relatively easy to read. Let's dive into decibels and learn how they work.
First, it's good for you to know that a decibel is 1/10th of a bel, a unit developed by Bell Laboratories to calculate losses in telephone communication power as ratios. For our discussion, we'll focus on the decibel, since the bel is too coarse a unit for the very low received signal strengths that wireless systems work with today.
While milliwatts are an absolute measurement of power, a decibel is a relative measurement that shows changes in signal strength. While milliwatts increase and decrease linearly, decibels increase and decrease logarithmically. In other words, small numbers can mean very big jumps. For example, a 3 decibel (dB) jump in power means double the signal strength. Going up another 3 dB means the signal strength has doubled again. Moreover, an increase of another 3 dB means we've doubled the signal strength yet again. In only 9 dB of increase, or gain, we've doubled our signal strength three times! This is the power of a logarithmic scale at work; you can represent big changes with small numbers.
The example above leverages the rule of 3's and 10's as a simple way to understand signal strength changes without having to resort to complex, logarithmic math. As you read the rules, keep in mind that gain is an increase in power, and loss is a decrease in power. Here are the basic rules:
1. 3 dB of gain doubles the power level.
2. 3 dB of loss halves the power level.
3. 10 dB of gain increases the power level tenfold.
4. 10 dB of loss reduces the power level to one-tenth.
5. Gains and losses are cumulative.
Now, let's evaluate what these rules mean, and the impact they have on your RF math calculations.
First, 3 dB of gain doubles the output power. This means that:
100 mW + 3 dB of gain equals 200 mW of power
30 mW + 3 dB of gain equals 60 mW of power
The power level is always doubled for each 3 dB of gain that is added.
Rule five stated that these gains and losses are cumulative. This means that 6 dB of gain is the same as 3 dB of gain applied twice.
Therefore:
100 mW of power + 6 dB of gain equals 400 mW of power
The following example illustrates this based on 9 dB of gain (i.e., 3 dB added three times).
Note that both formulas are saying the exact same thing.
40 mW + 3 dB + 3 dB + 3 dB = 320 mW
40 mW * 2 * 2 * 2 = 320 mW
Now consider the impact of 3 dB of loss; 3 dB of loss halves the output power. Look at the impact on the following formula with 6 dB of gain and 3 dB of loss:
40 mW + 3 dB + 3 dB - 3 dB = 80 mW
40 mW * 2 * 2 / 2 = 80 mW
Let's look at one last example to illustrate the rule of 10's. Remember that 10 dB of gain means 10x more power, so we need to multiply our power by 10. 20 dB of gain means we'd multiply it by 10, and then multiply it by 10 again:
40 mW + 10 dB + 10 dB = 4000 mW (4 W)
40 mW * 10 * 10 = 4000 mW (4 W)
It is also important to know that the 10s and 3s can be used together to calculate the power levels after any integer gain or loss of dB. This is done with creative combinations of 10s and 3s. For example, imagine you want to know what the power level would be of a 12 mW signal with 16 dB of gain. Here is the math:
12 mW + 16 dB = 480 mW
But how was this calculated? The answer is very simple: add 10 dB and then add 3 dB twice. Here it is in longhand:
12 mW + 16 dB = 480 mW
12 mW + 10 dB + 3 dB + 3 dB = 480 mW
12 mW * 10 * 2 * 2 = 480 mW
Sometimes you are dealing with both gains and losses of unusual amounts. While the following numbers are completely fabricated, consider the apparent difficulty they present in calculating a final RF signal power level:
30 mW + 7 dB - 5 dB + 12 dB - 6 dB = power level
At first glance, this sequence of numbers may seem impossible to calculate with the rules of 10s and 3s; however, remember that the dB gains and losses are cumulative, and this includes both the positive gains and the negative losses. Let's take the first two gains and losses: 7 dB of gain and 5 dB of loss. You could write the first part of the previous formula like this:
30 mW + 7 dB + (-5 dB) = 30 mW + 2 dB
Why is this? Because (+7) + (-5) = (+2). Carrying this out for the rest of our formula, we could say the following:
30 mW + 7 dB + (-5 dB) + 12 dB + (-6 dB) = 30 mW + 2 dB + 6 dB
or
30 mW + 8 dB = power level
The only question that is left is this: How do we calculate a gain of 8 dB? Remember, the rules of 10s and 3s. We have to find a combination of positive and negative 10s and 3s that add up to 8 dB. Here's a possibility:
+10 + 10 - 3 - 3 - 3 - 3 = 8
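If you'd like to check your mental math, the following Python sketch (our own helper, not a standard tool) applies a sequence of ±3 and ±10 dB steps exactly as the rules describe. Note that the rules are close approximations: 3 dB is actually a factor of about 1.995 rather than exactly 2.

```python
def apply_db_steps(power_mw, *db_steps):
    """Apply gains/losses expressed as +/-3 and +/-10 dB steps."""
    for step in db_steps:
        if step == 10:
            power_mw *= 10
        elif step == -10:
            power_mw /= 10
        elif step == 3:
            power_mw *= 2
        elif step == -3:
            power_mw /= 2
        else:
            raise ValueError("steps must be +/-3 or +/-10 dB")
    return power_mw

# 30 mW + 8 dB, decomposed as +10 +10 -3 -3 -3 -3 from the example above:
print(apply_db_steps(30, 10, 10, -3, -3, -3, -3))  # 187.5 mW (exact: 30 * 10**0.8 = ~189.3 mW)
```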
Decibels to Milliwatts (dBm)
So far, we've only discussed dB, and how it is used to show changes in power. However, earlier, you may have noticed that we converted milliwatts to decibels in relation to a milliwatt (dBm) to show absolute power. We did this to illustrate that while dB looks complicated, it's very helpful for simplifying measurements of power. Let's take a closer look at decibel-milliwatts (dBm).
dBm is an absolute measurement of power where the "m" stands for milliwatts. Effectively, dBm expresses decibels relative to 1 milliwatt, meaning that 0 dBm equals 1 milliwatt. Once you establish that 0 dBm equals 1 milliwatt, you can reference any power strength in dBm. Depending on the transmit power of the wireless technology that you are working with, you may find yourself using positive numbers such as 2 dBm or 5 dBm, or for low-power, indoor systems, you will see numbers dipping well into the negatives, such as -10 dBm, -50 dBm, or even -90 dBm.
Because a wireless receiver can detect and process very weak signals, it is easier to refer to the received signal strength in dBm rather than in mW. For example, a signal that is transmitted at 4 W of output power (4000 mW or 36 dBm) and experiences 63 dB of loss has a signal strength of 0.002 mW (-27 dBm). Rather than say that the signal strength is 0.002 mW, we say that the signal strength is -27 dBm.
The formula to get dBm from milliwatts is:
dBm = 10 * log10(Power_mW)
For example, if the known milliwatt power is 30 mW, the following formula would be accurate:
dBm = 10 * log10(30) = 14.77 dBm
The result of this formula would often be rounded to 15 dBm for simplicity; however, you must be very cautious about rounding if you are calculating specific transmit powers that need a high level of accuracy. Table 4.3 provides a list of common milliwatt power levels and their dBm values.
One of the benefits of working with dBm values instead of milliwatts is the ability to easily add and subtract simple decibels instead of multiplying and dividing often huge or tiny numbers. For example, consider that 14.77 dBm is 30 mW as you can see in Table 4.3. Now, assume that you have a transmitter that transmits at that 14.77 dBm, and you are passing its signal through an amplifier that adds 6 dB of gain. You can quickly calculate that the 14.77 dBm of original output power becomes 20.77 dBm of power after passing through the amplifier. Now, remember that 14.77 dBm was 30 mW. With the 10s and 3s of RF math, which you learned about earlier, you can calculate that 30 mW plus 6 dB is equal to 120 mW. The interesting thing to note is that 20.77 dBm is equal to 119.4 mW. As you can see, the numbers are very close. While we've been using a lot of more exact figures in this section, you'll find that rounded values are often used in vendor literature and documentation.
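If you want to check conversions like these yourself, the following Python sketch (our own helper functions, shown for illustration) implements the dBm formula above and its inverse:

```python
import math

def mw_to_dbm(power_mw):
    # dBm = 10 * log10(power in mW)
    return 10 * math.log10(power_mw)

def dbm_to_mw(power_dbm):
    # Invert the formula: mW = 10^(dBm / 10)
    return 10 ** (power_dbm / 10)

print(round(mw_to_dbm(30), 2))     # 14.77 dBm
print(round(dbm_to_mw(20.77), 1))  # 119.4 mW (14.77 dBm plus 6 dB of gain)
```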
RF Noise and Noise Floors
Let's once again think back to our examples of how RF and audio spectrum relate to each other. When you are trying to have a conversation with someone, any other sounds that you hear interfere with your ability to understand what the other person is saying. It could be loud music, a noisy car driving by, or even just other people talking. Any sound that you are not able to distinguish individually is noise. If the background noise consistently overpowers the person that you're trying to talk to, eventually you just give up, nod, and smile, because you just can't understand them.
This is exactly what noise is to a radio receiver. Whether the noise is just natural background noise (like wind blowing through the trees, or birds chirping), or another device nearby talking on the same frequency (like someone driving by with their stereo turned up very loud), noise is any signal other than the signal that the receiver is attempting to hear and decode.
Natural background noise in the environment is known as the noise floor. It exists in the realm of audio, too. Sit quietly sometime and listen, and you'll notice that even when it's quiet, there's always some distant noise in the background if you listen for it. The same is true for radio receivers.
mW | dBm (rounded) | dBm (rounded to two decimal places) |
---|---|---|
1 | 0 | 0 |
10 | 10 | 10 |
20 | 13 | 13.01 |
30 | 15 | 14.77 |
40 | 16 | 16.02 |
50 | 17 | 16.99 |
100 | 20 | 20 |
1000 | 30 | 30 |
4000 | 36 | 36.02 |
SNR and SINR
Background RF noise, which can be caused by all the various systems and natural phenomena that generate energy in the electromagnetic spectrum, is known as the noise floor. The power level of the RF signal relative to the power level of the noise floor is known as the signal-to-noise ratio or SNR. It is the difference between the signal strength and the noise floor, so don't let the term "ratio" confuse you. It is not typically referenced as a ratio in wireless communications, but as a dB value. Figure 4.16 illustrates the concept of SNR.
When working with radio technologies, SNR is a very important measurement. If the noise floor power levels are too close to the received signal strength, the signal may be corrupted, or it may not even be detected. It's almost as if the received signal strength is weaker than it actually is when there is more electromagnetic noise in the environment. You may have noticed that when you yell in a room full of people yelling, your volume doesn't seem so great; however, if you yell in a room full of people whispering, your volume seems to be magnified. In fact, your volume is not higher, but the noise floor is less than before. RF signals are impacted in a similar way.
Technically, in wireless signal reception, SNR is defined as the difference between the signal strength and the noise floor in dB. The formula for calculating SNR is simple:
SNR = signal strength value in dBm - noise floor value in dBm
For example, if the noise floor is rated at -95 dBm and the signal is detected at -70 dBm, the SNR is 25 dB.
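Because both values are expressed in dBm, the calculation is a simple subtraction, as this quick Python check illustrates:

```python
signal_dbm, noise_floor_dbm = -70, -95
print(signal_dbm - noise_floor_dbm)  # 25 (dB of SNR)
```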
In addition to the term SNR, the term SINR has become common. SINR is the signal-to-interference-plus-noise ratio. Like SNR, it is not expressed as a ratio, but as a value in dB. The difference is that SINR is more momentary in nature than SNR. SNR looks at the noise floor at a given point in time and assumes it doesn't change drastically, which is usually a good assumption. However, sporadic interferers may generate RF energy for small bursts of time; during those time windows, SINR is a better measurement of reality.
Even if the SNR would allow for the reception of an RF signal at a given data rate, the SINR may not, because of temporary interference from other devices transmitting on the same frequency.
By themselves, sine waves are pretty boring; they repeat in the same, predictable pattern as long as they pass by our receiver's antenna. They're nice and all... but how do they actually carry data? Earlier in this chapter, you learned the characteristics of an RF wave, such as the wavelength, frequency, amplitude, and phase. In this section, we'll take a close look at how we can manipulate those aspects of a wave to carry data on it.
Amplitude Shift Keying
Earlier in this chapter, you learned that amplitude specifies the height of a radio wave. A radio transmitter can vary the amplitude of a signal by changing the output power, whether that be up to increase the amplitude and make the sine wave "taller," or down to decrease the amplitude and make the wave "shorter." In both cases, the frequency and wavelength stay the same; only the amplitude changes.
These variations in amplitude can be used to carry a digital signal with Amplitude Shift Keying (ASK). In other words, a lower amplitude can be used to indicate a 0, and a higher amplitude to indicate a 1. You can see where the word "keying" comes from in Amplitude Shift Keying; the information is "keyed" by changing amplitude, as you can see in Figure 4.17. While Amplitude Shift Keying (ASK) might seem similar to Amplitude Modulation (AM), the latter is typically used for transmitting analog signals, such as AM radio. We'll discuss Amplitude Modulation (AM) later in this chapter.
Frequency Shift Keying
Another concept that was discussed earlier in this chapter was the frequency of an RF wave. So far, every wave we've shown has had a consistent frequency, but the reality of an RF wave is that the frequency can be changed on the fly. Because it can be changed, it can be used to modulate a digital signal in a process called Frequency Shift Keying (FSK). For example, a shift to a lower frequency (and thus a longer wavelength) might indicate a 0, and a shift to a higher frequency (shorter wavelength) could indicate a 1. The information is "keyed" to the frequency, as you can see in Figure 4.18.
Frequency Shift Keying (FSK) might seem similar to Frequency Modulation (FM), but FM is typically used for the transmission of analog signals, such as FM radio. We’ll discuss Frequency Modulation (FM) later in this chapter.
Phase Shift Keying
Phase Shift Keying (PSK) leverages changes in phase to convey data on an RF signal. For example, a transmitter that is modulating data with PSK will change the phase of the sine wave on-the-fly as a means for encoding information.
Figure 4.19 shows how changes in phase can be used to indicate a 0 or a 1. Binary Phase Shift Keying (BPSK) is the simplest form of Phase Shift Keying. Figure 4.20 shows a constellation diagram, which is very basic in appearance in this example but will increase in complexity as we investigate more complex forms of modulation. The dot on the left is the target for a phase of 0°, while the dot on the right is the target for a phase of 180°. While the dot is the ideal constellation point, in reality the phase can land anywhere on the left side and register a 0, or anywhere on the right side and register a 1. The allowed deviation from the ideal point is described by the error vector magnitude (EVM).
BPSK, since it is binary, is an elementary form of modulation, but due to the large EVM allowance, it is very forgiving when background noise and interference encroach on the signal.
But what if we need more speed? To get more speed, we need to transmit and receive more bits of data in the same amount of time. While Binary Phase Shift Keying (BPSK) could only convey a 0 or a 1 (hence the name "binary"), the next "level up" in modulation is Quadrature Phase-Shift Keying (QPSK). Figure 4.21 shows the constellation diagram for QPSK, which now has four target vectors instead of two. Each target, instead of representing a single bit, now represents two bits at a time, such as 00, 01, 11, and 10. These groups of bits are known as symbols, and they can be transmitted in the exact same amount of time and frequency space, essentially doubling the amount of data we can transmit and receive in each period.
This added speed comes at a cost: a smaller error vector magnitude (EVM). Now, hitting the targets with phases is twice as hard because the EVM boxes are smaller. But at this point, this modulation scheme is still very simple and resilient to noise and interference.
The specifics about QPSK and BPSK aren't important for the exam. What is important is understanding Phase Shift Keying (PSK). The purpose of explaining QPSK and BPSK is simply to set up for the next section, which is Quadrature Amplitude Modulation.
Quadrature Amplitude Modulation (QAM)
Now that you understand Phase Shift Keying (PSK), let's take a look at the next level of modulation: Amplitude and Phase Shift Keying (APSK). This is identical to PSK, except it adds the variable of amplitude to PSK. With APSK, symbols (that is, groups of bits) are no longer represented solely with changes in phase, but now with changes in both phase and amplitude.
Quadrature Amplitude Modulation (QAM) is a form of APSK, and it is a type of modulation you'll see in many wireless technologies such as Wi-Fi, digital broadcast television, satellite television, DSL in plain old telephone lines, and many others.
Let's first look at 16-QAM, which is a relatively simple version of QAM that sees extensive use in technologies that you use every day. Figure 4.22 shows a 16-QAM constellation. Note that it looks just like a Quadrature Phase Shift Keying (QPSK) constellation, except now there are more than four targets. Now, there are 16, and they cannot be hit by changing phase alone. This is where the Amplitude in Amplitude and Phase Shift Keying (APSK) comes into play; amplitude determines the distance from the center of the constellation.
As you can see, APSK, and thus 16-QAM, uses both phase and amplitude modulation to hit the targets but gives the benefit of four bits per symbol. For example, a symbol might contain 0000, 0001, 0011, 0111, etc. Just like before, this added complexity increases the number of bits we can transmit and receive in the same time span, but it also decreases the error vector magnitude (EVM) boxes, making it more susceptible to noise and interference.
64-QAM, 256-QAM, and 1024-QAM work exactly the same way, adding more symbols with smaller EVM boxes, increasing speed potential while decreasing reliability. While Figure 4.23 only shows up to 1024-QAM, some technologies are known to use 4096-QAM, which provides a whopping 12 bits per symbol (for example, 011001000111).
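The relationship between constellation size and bits per symbol is simply the base-2 logarithm, as this short Python sketch shows:

```python
import math

# Bits per symbol for an M-point constellation is log2(M).
for m in (2, 4, 16, 64, 256, 1024, 4096):  # BPSK, QPSK, 16-QAM ... 4096-QAM
    print(f"{m:>4} points -> {int(math.log2(m))} bits per symbol")
```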
Orthogonal Frequency Division Multiplexing (OFDM)
Orthogonal Frequency Division Multiplexing (OFDM) divides a certain amount of frequency space into subcarriers, which are small divisions in the operating channel that are far enough apart from each other to avoid self-interference. There are usually three types of subcarriers:
Data subcarriers: Subcarriers that carry the modulated data.
Pilot subcarriers: Subcarriers that carry known reference signals used for synchronization and channel measurement.
Null subcarriers: Subcarriers that carry no energy and serve as guard space within the channel.
All of the data subcarriers work together to simultaneously move data. However, each one can move a different piece of data to increase transmission speeds, or copies of the same data to ensure reliability, depending on the coding scheme in use. Each subcarrier is commonly modulated with either Phase Shift Keying (PSK) or Amplitude and Phase Shift Keying (APSK), with all subcarriers active at the same time using whatever modulation scheme is in place.
Think of subcarriers like strings on a guitar. When all of the strings are strummed on a guitar, it plays six notes simultaneously, but they are usually all different notes. The same is true for subcarriers in OFDM: they're all transmitted simultaneously but are arranged at slightly different frequencies so each one can be heard and understood, carrying its own small piece of data.
The device at the receiving end of the transmission will demodulate all of the subcarriers and reassemble the data.
Orthogonal Frequency Division Multiple Access (OFDMA)
Orthogonal Frequency Division Multiple Access (OFDMA) is a variant of Orthogonal Frequency Division Multiplexing (OFDM) that allows data transmission to multiple, separate receivers at the same time.
With OFDM, a transmitter that needed to send unique pieces of data to multiple receivers would need to transmit data to each receiver one at a time. This could create a performance bottleneck in the time domain, as a large amount of time could be consumed while the transmitter performs each unique transmission.
OFDMA alleviates this problem by allowing the transmitter to split its channel into multiple pieces in the frequency domain, transmitting data to multiple receivers simultaneously. This is accomplished by dedicating a block of subcarriers to one receiver, another block of subcarriers to another receiver, and so on. These blocks of subcarriers are called subchannels, resource blocks, or resource units (all terms are equally acceptable).
Figure 4.25 shows a transmitter sending data to two or three devices concurrently, depending on which transmissions need to occur.
Frequency Hopping
Frequency-Hopping Spread Spectrum (FHSS) devices avoid interference from other devices by rapidly moving from channel to channel as they transmit information. During data transmission, they usually consume very little spectrum, often only 1 to 3 MHz of frequency space. This makes them sound very similar to a narrowband transmitter, like an old 2.4 GHz cordless phone or a 5 GHz analog video camera, but unlike a narrowband transmitter, they do not stay still.
Frequency-hopping (FHSS) devices use varying methods to determine a pattern of channels to hop to and from. The transmitter will tune to a channel, dwell on that channel for a certain amount of time, tune to a new channel, dwell on the new channel for a certain amount of time, and repeat the process over and over, constantly hopping all over the spectrum that they're designed to work in. The amount of time that an FHSS transmitter spends on a single channel is called dwell time. Figure 4.26 shows how an FHSS device dwells on a channel for a short time before moving on.
Some FHSS devices use a predetermined pattern of channels that both the transmitter and receiver know about. They then stay synchronized by repeating the pattern. A good example of an inexpensive consumer electronic device that does this is a wireless video baby monitor.
One of the most famous examples of FHSS is Bluetooth, which uses Adaptive Frequency-Hopping Spread Spectrum (AFH), a form of FHSS, to detect "bad channels" and avoid them. This allows Bluetooth devices to dynamically avoid interference.
The primary advantage of FHSS is that it allows devices to avoid or cope with interference. Narrowband devices stay rooted in one frequency, and if they encounter interference, there's nothing they can do about it. FHSS constantly moves, limiting its exposure to interference. If an FHSS device does encounter interference, it will only be for a tiny moment, before the device moves on to the next channel.
Cellular Modulation Methods
Beyond all of the other modulation types that have been discussed here, there are two additional modulation methods that are often seen in the realm of cellular networks.
First is Time Division Multiple Access (TDMA), in which all wireless stations on the network share the same transmission frequency. To avoid stations transmitting at the same time and corrupting each other's signals, all of the stations essentially take turns transmitting on the channel, all controlled by a centralized authority. TDMA was used in legacy 2G GSM cellular networks, but it lives on in a variety of other technologies today, such as in some point-to-point wireless networks. Some point-to-point wireless bridges that use Wi-Fi can be placed in a dedicated TDMA mode to increase performance.
Instead of giving each device a time slot, Frequency Division Multiple Access (FDMA) provides separate sub-channels for each transmitting device.
TDMA and FDMA are illustrated in Figures 4.27 and 4.28 respectively.
The other notable modulation method in cellular networks is Code Division Multiple Access, or CDMA. CDMA uses a Walsh Code to convert user data into a series of chips. Chips from each user are then converted into simple waveforms, which are merged into a composite waveform for transmission. When the composite waveform is received, the Walsh Code is reapplied to recover the original data.
CDMA is used in WCDMA, CDMA 2000, 1xEVDO, and HSDPA/HSUPA, all of which are 3G cellular technologies.
Chirp Spread Spectrum (CSS)
CSS is used in LoRaWAN networks when the LoRa PHY (as opposed to the FSK PHY) is in use. CSS is based on the concept of a chirp: a burst in which the signal starts at one frequency and rises or falls to another frequency over the duration of the chirp. The range of frequencies across which the chirp is spread is the channel bandwidth. This modulation can be demodulated at distances exceeding several kilometers with moderate output power and the right antennas and elevation.
In LoRa modulation, the spreading factor (SF) determines the speed of the chirp, or how long it is on the air; that is, how long the chirp takes to spread across the used frequencies. A lower SF equals a faster sweep (chirp) rate, and a higher SF equals a slower sweep rate. A faster sweep rate results in higher data rates within the same bandwidth constraints.
Think of it like this: with a slower sweep rate, the signal stays on each frequency longer. Would it be easier to properly detect a signal that is there on each frequency for a longer period or a shorter period? The answer is, of course, the longer period. Therefore, when the signal is much weaker, we can still process it with a higher SF, which is equal to a slower sweep rate/chirp rate or more time on each frequency.
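To make the sweep-rate idea concrete, one commonly cited relationship gives the time a LoRa chirp spends on the air as 2^SF divided by the channel bandwidth. The sketch below (our own illustration, assuming a 125 kHz channel) shows how each increase in SF slows the chirp:

```python
# Chirp (symbol) time commonly cited for LoRa CSS: 2^SF / bandwidth.
def chirp_time_ms(sf, bandwidth_hz):
    return (2 ** sf) / bandwidth_hz * 1000

for sf in (7, 9, 12):
    print(f"SF{sf} @ 125 kHz: {chirp_time_ms(sf, 125_000):.2f} ms per chirp")
# SF7: 1.02 ms, SF9: 4.10 ms, SF12: 32.77 ms -- higher SF, slower chirp
```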
Additional details of the modulation are beyond the scope of this book, such as the number of bits per symbol and the complex relationship of bandwidth to data rate, but this information will suffice to understand the basic concept of CSS modulation.
Additional Modulation Methods
Let's take a quick look at a handful of other modulation methods that could be considered "legacy" technologies, but are still in use today.
At first glance, Amplitude Modulation (AM) might just seem like Amplitude Shift Keying (ASK), but they are two different types of modulation. While ASK modulates 1's and 0's by changing the amplitude, Amplitude Modulation (AM) is instead used to transmit an analog signal.
Today's most visible example of Amplitude Modulation feels a bit dated, but it's still readily accessible: AM radio. There are two sine waves involved. The first is the radio frequency carrier wave, which is from 535 to 1605 kHz, depending on the radio station. The second wave is the actual audio wave: the representation of human voice in waveform. The audio wave modulates the carrier wave by varying the amplitude (height) of the carrier in proportion to the audio wave. In Figure 4.29, you can see how the carrier wave "carries" the audio signal.
The advantage of AM is that it is very simple, but the disadvantage is that it is highly susceptible to any interference or atmospheric effects that modify amplitude. For example, if you've ever listened to AM radio near power transmission lines, you may notice a buzzing effect. This is interference from the transmission lines, increasing the amplitude of the AM carrier wave, which your car stereo then interprets as audio.
Frequency Modulation (FM) works very similarly to AM, except for a critical difference: instead of changing the amplitude as AM does, Frequency Modulation varies the frequency (and thus the wavelength) to carry the analog signal, while the amplitude stays constant. In the case of FM radio, the analog signal is a waveform that represents music or audio programming. FM radio isn't influenced by atmospheric disturbances as easily as AM radio, because changes in amplitude don't significantly impact the frequency modulation and thus the quality of the received signal. Figure 4.30 shows an audio wave and the proportional variations in frequency.
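For readers who like to experiment, the following Python sketch (using NumPy, with made-up example frequencies) generates both an AM and an FM waveform from the same audio tone, mirroring the behavior shown in Figures 4.29 and 4.30:

```python
import numpy as np

fs, fc, fa = 100_000, 10_000, 500   # sample rate, carrier, audio tone (Hz)
t = np.arange(0, 0.01, 1 / fs)
audio = np.sin(2 * np.pi * fa * t)  # stand-in for a voice or music waveform

# AM: the audio varies the carrier's amplitude (50% modulation depth here).
am = (1 + 0.5 * audio) * np.cos(2 * np.pi * fc * t)

# FM: the audio varies the carrier's instantaneous frequency; integrating
# the audio (cumulative sum) yields the phase deviation.
deviation_hz = 2_000
phase = 2 * np.pi * deviation_hz * np.cumsum(audio) / fs
fm = np.cos(2 * np.pi * fc * t + phase)
```

Plotting am and fm against t reproduces the characteristic shapes: a varying envelope for AM, and constant amplitude with varying wave spacing for FM.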
Perhaps the oldest type of modulation discussed here is Continuous Wave (CW), which could also be called on/off carrier keying. Essentially, a CW transmission keys a signal by abruptly beginning the carrier wave, maintaining it for a specific amount of time, and abruptly ending the carrier wave again. If this sounds a lot like a radiotelegraph (that is, a telegraph that uses radio instead of transmission wires), then you are correct! CW is the exact modulation scheme that was used by radiotelegraph systems to transmit Morse code.
CW is still a viable way to transmit data. Because it is so simple, it is very robust, allowing it to function in very adverse radio frequency conditions. That said, it is extremely slow, so today's use of CW is primarily by amateur radio enthusiasts.
The following represents a number of common or interesting frequencies and frequency bands used by various technologies. It is by no means an exhaustive list; a complete list would most likely consume all of the pages of this book. Instead, this list is intended to illustrate the sheer number of various frequency band allocations that exist. Also keep in mind that this is for North America only, as other regulatory domains (such as European Telecommunications Standards Institute or ETSI in Europe) will assign different allocations to different frequencies in some cases.
Frequency Range | Technology |
---|---|
1.8 MHz | Amateur Radio (160 meter band) |
144 MHz | Amateur Radio (2 meter band) |
440 MHz | Amateur Radio (70 centimeter band) |
535-1605 kHz | AM Radio |
88-108 MHz | FM Radio |
600 MHz | LTE (Cellular) |
700 MHz | LTE (Cellular) |
800 MHz | LoRa and others |
902-928 MHz | Unlicensed Band (Z-Wave, consumer electronics) |
1227.60 MHz | GPS L2 |
1575.42 MHz | GPS L1 |
1700 MHz | LTE (Cellular) |
1900 MHz | LTE (Cellular) |
2110-2120 MHz | NASA Deep Space Network Uplink (S band) |
2290-2300 MHz | NASA Deep Space Network Downlink (S band) |
2.36-2.4 GHz | Wireless Body Area Network (WBAN) |
2.4 GHz | Unlicensed Band (Wi-Fi, Bluetooth, Zigbee, consumer electronics) |
2.5 GHz | LTE (Cellular) |
3.55-3.7 GHz | Citizens Broadband Radio Service (CBRS) |
4.9 GHz | Fixed and mobile services for public safety use only |
5 GHz | Wi-Fi, consumer electronics |
6 GHz | Point-to-point communications, possible future Wi-Fi expansion |
7145-7190 MHz | NASA Deep Space Network Uplink (X band) |
8400-8450 MHz | NASA Deep Space Network Downlink (X band) |
27-60 GHz | mm-Wave technologies (5G, WLAN, bridging) |
Figure 4.31 shows the common terms used for the various frequency bands. There is some disagreement over the exact ranges of these bands, but the terms are very common in RF engineering. Originally, the band frequency ranges were chosen so that wavelengths had rounded numbers at the edges of the bands (The IEEE Wireless Dictionary, James P. Gibb, Wiley, 2005). Many wireless solutions span these bands or use frequency blocks in different bands depending on their implementation.
In addition to the information in Figure 4.31, it is important to know that many RF engineers use shorthand terminology to reference various frequency ranges. The following list will help you in understanding what is meant by this colloquial terminology:
In this chapter, we began by defining what a wave is, as well as identifying the key characteristics of a wave, including frequency, wavelength, amplitude, and phase. The frequency of a wave determines how quickly the wave completes a full cycle and is usually measured in hertz (1 cycle per second), kilohertz (1000 cycles per second), megahertz (one million cycles per second), and gigahertz (one billion cycles per second). A wavelength is the distance between two adjacent identical points on a wave, such as from one crest to the next crest. Higher frequencies mean shorter wavelengths, and lower frequencies mean longer wavelengths.
Amplitude refers to the height or power of a wave. Just like loud sounds can be heard from greater distances than quieter sounds, a high-amplitude radio signal can be detected and understood at greater distances than low-amplitude signals.
While frequency, wavelength, and amplitude are characteristics of single RF waves, the phase is a comparison between two waves. Two waves that arrive at the receiver in synchronization with each other are said to be in-phase and have the effect of amplifying the signal at the receiver. Two waves that arrive 90° out-of-phase with each other will be more difficult for the receiver to understand, as the sine wave is no longer a pure form and has experienced a certain amount of corruption. Two waves that arrive at a receiver 180° out-of-phase with each other will completely cancel each other out, nullifying the radio transmission.
Just like audio amplifiers increase the volume of an audio signal, radio amplifiers increase the amplitude of a radio signal. Active Gain refers to the process of amplifying a radio signal using an external power source, while Passive Gain uses no external power source, instead relying on more focused antennas to provide gain for the radio signal.
Attenuation occurs when a radio frequency signal is reduced in amplitude. This usually happens when RF passes through an object. Free Space Path Loss refers to the weakening of an RF wave, due to the broadening wavefront, and Absorption occurs when RF energy is dissipated inside an object. Just like light reflects off of a mirror, RF reflects off of smooth, non-absorptive surfaces, sharply changing the signal's direction. Refraction occurs when RF passes from one object density to another, changing the speed of the RF, and bending it into a slightly new direction. This phenomenon is observable by placing a spoon in a bowl of water, and observing how the spoon appears to be bent in the water. Scattering happens when RF strikes an uneven surface.
Watts, milliwatts, and microwatts are all absolute units of RF power. Decibels (dB) represent a logarithmic change in signal strength. Remember the rules of 10s and 3s:
3 dB of gain doubles the power, and 3 dB of loss halves it.
10 dB of gain multiplies the power by 10, and 10 dB of loss divides it by 10.
Decibels in relation to a milliwatt (dBm) show absolute power, and greatly simplify the readability of signal strength in many cases. 1 mW = 0 dBm, and 100 mW = 20 dBm.
Noise is any signal that a receiver cannot decode or distinguish from other signals, and the noise floor refers to the ambient noise in the RF environment, whether it be natural background noise or noise from other radio devices. The definition of signal-to-noise ratio (SNR) is how much signal strength can be heard above the background noise.
Next, we discussed different ways of modulating signals on radio waves. The first was Amplitude Shift Keying (ASK), which moves the amplitude up or down to communicate 1's and 0's. Similarly, Frequency Shift Keying (FSK) changes the frequency of the wave to communicate 1's and 0's. Finally, Phase Shift Keying (PSK) changes the phase on-the-fly for the same purpose as above.
While Binary Phase Shift Keying (BPSK) can only transmit a 1 or a 0, Quadrature Phase Shift Keying (QPSK) shifts the phase of the wave among four distinct phases. Depending on the selected phase, a target vector from a constellation of four symbols is selected, and with QPSK, each symbol contains two bits instead of just one.
Quadrature Amplitude Modulation (QAM) couples both Phase Shift Keying and Amplitude Shift Keying together to add more points to the constellation, which in turn allows each symbol to carry more bits. 16-QAM has a constellation of 16 symbols with four bits each. 64-QAM offers 64 constellation points, and so on.
Orthogonal Frequency Division Multiplexing (OFDM) uses subcarriers to transmit a lot of data at once, and Orthogonal Frequency Division Multiple Access (OFDMA) can split the channel into Resource Units to transmit data to multiple receivers at the same time.
Frequency Hopping Spread Spectrum (FHSS) devices use a narrow bandwidth to transmit but hop all over their allocated space to avoid interfering with other devices. The hopping is usually very rapid, minimizing the chance of receiving interference.
Amplitude Modulation (AM) is exactly what an AM radio uses; an analog signal is carried on amplitude variations in the carrier wave. Similarly, Frequency Modulation (FM) varies the frequency of the carrier wave to convey an analog signal.
RF hardware can be considered at different detail levels. We will begin this chapter by exploring these levels. Next, we will investigate the inner components of a wireless device, including the chips and circuits that provide for RF communications. We will then move to the link types created by various wireless devices and conclude the chapter with a general summary of RF device types (the highest layer of hardware abstraction).
Understanding the functionality of RF hardware and the various hardware types is essential for the CWISA exam; however, it is also vital when administering wireless networks. As a wireless solutions administrator, you will encounter scenarios where you must select appropriate equipment and replace equipment when the in-use hardware is no longer available. In such scenarios, it is essential that you understand the functionality of the wireless devices and implement new devices that meet the needs of the organization.
RF hardware can be considered from many detail levels, but this chapter will focus on three levels:
Circuit board level: At this level, you explore the individual chips and circuits that make up a radio. Understanding the hardware that provides functionality for wireless communication assists the wireless solutions administrator in making effective decisions. The section titled Basic RF Hardware Components (Circuit Board Level) will address this level.
Link type level: At this level, you explore the wireless communications functionality provided by the RF hardware or system. Does it provide bridging functionality, mesh functionality, ad-hoc communications, or something else? The section titled RF Link Types (Use Category) will address this level.
Device type level: At this level, you explore the various devices as whole units, along with the capabilities they provide and the features they offer. For example, a wireless sensor is a specific device type that may participate in a mesh or other wireless network. As another example, a Bluetooth device may be used in a location tracking system, and it may not connect to a network continually (on-demand). The section titled RF Device Types will address this level.
The remainder of this chapter will explore these three levels in detail. Understanding this information from a conceptual and practical perspective will allow you to better grasp any RF hardware you work with in the future.
The circuit board in an electronic device is the foundation on which circuits, transistors, chips, resistors, sensors, and other components can be placed, or to which they may be connected, to interact with each other. In mass-manufactured devices, a printed circuit board (PCB) is used, and these can be seen inside nearly every wireless device manufactured and sold today. Figure 5.1 shows an example of an assembled PCB used in a Zigbee access point for a wireless sensor network (WSN). The PCB brings together radio chipsets, filters, amplifiers, resistors, transistors, and various other components depending on the needs of the device. For wireless devices, antennas may be printed into the PCB, or they may be attached to it as needed.
Additional board types may be used, but in production IoT devices, PCBs are the most common form.
The following sections explain some of the most common components used to build radios, antennas, amplifiers, attenuators, and splitters, which are all used in different ways within wireless devices and wireless solutions.
RF-based wireless devices use radios to transmit and receive signals. A wireless link is used to transport information between the two nodes in the link. Information is encoded and transported across a carrier wave as a signal. A transmitter sends a signal to be received by a receiver. A radio that can both transmit and receive is called a transceiver (a combination of the words transmitter and receiver).
The transmitter is sometimes called the source, and the receiver is sometimes called the destination or the sink. In most wireless links, transceivers are used on both ends of the link because they send signals back-and-forth to each other. This behavior is not always the case; for example, a Bluetooth Low Energy (BLE) beacon may be transmitted by one device and received by another, but the receiving device may not reply in any way using BLE (though it may take action using another wireless radio within the same device).
A solution that transmits only in one direction (from the transmitter to the receiver) is known as a simplex system. When transmission can occur in both directions, it is known as a duplex system. A duplex system can be either half-duplex or full-duplex. A half-duplex system can either transmit or receive at any given moment, but it cannot do both concurrently. A full-duplex system can both transmit and receive concurrently. Both half-duplex and full-duplex wireless systems are categorized as transceivers.
Technically, the radio is the part that generates the RF signal, and the antenna is the part that "leaks" the signal into the environment. While the radio can process an incoming signal, it requires an antenna to "capture" that signal from the environment.
Three categories of RF signal radiators are commonly defined:
Incidental Radiators: Electric or mechanical devices that generate RF energy and radiate it into the environment, even though they are not designed to produce RF energy. According to the FCC, "An incidental radiator (defined in Section 15.3 (n)) is an electrical device that is not designed to intentionally use, intentionally generate or intentionally emit radio frequency energy over 9 kHz. However, an incidental radiator may produce byproducts of radio emissions above 9 kHz and cause radio interference."
Unintentional Radiators: Electric devices that generate electrical or radio frequency signals that are intended to be contained within the system or a conductive link (such as a wire-based cable) and not radiated into the environment, but they may emit some RF energy in spite of the intended design. According to the FCC, "An unintentional radiator (defined in Section 15.3 (z)) is a device that by design uses digital logic, or electrical signals operating at radio frequencies for use within the product, or sends radio frequency signals by conduction to associated equipment via connecting wiring, but is not intended to emit RF energy wirelessly by radiation or induction. Today the majority of electronic-electrical products use digital logic, operating between 9 kHz to 3000 GHz and are regulated under 47 CFR Part 15 Subpart B."
Intentional Radiators: Electric devices designed to generate radio frequency signals and transmit them into the environment. According to the FCC, "An intentional radiator (defined in Section 15.3 (o)) is a device that intentionally generates and emits radio frequency energy by radiation or induction that may be operated without an individual license. Examples include: wireless garage door openers, wireless microphones, RF universal remote-control devices, cordless telephones, wireless alarm systems, Wi-Fi transmitters, and Bluetooth radio devices." The wireless solutions discussed in this book fit into the intentional radiator category.
Why is it important to know about these three categories of radiators? Because all three result in RF energy or signals in the environment at various amplitudes. When troubleshooting a wireless interference problem, the interferer is not always an intentional radiator. Often, the interferer is in the incidental or unintentional radiator categories, and the trained wireless solutions administrator must remember to look for such interferers as well.
To understand the components in a logical radio transceiver, consider the block diagram in Figure 5.2. The figure is not intended to represent any specific wireless device, but to explain the parts and their purposes. The listed components will be explained in the remainder of this section.
Important components used to build a radio system or transceiver include:
Amplifiers (including low-noise amplifiers)
Filters
Oscillators and mixers
Analog-to-Digital Converters (ADCs) and Digital-to-Analog Converters (DACs)
Modulators and demodulators
Amplifiers include low-noise amplifiers (LNAs), intermediate frequency (IF) amplifiers, and other amplifiers. The purpose of an amplifier is to increase the amplitude (power or strength) of the RF signal during transmission or reception.
Every component added to the radio chain has the potential to reduce the SNR at a receiver (and usually does). The component itself adds noise as the RF energy passes through it. The difference between the SNR entering a component and the SNR exiting a component is called the noise figure. Noise figure is the same as noise factor except that noise factor is expressed as a ratio and noise figure is expressed in dB.
LNAs are used in the receive chain at or near the entry of the signal into the radio chain. An LNA is designed to amplify the signal for further processing while introducing as little noise as possible. Receivers can process very weak signals, in large part, due to the use of LNAs. An LNA may amplify the signal by 50 dB or more with a noise figure as low as 1 dB. Therefore, if a received signal is -90 dBm with an SNR of 10 dB and it is amplified by 50 dB with a noise figure of 1 dB, the resulting signal will be -40 dBm with an SNR of 9 dB. If the amplifier had a noise figure of 5 dB, the resulting SNR would only be 5 dB. As you can see, the noise figure in an LNA is crucial and impacts the overall receive sensitivity of the receiver significantly.
At this point, you are probably wondering why we are going into all the details of the components that make up a radio. Why not just say that a transceiver can transmit and receive and leave the rest out? The answer is simple: understanding the components in the system and the impact they have on signal processing helps you to understand why one vendor's device may be superior to another vendor's device. Specification sheets may help if they list details like receiver sensitivity, but they may not.
Consider, for example, a wireless device that uses five internal primary components (amplifiers, filters, etc.). Imagine that the device has an average noise figure of 1.9 dB for each component. That's a total of 9.5 dB loss in SNR due to the cumulative noise figure. Further, another device has an average noise figure of 1.1 dB for each component. That's a total of 5.5 dB loss in SNR due to the cumulative noise figure. The difference in SNR between these two devices means that the latter device may be able to achieve higher data rates or even connect when the former device cannot. This concept is the foundation of receiver sensitivity.
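The arithmetic in that comparison is easy to capture in a few lines of Python. This sketch follows the chapter's simplified additive model, in which each stage's noise figure subtracts directly from the SNR (a full cascade analysis would also account for each stage's gain):

```python
# Simplified additive model: each stage's noise figure (dB) costs SNR (dB).
def snr_after_chain(snr_in_db, stage_noise_figures_db):
    return snr_in_db - sum(stage_noise_figures_db)

print(round(snr_after_chain(25, [1.9] * 5), 1))  # 15.5 dB left of a 25 dB input
print(round(snr_after_chain(25, [1.1] * 5), 1))  # 19.5 dB with better components
```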
So, why wouldn't every vendor use the better-quality components? The answer lies in total cost of development. For example, reviewing websites in 2018, an LNA with 40 dB of gain and a 0.8 dB noise figure sold for $936.55 (this is an inline external LNA, hence far more expensive than the internal components used within devices). The same manufacturer was selling a 50 dB gain LNA with a 1 dB noise figure for only $524.65. While the individual component prices are not as high for PCB implementations, the price variance, multiplied across thousands of units, adds up quickly. This reality (and the software factor) is why consumer-grade devices are often inferior to enterprise-grade devices, and it's also part of the reason for the significant price difference between them.
The result of this knowledge is an understanding that you must test equipment whenever possible before purchasing one hundred or ten thousand units. Imagine the impact on your networks and users if you purchase several thousand units that are significantly inferior to those that cost only a few dollars more. Lab testing, in your environment (not with high-end engineering gear), can help to validate that a device performs as required.
-Tom
Filters are used to limit the signal to the desired frequencies. Several types of filters can be used, but we will focus on image rejection filters and bandpass filters here.
Before looking at filters specifically, it is important to understand the concept of an intermediate frequency (IF). For simpler processing, many radio systems down-convert the received signal to an IF. The IF is then processed for actual demodulation.
Using an IF provides several possible benefits:
Lower frequencies are easier and less expensive to filter and amplify with precision.
The same IF-stage components can be used regardless of which channel the radio is tuned to.
Fixed-frequency IF filters can provide better selectivity than tunable filters.
The phrases image rejection or RF image rejection refer to the undesired frequencies received by a wireless device that are the image of the desired frequencies. The image frequencies are signals located the same distance from the local oscillator frequency as the desired signal, but on the opposite side, so they would also down-convert to the same IF. Filters perform image rejection so that these unwanted frequencies (the mirror image of the wanted frequencies) do not introduce errors during down-conversion to the IF.
Filters can also perform selectivity. In this case, such as the functionality of a bandpass filter, the filter allows the desired frequencies through. Bandpass filters help to prevent or diminish adjacent channel interference. Adjacent channel interference occurs when the next or previous channel is in use near the receiver, and the signal is sufficiently strong. Bandpass filters can help to reduce this interference as long as the adjacent channel energy is not too strong.
Oscillators generate electromagnetic waves and, when controlled, usable signals. In many wireless transceivers, the received signal is processed by a mixer along with an oscillator known as the local oscillator (LO). The mixer "mixes" the LO signal with the received RF and converts it to an IF. The IF is then used for the actual processing of the received signal.
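Mixing is easy to demonstrate numerically. In the NumPy sketch below (with made-up frequencies, scaled down for illustration), multiplying a received signal by the LO produces energy at both the difference frequency (the IF) and the sum frequency, which a filter would then remove:

```python
import numpy as np

fs = 1_000_000                   # sample rate (Hz)
t = np.arange(0, 0.005, 1 / fs)
f_rf, f_lo = 100_000, 90_000     # received signal and local oscillator (Hz)

mixed = np.cos(2 * np.pi * f_rf * t) * np.cos(2 * np.pi * f_lo * t)

# The product contains f_rf - f_lo (a 10 kHz IF) and f_rf + f_lo (190 kHz).
spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(mixed.size, 1 / fs)
print(np.sort(freqs[spectrum.argsort()[-2:]]))  # [ 10000. 190000.]
```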
Analog-to-Digital Converters (ADCs) convert received signals to baseband digital signals on reception, and Digital-to-Analog Converters (DACs) convert baseband digital signals to analog signals that are up-converted to RF for transmission. Modulators and demodulators can be said to encompass the ADC/DAC and oscillation functions to generate RF signals at transmission and convert RF signals on reception.
Many more details could be provided related to these and other components used in RF and microwave engineering. However, the CWISA will not be building wireless devices. The CWISA will be implementing and supporting the devices. Understanding the high-level concepts presented here helps you to comprehend the functionality of a wireless device, in relation to RF signals, and better select quality hardware.
Antennas are essential to RF and microwave communications. Without an antenna, unless a receiver is exceptionally close to a signal leak, communications cannot occur. Some wireless solutions use a single antenna and others use multiple antennas on each device. This section provides essential information regarding this important component in wireless communications.
The antenna is the radiating element in an RF system. It is the component that results in the propagation of RF waves through space. It is also the device that receives the RF signals from other propagating antennas. Different antennas have different coverage capabilities and different characteristics.
If you stand on the top of a tall building, you can see for a very great distance. You may even be able to see for many miles on a very clear day. If you can physically see something, it is said to be in your visual line of sight (visual LOS) or just LOS, for simplicity. Visual LOS is also called physical LOS. This visual LOS is the transmission path of the light waves from the object you are viewing (transmitter) to your eyes (receiver). Visual LOS is an apparently straight line from your perspective, but light waves are subject to similar behavior as RF waves, like refraction and reflection, and therefore the line may not actually be straight. Consider an object you are viewing in a mirror. The object is not directly in front of you, and yet it appears to be, showing that visual LOS is not necessarily a straight line between two objects.
Because RF is part of the same electromagnetic phenomenon as visible light, behaviors similar to visual LOS exist. However, RF LOS is more sensitive than visual LOS to interference near the path between the transmitter and the receiver, particularly when creating bridge links over some distance. You might say that more space is needed for the RF waves to be seen by each end of the connection. This extra space can be calculated and has a name: the Fresnel Zone. The Fresnel Zone is only important for long-distance point-to-point links, as indoor propagation patterns ensure the signals get through to the clients as long as the entire wireless solution is implemented based on proper design principles. Figure 5.3 illustrates the Fresnel Zone concept.
Before getting into specific antenna types, we need to explore a bit more math and some concepts related to outdoor links. The Fresnel zones (pronounced frah-nell) are named after the French physicist Augustin-Jean Fresnel and are a theoretically infinite number of ellipsoidal areas around the LOS in an RF link. The first Fresnel zone is the zone with the most significant impact on point-to-point long-distance links, in most scenarios. The Fresnel zones have been referenced as an ellipsoid-shaped area, an American football-shaped area, and even a Zeppelin-shaped area. In this text, we will refer to the first Fresnel zone as 1FZ from this point forward for simplification. Since 1FZ is an area surrounding the LOS, and this area cannot be largely blocked and still provide a functional link, it is important that you know how to calculate the size of 1FZ for your links. You'll also need to consider the impact of earth bulge on the link and 1FZ.
To calculate the radius of the 1FZ, use the following formula:
radius = 72.2 x √ (D / (4 x F))
Where:
D = the distance of the link in miles
F = the frequency used for transmission in GHz
radius = the radius of 1FZ in feet, calculated at the midpoint of the link
Example:
If you are creating a link that will span 1.5 miles and you are using 900 MHz radios:
72.2 x √ (1.5 / (4 x 0.9)) = 46.6 feet
This formula provides you with the radius of the 1FZ, and doubling the result would give you the diameter, if you needed it to be calculated. However, it is important to realize that a blockage of the 1FZ of more than 40% can cause the link to become non-functional. To calculate the 60% radius, so that you can ensure it remains clear, use the following formula:
clearance radius = 43.3 x √ (D / (4 x F))
Where D is the distance of the link in miles, F is the frequency used for transmission in GHz, and the radius is reported in feet. Using the same example we used to calculate the radius of the entire 1FZ, you will now see that the 60% clearance radius is only 27.96 feet. However, this leaves no room for error or change. For example, trees often grow into the 1FZ and cause greater blockage than they did at the time of link creation. For this reason, many wireless administrators choose to use a 20% blockage or 80% clearance guideline, and this is the recommended minimum clearance of the CWNP program as well.
So how would you calculate this? Use the following formula:
recommended radius = 57.8 x √ (D / (4 x F))
Once you've processed this formula, you will see that the recommended minimum of 80% clearance (recommended maximum of 20% blockage) results in a 1FZ radius of 37.28 feet in our example.
As it is always better to be safe rather than sorry when creating point-to-point links, you will probably want to make it a habit to round your Fresnel zone calculations upward. For example, we would round the recommended radius to 38 feet in our example.
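If you calculate these values often, a small helper keeps the arithmetic consistent. This Python sketch (our own helper, built from the formulas above) reproduces the numbers from this section:

```python
import math

def fresnel_radius_ft(d_miles, f_ghz, clearance=1.0):
    """Midpoint 1FZ radius in feet; clearance=0.6 or 0.8 scales the result."""
    return clearance * 72.2 * math.sqrt(d_miles / (4 * f_ghz))

d, f = 1.5, 0.9  # the 1.5-mile, 900 MHz example link
print(round(fresnel_radius_ft(d, f), 1))       # 46.6 ft (full 1FZ radius)
print(round(fresnel_radius_ft(d, f, 0.6), 2))  # 27.96 ft (60% clearance)
print(round(fresnel_radius_ft(d, f, 0.8), 2))  # 37.28 ft (80% recommended)
```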
You might be wondering why we calculate the radius instead of the diameter. The reason is simple: we can determine where the visual LOS resides and then measure outward in all directions around that point to determine where the 1FZ required clearance area resides. Remember, the 1FZ does not reside in a downward direction only. It might seem that way since we are usually dealing with trees and other objects protruding up from the ground as interference and blockage objects. However, it is entirely possible that something could be hanging down from a very high position — such as a bridge — and encroach on the 1FZ from above the visual LOS. Additionally, buildings and other objects can cause blockages from the sides.
For example, if you are attempting to create a point-to-point link that has visual LOS between two buildings on either side of the link in a downtown area, the two buildings may encroach on the 1FZ required clearance area resulting in insufficient signal for a consistent connection.
Another factor that should be considered in 1FZ blockage is the Earth itself. As you know, the Earth — it turns out — is round. The farther apart any two objects are, the more likely it is that the Earth's curvature rises into the path between them. This scenario is demonstrated in Figure 5.4. Note the encroachment of the earth on the 1FZ over a significant distance.
If you are creating point-to-point links over distances greater than 7 miles using wireless technologies, you will need to account for earth bulge in your antenna positioning formulas. You will not need to memorize the following formula for the CWISA examination, but you will need to know that earth bulge is a potential problem in outdoor wireless links over greater distances.
The formula for calculating the extra height your antennas will need to compensate for earth bulge is:
Height = D² / 8
Where height is the height of earth bulge in feet, and D is the distance between antennas in miles. Therefore, if you are creating a 10-mile link, you would process the following formula:
100 / 8 = 12.5 feet
Using our guideline of rounding up, we would raise the antenna height by 13 to 14 feet to accommodate for earth bulge.
To bring all the discussion of Fresnel zones together, it is important that you learn to deal with 1FZ obstructions. If the obstructions are coming up from the ground into the 1FZ and there are no obstructions anywhere above it, you can often solve the problem by simply raising the antennas involved in the communication link. For example, if there is a forest with a maximum tree height of 23 feet between the two antennas, and there is a distance of 11 miles that must be spanned using 2.4 GHz radios, we can calculate the needed height for the antennas, including earth bulge, with the following formula:
minimum antenna height = (57.8 x √(11 / (4 × 2.4))) + (121 / 8)
This calculation might seem complex at first, but it is a simple combination of the recommended 1FZ clearance formula and the earth bulge formula. The result is rounded up to 77 feet. You will need to install very high towers, and you will also need to monitor the forest, though it is unlikely that the trees would grow that much more into the 1FZ in a few years. Additionally, you will likely be required to acquire permits for the towers in most regulatory domains or lease/license space on existing towers.
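To verify the arithmetic, this Python sketch (our own helper, combining the two formulas above) computes the minimum antenna height for the 11-mile, 2.4 GHz example and for the 5.745 GHz alternative discussed next:

```python
import math

def min_antenna_height_ft(d_miles, f_ghz):
    # 80% 1FZ clearance radius plus earth bulge, both in feet.
    clearance = 57.8 * math.sqrt(d_miles / (4 * f_ghz))
    earth_bulge = d_miles ** 2 / 8
    return clearance + earth_bulge

print(round(min_antenna_height_ft(11, 2.4)))    # 77 ft at 2.4 GHz
print(round(min_antenna_height_ft(11, 5.745)))  # 55 ft at 5.745 GHz
```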
If the obstructions are coming into the 1FZ from the sides, such as buildings intruding into the path, you will have to either calculate the 1FZ for a different frequency to see if you can get the clearance, or you will have to raise the antennas above the buildings. You may also be able to create a multi-hop link to "shoot" around the buildings if you can gain access rights to a third location that can be seen (RF LOS, including 1FZ) by both of your locations.
Notice that it was an option to calculate the 1FZ with a different frequency. Because the Fresnel zones are a factor of wavelength (hence frequency) and not a factor of antenna gain or beamwidth (covered later in this chapter), an important distinction, you can often implement a point-to-point link successfully using a different frequency.
For example, the 77-foot antenna height that allows us to communicate over the top of the forest across 11 miles can be lowered to approximately 55 feet if you are using devices in the 5 GHz range. However, the trade-off is in the distance. The 2.4 GHz signals are detected more easily than 5 GHz signals at a distance due to the receiving area of the antenna element and the length of the signal wave, but 5 GHz signals have a narrower 1FZ.
The formula, when using the 5 GHz band, changes to the following, assuming you use a center frequency of 5.745 GHz:
minimum antenna height = (57.8 x √(11 / (4 × 5.745))) + (121 / 8)
An example of this is a link that travels only about a city block (0.1 miles). In the 2.4 GHz spectrum, the 1FZ radius would be approximately 6 feet. In the 5 GHz spectrum, the 1FZ would only be about 4 feet. Remember, this means 6 feet or 4 feet out from the center point in all directions.
Therefore, a 5 GHz link traveling between two buildings for 0.1 miles would require a space between the buildings of about 8-9 feet, while the 2.4 GHz link would need a space between the buildings of about 12-13 feet. These factors are important considerations.
Different antennas have different beamwidths, and this beamwidth is the measurement of how broad or narrow the focus of the RF energy is as it propagates from the antenna along the main lobe. The main lobe is the primary RF energy coming from the antenna. It is the intended direction of propagation.
Beamwidth is measured both vertically and horizontally, so don't let the term width confuse you into thinking it is a one-dimensional measurement. Specifically, the beamwidth is a measurement taken from the center of the RF signal to the points on the vertical and horizontal axes where the signal decreases by 3 dB or half power.
In the end, there is a vertical and horizontal beamwidth measurement that is stated in degrees. Figure 5.5 shows both the concept of the beamwidth and how it is measured and Table 5.1 provides a table of common beamwidths for various antenna types (these antenna types are each covered in detail later in this chapter).
Antenna Type | Horizontal Beamwidth | Vertical Beamwidth |
---|---|---|
Omnidirectional | 360 degrees | 7 to 80 degrees |
Patch/Panel | 30 to 180 degrees | 6 to 90 degrees |
Yagi | 30 to 78 degrees | 14 to 64 degrees |
Sector | 60 to 180 degrees | 7 to 17 degrees |
Parabolic Dish | 4 to 25 degrees | 4 to 21 degrees |
While beamwidth measurements give us an idea of the propagation pattern of an antenna, they are less than perfect at illustrating the actual areas that are covered by the antenna. For more useful visual representations, you will want to reference Azimuth and Elevation charts. However, when textual documentation of an antenna's characteristics is desired, the beamwidth is typically the best choice.
Where the beamwidth calculations provide a measurement of an antenna's directional power, Azimuth and Elevation charts, which are typically presented together, provide a visualization of the antenna's propagation patterns.
Figure 5.6 shows an example of an Azimuth chart and Figure 5.7 shows an example of an Elevation chart.
The difference between an Azimuth and Elevation chart is simple: The Azimuth chart shows a top-down view of the propagation path (to the left, in front, to the right and behind the antenna) and the Elevation chart shows a side view of the propagation path (above, in front, below and behind the antenna). Think of these charts in terms of a dipole antenna that is positioned vertically upright. If you are standing directly above it and looking down on it, you are seeing the perspective of an Azimuth chart. If you are beside it and looking at it from a horizontally level position, you are seeing the perspective of an Elevation chart.
The Azimuth chart in Figure 5.6 reports the signal strength you can expect at different degrees around the antenna. For example, at 90 and 270 degrees (to the immediate left and right of the antenna's intended propagation direction), you will see a loss of approximately 20 dB. Directly behind the antenna, at 180 degrees, you will see a loss of approximately 35 to 50 dB. This is a sector antenna and is intended to propagate its energy in one direction, but in a fairly wide path.
The Elevation chart in Figure 5.7 shows similar information in the vertical view for the same antenna. You will notice that the pattern of propagation is very similar to the Azimuth pattern. Like most Elevation charts, it is shown with the primary radiation direction to the right. Remember, this is intended to represent you looking at the antenna's propagation pattern from the side view. You can see that this antenna has very similar levels of loss along the same degree levels as the Azimuth chart.
The isotropic radiator is a fictional device or concept that cannot be developed using today's technology. Many say that it is not only impossible now, but because of the constraints of physics, it will always be impossible. While the future may be debatable, we know that you cannot currently create an antenna that propagates RF energy equally in all directions. This truth is due to the fact that the antenna must have some length (it must exist) and it must receive power from some source (it must be connected to something). These two constraints alone make it impossible to create an isotropic radiator at this time.
Even though we cannot create such a device, it is a useful theoretical concept in that we can use it as a basis for measurements. In fact, dBi — as was stated earlier in the book — is a measurement of the gain of an antenna in a particular direction over the power level that would exist in that direction if the RF energy were propagated by an isotropic radiator. In other words, dBi is a measurement of the difference between the power levels at a point in space generated by a real antenna versus the theoretical isotropic radiator. Since we can all agree on the behavior of an isotropic radiator, we can all use it as a basis for such power level measurements. Figure 5.8 illustrates the concept of the theoretical isotropic radiator.
The sun is often used as an analogy for an isotropic radiator. While this is an acceptable analogy, specific theories in physics — such as the hairy ball theorem — would exclude even the sun from being a true and complete isotropic radiator. However, it is one of the objects we've found to be closest to an isotropic radiator in that light indeed propagates from it in all directions. If we could analyze that light at the level of individual photons — or even individual waves — it is questionable whether the rays are truly radiated "equally" in all directions.
A factor that significantly impacts the performance of RF antennas is the polarization of the antennas. Antenna polarization refers to the physical orientation of the antenna in a horizontal or vertical position — typically. Technically, it's about how the antenna is designed to be used, but in practical application, it's about how you position it.
You'll remember from previous discussions in this book, that the electromagnetic wave is made up of electric and magnetic fields. The electric field forms what is known as the E-plane, and the magnetic field forms what is known as the H-plane. The E-plane is parallel to the radiating antenna element, and the H-plane is perpendicular to it.
Simply stated, the E-plane runs alongside the antenna regardless of how you position it. If you put the dipole antenna upright, the E-plane is upright alongside it; however, if you tilt it, the E-plane is tilted as well.
The E-plane, or electric field, determines the polarization of the antenna since it is parallel to the antenna. Therefore, if the antenna is in a vertical position, it is said to be vertically polarized. If the antenna is in a horizontal position (the electric field and antenna are parallel to the Earth), it is said to be horizontally polarized.
A vertically polarized omnidirectional antenna propagates the signal horizontally, and a horizontally polarized omnidirectional antenna propagates the signal vertically, which is not what you typically desire unless you are creating a bridge link between floors in a tall building. If you configure a link like this (horizontally between floors and along walls), a best practice would be to place the antennas approximately 2 feet out from the wall to prevent the wall from creating interference. The link would likely work anyway, but communications can be improved with this consideration. The spacing of 2 feet should keep the 1FZ 80% clear for up to 60-70 feet.
The impact of polarization is seen when antennas are not polarized in the same way. For example, if you have one device with the antennas positioned vertically (vertical polarization), and you have another device with the antenna down (horizontal polarization), your connectivity will be less stable and, at greater distances, may even be lost. However, in most cases, due to indoor reflections, the polarization of antennas does not have as great an impact indoors as it does with outdoor links. In outdoor links, the proper polarization of the antennas can make or break the connection.
Remember this: vertical polarization usually means that most of the signal is being propagated horizontally and horizontal polarization means that most of the signal is being propagated vertically, as previously stated. Therefore, the most popular polarization is vertical polarization because we are typically trying to send the signal along the direction of the Earth's surface, whether it is a five-meter link or a five-kilometer link.
Antenna diversity is a feature offered by many wireless devices that allows the device to receive signals using two antennas and one receiver. In a traditional antenna diversity implementation, only one antenna is used at a time, so this should not be confused with Multiple-Input/Multiple-Output (MIMO) configurations. The device supporting antenna diversity will look at the signal that comes into each antenna and choose the better signal, on a communication-by-communication basis. Again, remember that there is only one receiver with two connections and two antennas.
An additional type of diversity is Multiple-Input/Multiple-Output (MIMO) diversity. MIMO systems use more than one antenna in several different ways, but they can also support diversity. For example, a device may be a 2x3 device, which means that it can transmit on two antennas but receive on three. Such a configuration would allow for diversity selection during frame reception, providing MIMO receive diversity. Such a solution may also support maximal ratio combining (or maximum ratio combining, depending on whose whitepaper you're reading) as discussed in a following section. Wireless devices with multiple antennas can use several special techniques with those antennas. These techniques include spatial multiplexing, transmit beamforming, and maximal ratio combining.
Spatial Multiplexing uses advanced algorithms to create separate data streams for each transmitting antenna with MIMO hardware. It requires multiple radio chains, which are effectively radios linked to transmitting (Tx) and receiving (Rx) antennas.
If a spatial multiplexing link is to function at the highest possible data rate, the following factors must be true:
In many systems, the "center" of the network, such as a cell tower, may have many more antennas than the devices connecting to it possess. This configuration allows for multiple user transmissions concurrently and even multiple beamformed transmissions concurrently. An example of this would be massive MIMO in some LTE and 5G deployments allowing for more multi-user MIMO (MU-MIMO) communications to happen concurrently. With a 64x64 massive MIMO configuration (or larger), many devices can communicate with the same receiver concurrently.
Transmit Beamforming (TxBF) is a specialized antenna technology that allows the signal to be focused on a specific destination. To use TxBF, the characteristics of the signal received at the remote wireless node must be known. Special communications called channel sounding occur between the devices to discover this information. TxBF uses multiple antennas and adjusts those antennas to simulate a sector array of antennas.
To better understand TxBF, you must first understand the phenomenon of multipath. Multipath occurs when the transmitted signal reflects, refracts, diffracts, and scatters as it travels. The result is often that more than one copy of the signal arrives at the receiver. If two copies arrive at the receiver in phase with each other, upfade occurs, and the signal strength is increased. If the signals arrive out of phase, the signal can be downfaded (resulting in a loss), corrupted, or canceled out entirely.
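The upfade and downfade behavior can be illustrated with simple phasor arithmetic. This sketch is illustrative only; it combines two equal-strength copies of the same signal arriving with a given phase offset.

```python
import math

def combined_amplitude(phase_offset_deg):
    """Peak amplitude of two equal, unit-strength copies of a signal
    arriving with the given phase offset (phasor sum |1 + e^(j*phi)|)."""
    phi = math.radians(phase_offset_deg)
    return math.sqrt(2 + 2 * math.cos(phi))

print(combined_amplitude(0))              # 2.0 -> in phase: upfade
print(combined_amplitude(120))            # 1.0 -> partially out of phase: downfade
print(round(combined_amplitude(180), 6))  # 0.0 -> out of phase: cancellation
```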
TxBF devices work with each other to determine how to calibrate the transmissions so that multiple signals arrive in phase. Any time a device moves, the TxBF transmission must be recalculated. The result is that TxBF is best for nomadic roaming or in non-congested networks that will not be significantly impacted by extra transmissions for TxBF calibration operations. With nomadic roaming, the clients move but remain stationary most of the time.
Maximal Ratio Combining (MRC) uses antenna diversity to increase the strength of the received signal through combination algorithms. Traditional antenna diversity uses only one antenna to receive a transmission, even if both antennas receive the signal well, because only one radio exists. With MIMO devices, MRC can combine the signals of two antennas to increase signal strength at greater distances. The result is an increase in received signal quality, which can result in higher data rates and throughput.
Now that you understand several antenna system concepts, this section covers the basic types of antennas that are available to you, including their RF propagation patterns and their intended uses. Keep in mind that modern wireless solutions do not always use antennas that fit neatly into one of the three primary categories, but understanding these concepts will help you understand the actual antenna propagation patterns within your solution.
Three primary categories of antennas are used today: omnidirectional, semi-directional, and highly directional.
Also, variations on the implementation and management of these antenna types exist, which results in the sectorized and phased array antennas among others. These antenna types will also be addressed in this section. Finally, we'll review the MIMO (Multiple-Input/Multiple-Output) antenna systems that are used by many wireless solutions today.
Omnidirectional antennas, historically most often external dipole antennas and today typically internal device antennas, have a 360-degree horizontal propagation pattern. In other words, they propagate most of their energy outward in a 360-degree pattern shaped much like a doughnut (a very thick doughnut, in the case of low-gain omnidirectional antennas).
The omnidirectional antenna provides coverage at an angle upwards, downwards and directly out horizontally, as is shown in the Elevation chart in Figure 5.9.
Inspecting the Elevation chart in Figure 5.9 reveals that an omnidirectional antenna propagates most of its energy to the right and left of the antenna (from a side view) and very little energy directly above the antenna. At the same time, the Azimuth chart shows a fairly even distribution around the antenna (from a top-down view). This pattern is the common propagation characteristic of omnidirectional antennas.
Figure 5.10 shows a typical omnidirectional antenna of the dipole design.
The omnidirectional antenna is most commonly used indoors to provide coverage throughout an entire space; however, these antennas have become more and more popular outdoors for hotspots or private outdoor access networks. Omnidirectional antennas may be mounted on poles, masts, towers, or ceilings, or placed on desktops and floors.
They provide coverage on a horizontal plane with some coverage vertically and outward from the antenna. They may provide some coverage to floors above and below where they are mounted in some indoor installations.
Many wireless devices with internal antennas have an omnidirectional propagation pattern. Always consult the vendor specifications to determine the pattern implemented by a given device, when available.
Figure 5.11 shows the LORD MicroStrain WSDA-2000 gateway used to build an 802.15.4 wireless sensor network (WSN). Note the omnidirectional antenna in use.
Because all antennas use passive gain — they focus the RF energy — it is important to consider the impact of this passive gain on any antenna that you implement. In the case of omnidirectional antennas, the result is that devices directly above or below the omnidirectional antenna may have a very weak signal or even be unable to detect the signal. This behavior is due to the primary signal being focused outwardly on a horizontal plane (vertical polarization).
You can use antennas that have higher dBi gain such as 12 or 15 dBi omnidirectional antennas; however, you must keep the impact of these higher gain antennas in mind. As an example, consider the two Elevation charts side-by-side in Figure 5.12. The one on the left is from a 4 dBi omnidirectional antenna and the one on the right is from a 15 dBi omnidirectional antenna. You can see the flattening of the signal. It is very plausible that a higher gain antenna, such as the one on the right, could cause devices on the floors above and below the antenna to lose their connection.
Ultimately, when using omnidirectional antennas, choosing between a higher gain and a lower gain is choosing between reaching devices farther away horizontally (higher gain) or reaching devices farther up or down vertically (lower gain). In most situations, you'll place separate antennas (or devices with attached or internal antennas) on each floor of a multi-floor installation to get the coverage you need.
Semi-directional antennas are antennas that focus most of their energy in a particular direction. Examples include patch, panel, and Yagi antennas. (Yagi is pronounced yah-gee.)
Patch and panel antennas come in flat enclosures and can be easily mounted on walls. Yagi antennas look a lot like TV antennas — a long rod with tines sticking out; however, the Yagi antennas are usually enclosed in a plastic casing that hides this appearance.
Patch and panel antennas usually focus their energy in a horizontal arc of 180 degrees or less, whereas Yagi antennas usually have a coverage pattern of 90 degrees or less. Some Yagi antennas can be categorized as highly directional antennas as well.
Figure 5.13 shows examples of patch, panel and Yagi antennas.
The Azimuth and Elevation charts for Yagi antennas often look the same. They often have the same coverage pattern from the top-down view (horizontal coverage) as they do from the side view (vertical coverage).
Figure 5.14 shows an example coverage pattern of a 9 dBi Yagi antenna. Panel antennas usually have a similar pattern to Yagi antennas except that the "fish-like design" appears quite a bit fatter or thicker.
Semi-directional antennas are useful for providing RF coverage down long hallways or corridors when using Yagi-style antennas. They are also useful when providing RF coverage in one direction using patch or panel antennas. The patch and panel antennas will have some level of energy propagated behind their intended direction; this energy is known as the rear lobe. However, most of the energy will be directed forward, in the intended direction.
For this reason, patch and panel antennas are usually mounted on outside walls facing inward when they are intended to provide coverage inside an area only. Additionally, they can be used on the outside of a building to create an "external-only" coverage area.
Creatively using Yagi, patch, and panel antennas can eliminate the need for large numbers of omnidirectional antennas in many situations. For example, a single patch antenna placed on a wall facing inward may provide all the coverage needed where two omnidirectional antennas would otherwise be required. The energy coming from the patch antenna is forced directionally inward instead of being spread in all horizontal directions equally. The RF energy goes where it is needed instead of a third to half of it being lost outside the walls of your facility.
It is also generally assumed that MIMO is not required when Yagi antennas are used: MIMO patch and panel antennas are available, but a MIMO Yagi is not a typical installation.
An example of a patch antenna device is shown in Figure 5.15. This device is the LORD MicroStrain G-Link 200, a WSN accelerometer sensor used to track movement, impact, and other factors related to motion. It connects to a gateway like the earlier referenced WSDA-2000. It also uses the 802.15.4 protocol for Physical and Data Link layer communications.
A common misconception that enters at this point is the fear that using a Yagi, patch or panel antenna will get the signal to the remote device, but that it will not get the signal from the remote device to the local device (or the Yagi, patch or panel antenna). Stated another way, it is often assumed that you must use semi-directional antennas at the remote device if you use a semi-directional antenna at the local device; however, this is not the case.
I usually explain this by saying, "When you place the megaphone over the antenna's mouth, it is smart enough to move it over its ear to listen." What I mean by this statement is simple: the very quality of the antenna that increases its gain in a particular direction also allows it to "hear" better (have receive gain) from that same direction. Therefore, as Joseph Bardwell says, "If you can hear me, then I can hear you."
Highly directional antennas are antennas that transmit with a very narrow beam. These antennas often look like the satellite dishes popular with people who do not have access to wired cable television or do not desire to use it. They are generally called parabolic dish or grid antennas.
Figure 5.16 shows examples of each antenna type.
Due to the high directionality of these antennas, they are mostly used for point-to-point or point-to-multi-point (PtMP) links. PtMP links will usually use an omnidirectional or semi-directional antenna at the center and multiple highly or semi-directional antennas at the remote sites. They can transmit at distances of 35 miles or more and usually require detailed aiming procedures that include a lot of trial and error. By positioning one antenna according to visual LOS and then making small movements at the other antenna, accurate alignment can usually be achieved.
The grid antenna provides the added benefit of allowing air to pass through the back panels so that the antenna does not shift as much as the parabolic dish in high wind load scenarios.
A sectorized antenna (or sector antenna) is a high-gain antenna that works back-to-back with other sectorized antennas. They are often mounted around a pole or mast and can provide coverage in indoor environments, such as warehouses, or outdoor environments, such as university campuses or hotspots. Figure 5.17 shows an example of sectorized antennas mounted on a pole.
A phased array antenna is a special antenna system that is actually comprised of multiple antennas connected to a single processor. The antennas are used to transmit different phases that result in a directed beam of RF energy aimed at client devices.
When mounting antennas, always abide by vendor recommendations. If the vendor provides a mounting kit, that's usually the best solution to use. However, if you have to create your own kit — because the vendor doesn't provide one, keep the following tips in mind:
As usual, if you have to climb a ladder or hang from a rafter to mount the antenna, please make sure you abide by safety best practices and regulations for your region. In many areas, this recommendation means that we abide by OSHA specifications. Whether OSHA has any influence on your area or not, please be careful, and DON'T break a leg — in this case.
Finally, it is essential to note that the RF signal radiated out of the antenna is quantified as Equivalent Isotropic Radiated Power (EIRP), also called Effective Isotropic Radiated Power, or as Effective Radiated Power (ERP). The two measurements are related but not identical, as the following definitions show.
ERP is the radiated power in the intended direction of propagation by the antenna under test as compared to the output power of a half-wave dipole antenna. That is, how much power would a half-wave dipole antenna have to generate to equal the power of the antenna under test? The answer to that question is ERP.
EIRP is the radiated power in the intended direction of propagation by the antenna under test as compared to the output power of a theoretical isotropic radiator. That is, how much power would an isotropic antenna have to generate to equal the power of the antenna under test? The answer to that question is EIRP.
A half-wave dipole antenna already has a gain of 2.15 dB over an isotropic radiator. Therefore, ERP is always 2.15 dB less than EIRP when measuring the same antenna.
dBd and dBi are used to specify antenna gain and they are relative as the actual output power depends on the input power to the antenna.
ERP and EIRP are absolute as they are a measurement of the output power from the antenna for a fixed input power to the antenna.
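Because EIRP and ERP are simple dB arithmetic, they are easy to sanity-check in code. Here is a small sketch; the 15 dBm transmitter, 3 dB cable loss, and 9 dBi antenna are illustrative values, not figures from any particular product.

```python
def eirp_dbm(tx_dbm, cable_loss_db, antenna_gain_dbi):
    """EIRP = transmitter output - cable loss + antenna gain (dBi)."""
    return tx_dbm - cable_loss_db + antenna_gain_dbi

def erp_dbm(eirp):
    """ERP is referenced to a half-wave dipole (2.15 dBi gain), so it is
    always 2.15 dB lower than EIRP for the same antenna."""
    return eirp - 2.15

e = eirp_dbm(tx_dbm=15, cable_loss_db=3, antenna_gain_dbi=9)
print(e)           # 21.0 dBm EIRP
print(erp_dbm(e))  # 18.85 dBm ERP
```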
RF cables are used to connect the transceiver to the antenna (and possibly other in-series devices). Cables have different levels of loss, and this should be considered when selecting the cabling for your system.
Keep the following factors in mind when selecting RF cables for your implementation:
RF connectors come in many shapes and sizes. The following types are common:
In addition, there are common variations of these types, such as reverse polarity and reverse threading. These different types exist in an effort to comply with FCC and other regulations for components used in a wireless system.
While dongles and pigtails exist, if they are used to convert from one type to another for transmission, they may constitute a breach of regulatory agency regulations.
These connectors are found on the ends of cables, the back of wireless devices, and the ends of antennas (in the case of dipole or rubber ducky antennas).
Figures 5.18-5.21 show examples of common connector types.
RF splitters are installed in series between the transceiver and the antennas. The splitter receives a single input and has two or more outputs. They may be used with sectorized antennas.
Important: RF splitters should be avoided unless absolutely necessary as they create insertion loss.
Much as wireless devices have amplifiers and even attenuators internally, standalone amplifiers and attenuators can be used in-line between a wireless device and an antenna.
Amplifiers are used to increase the range of bridge links in many cases. Attenuators are used to keep the ERP or EIRP within regulatory limits when the output power of the wireless radiator cannot be adjusted low enough.
When using amplifiers, it is important to ensure that the input signal is sufficiently low so that the amplifier does not cause signal compression. Alternatively, you can select an amplifier with circuitry to prevent signal compression.
Signal compression occurs when the input signal is high enough that the "top" of the signal gets "flattened" (compressed) and the quality of the signal is reduced such that it may not be receivable on the other end of the link.
Consider what happens when audio speakers are turned up too high, and the result is distortion and unintelligible sound. A similar phenomenon occurs when amplifiers over-saturate the RF signal.
When an amplifier works appropriately, based on proper implementation, it amplifies the signal linearly. The output signal simply looks like a bigger (stronger) version of the input signal. However, when saturation occurs, the signal is compressed, and the output signal no longer looks like the input signal. Both phase and amplitude may be impacted, though amplitude tends to be impacted the most.
Given that amplitude is part of Quadrature Amplitude Modulation (QAM) modulation schemes, as well as many others, saturation can result in signals that cannot be properly demodulated even though they may appear strong from a pure signal strength perspective — they are distorted.
Figure 5.22 illustrates the impact of compression, resulting from amplifier saturation, on an amplitude modulated signal. Note that the higher amplitude waveform comes out of the amplifier apparently unchanged, but it's actually compressed. The lower amplitude waveform comes out at a higher amplitude. The resulting signal has lost the variance between the peak and low amplitude levels, which can prevent proper demodulation at the receiver.
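Compression can be modeled crudely with a soft-limiting function. The tanh limiter below is a toy model (not a model of any specific amplifier); it simply shows the effective gain falling as the input drive rises, which is the flattening effect described above.

```python
import math

def amplify(sample, gain=4.0, limit=1.0):
    """Soft-limiting amplifier model: nearly linear for small inputs,
    compressed (flattened) as the output approaches the limit."""
    return limit * math.tanh(gain * sample / limit)

for drive in (0.05, 0.2, 0.5):
    out = amplify(drive)
    print(drive, round(out, 3), "effective gain:", round(out / drive, 2))
# Effective gain falls from about 3.9x toward 1.9x as the drive increases.
```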
Wireless links can be created between two devices, among several devices, and on-demand. This section explains the various link types with examples of wireless solutions that utilize them.
A point-to-point (PtP) link is one that exists between exactly two devices. Technically, an ad-hoc network may be a PtP link if only two devices are involved in the network, but the later section on ad-hoc networks explains why it best fits in its own category.
In most cases, PtP links refer to bridge links. A bridge link is used to connect two otherwise disconnected networks. Bridge links can be created using wired or wireless solutions. Wireless bridges may use standard protocols, like IEEE 802.11, or proprietary vendor protocols.
To establish the link, both ends must support the same protocol. If one end of the PtP link is using 802.11ac and the other end is using 802.11af, they will be unable to establish a link, because these two 802.11 Physical Layers use very different frequencies and modulation. The point is simply that both ends of the link must be speaking the same language (protocol) and using compatible hardware (radios).
Figure 5.23 shows a typical scenario where a PtP link would be beneficial. In the image, we have two buildings on either side of a forest. The buildings are high enough that antennas can be placed on top of each building and a bridge link can be established between them.
The actual wireless solution used to implement a PtP link, like the one in Figure 5.23, can vary significantly. Selecting the right solution is a factor of distance, required data rate, network compatibility, and licensing.
Distance is a critical factor. The 802.11a 5 GHz bridges from EnGenius (still available at the time of writing, as many PtP links require only 30-50 Mbps data rates) have a range of up to 1 mile, or 1.6 kilometers. They include IP55-rated enclosures for use outdoors in the weather.
Figure 5.24 shows these bridges.
The next example is an Avalan Wireless AW900xTR, which is a 900 MHz wireless bridging solution that offers only 1.536 Mbps data rate with 935 Kbps throughput. As you can see, this device would likely only be acceptable when very low-speed communications are required. However, it can create links up to 15 miles or 24 kilometers with the right antennas and proper clearance of the signal path.
The AW900xTR also supports local connections when an omnidirectional antenna is used instead, which can suffice for remote connections to some monitoring devices or other low data rate devices. This device is shown in Figure 5.25.
Devices are available offering higher data rates with an acceptable range as well. The vital thing to keep in mind is that two specifications are typically provided (among others) for wireless bridges: the maximum range and the maximum data rate (throughput).
In most cases, specification sheets list the maximum range and the maximum throughput only. The reality is that you may get one but not the other.
Some bridges will rate shift in steps, and others have only two rates: a high rate (like 150 Mbps) and a low rate (like 1 Mbps). Carefully select the bridges that meet your needs.
It is also essential to ensure that the bridge you select is compatible with your network. Given that the vast majority of wireless bridges, regardless of operating frequencies and protocols in use, have Ethernet connectivity, they will typically work with most networks.
However, the Ethernet connection must not become the bottleneck: if you are building a 200 Mbps bridge link, you do not want a 100 Mbps Ethernet connection throttling it.
Licensing is not an issue as long as you select a bridge solution that uses unlicensed frequencies in your regulatory domain.
Common unlicensed frequencies used today include the 900 MHz, 2.4 GHz, 5 GHz, and 60 GHz bands.
However, you must be aware of your local regulations and use only devices authorized for operation in that regulatory domain.
If you choose to use licensed frequencies, the process is far more arduous, and you are typically better off using a managed network operator (MNO) that handles all of the licensing for you.
PtMP links include a central device to which other devices connect. The typical wireless LAN (WLAN) with an access point (AP) and clients is an example of a PtMP implementation. PtMP bridge links can also be implemented if the wireless solution allows for it. The previously referenced AW900xTR solution can implement PtMP configurations.
The MetroLinq 2.5G 60 GHz base station is an example of a PtMP bridging solution.
Shown in Figure 5.26, it has a 120-degree beamwidth, allowing for significant coverage. At the same time, it uses 16 narrower beam steps of 10 degrees each, reducing noise in the communications with remote clients (which are remote bridges in a PtMP bridging configuration).
To assist in the alignment of the base station with remote nodes (which MetroLinq calls clients — even though they are bridges), a mobile iOS application is provided (shown in Figure 5.27).
With this application, you can view:
The bridges can be considered "aligned" when the signal strength is best.
Given that this solution is a 60 GHz solution, range is limited to less than 1 kilometer in most cases, though the vendor indicates that some have successfully established consistent links with 80% efficiency at 2.5 kilometers.
An additional feature of the MetroLinq bridging solution is failover to 5 GHz.
This feature will likely be seen more frequently in 60 GHz devices. The 60 GHz band is susceptible to significantly reduced SNR during rainstorms, while 5 GHz is much less affected.
Therefore, to ensure the link can continue to operate — even though it will drop from a maximum data rate of 4.62 Gbps to 866 Mbps — the link can switch over to the 5 GHz band.
When implementing a PtMP solution, it is important to consider airtime management.
For example, 802.11 uses a contention algorithm to minimize interference among clients. In outdoor bridge PtMP deployments, some algorithm or scheduling solution should be implemented to prevent collisions at the central point of the various bridge links.
The term ad-hoc is a general use term meaning, "when necessary or needed," or "created for a particular purpose as necessary." In other words, it is a temporary and dynamic action or process that is implemented when required.
In wireless networks, an ad-hoc (also spelled ad hoc) wireless network is a group of wireless devices that dynamically create a network without requiring an existing network infrastructure to function.
Ad-hoc networks may be subcategorized as stationary ad-hoc networks and mobile ad-hoc networks (MANETs).
Stationary ad-hoc networks are those formed between devices that do not move at all, or move only very short distances, while participating in the network. The network may be formed and continue to exist for hours or even days.
Examples:
Typically, in a stationary ad-hoc network, all of the participating devices can communicate directly with the other devices. There is no need for a dynamically built routing solution.
The IEEE 802.15.4 standard defines protocols that can be used to implement a peer-to-peer network like the one we're calling a stationary ad-hoc network. All of the devices can communicate directly with all of the other devices in the network.
802.15.4 provides all that is needed to implement a stationary ad-hoc network; however, it does not provide sufficient capabilities alone to implement a full-scale routed ad-hoc network where devices communicate with remote devices using intermediaries.
Mobile ad-hoc networks (MANETs) often require these extra routing layers to be defined.
Example:
A WSN where some or all of the sensors are mobile, and the sensors form an ad-hoc network among themselves (though it is sometimes a mesh network instead) for routing (or hopping) to other destinations in the network.
Imagine a WSN in a large warehouse:
This functionality is the very definition of a MANET.
For industrial, agricultural, military, emergency response, vehicular, healthcare, and many other ad-hoc implementations, the MANET model is beneficial.
The IEEE 802.15.4 standard defines protocols that are often used in ad-hoc networks; however, it does not define how multi-hop communications might occur when the Physical and Data Link layers of 802.15.4 are used.
Such capabilities are often provided by higher-layer protocols, such as Zigbee or 6LoWPAN, or by proprietary solutions like MiWi.
Two essential differentiators exist between an ad-hoc network and a mesh network. First, mesh network nodes typically have more than one radio today. With this implementation, the mesh nodes can communicate with multiple other nodes concurrently. Second, one or more mesh node(s) will have a connection to the rest of the network and possibly the Internet. Typically speaking, the primary difference is that a network can be ad-hoc or MANET without connecting to any other network, but a mesh network is usually connected to other networks.
This primary point of difference is where much confusion enters as you begin to evaluate vendor equipment.
It is most important for the wireless solutions administrator to understand the vendor's architecture and ensure it meets the requirements of the desired solution.
Whether the vendor calls it by the right name or not, you must be able to select and implement the right technology for your needs.
Mesh networks are made up of mesh nodes. These nodes may have different roles:
Mesh nodes providing connections to other networks are referred to as gateways, portals, or root nodes, depending on the vendor terminology and the mesh protocol implemented. Examples of mesh networks include Zigbee and Thread sensor meshes and 802.11s-based Wi-Fi mesh systems.
On-demand wireless link types are those that are created, data is transmitted, and the link is torn down (ended). Many wireless networks are defined as on-demand, and the meaning is simply that they are there when users want to use them.
For the CWISA exam, we are referencing devices that enter a sleep state such that they may not connect for several minutes, hours, or even days, until the actual need to transmit data arises.
Such behavior can be seen with some wireless sensor devices. For example, a motion-tracking device that is battery powered needs to be operational with those batteries for as long as possible.
Powering the wireless module down completely until motion is detected can be a useful power-saving mechanism.
When motion is detected, the device powers the wireless module back up, transmits the captured data, and then returns to its sleep state.
In other scenarios, wireless sensors may store data locally and only transmit the data at regular intervals, such as every five minutes, every hour, or even once each day.
Environmental sensors may fit into this category if real-time monitoring is not required.
Example: Environmental research projects may require only that the data is captured for analysis once each week or month. Given that the data is analyzed weekly or monthly, a single transmission of the data each day will suffice.
Using this on-demand model, battery life can be extended to years for many sensors.
The final topic of this chapter is the RF device types. The purpose of this section is to familiarize you with common terminology related to wireless devices.
We will explore:
In networking terminology, any device that connects to the network and participates in any way is a node.
Network nodes include:
Some define a node only as an interconnection point on the network, but the more common definition is that a node is a device connected to the network. Another common term for a node is endpoint, but this term is not fully inclusive and is not synonymous with node.
A router is a node on the network, but it is not an endpoint.
Nodes include endpoints, but they also include other devices that participate in the network, including:
Generally speaking, if the device is active on the network, it is a node. The remaining device types are particular types of nodes.
Traditionally, networking professionals think of infrastructure devices as:
However, we are now far past the wired networking age, and many wireless devices can also be considered infrastructure devices.
Such devices are not clients on the network, but instead, they provide network access for the clients, or they coordinate or control network operations.
In a wireless network, the following devices may be considered infrastructure devices:
Two basic kinds of mesh nodes or devices exist:
These devices:
These devices:
In this scenario:
Notice:
Client devices are those that use the resources of the wireless network but do not help to build the wireless network.
Therefore, devices like:
can all be client devices.
Purpose of Client Devices in Wireless Networks
When deploying any wireless solution, the administrator must remember that the network is being built to support the use of the network.
When this includes client access, the client devices should be evaluated so that the network can be designed for optimal performance.
When possible, the following factors should be discovered:
Supported protocols
Supported speeds
Supported security
Data requirements
Mobility requirements
Location
In some cases, more information will be required, but these six items form the foundation of what must be known about all clients to support them well.
In this chapter, you learned about RF hardware, including the components used within wireless devices and the various device types and their uses.
You also explored:
This information will help you to better understand the chapters still to come as they explain specific use cases of wireless technologies, including:
This chapter is not about a single technology that is a short-range, low-rate, and low-power solution. Instead, it explains the various technologies that fit into one or more of these categories as well as some long-range protocol examples.
In later chapters, wireless sensor networks are discussed in more detail than before and these IoT network types often employ the solutions addressed here.
Before we discover specific network types that fit into one or more of these categories, we will explore the factors that impact speed, range, and power consumption in RF communications.
Several factors impact the speed of wireless transmissions. This speed is known as the data rate. The primary factors influencing speed are modulation, coding, channel bandwidth, signal-to-noise ratio (SNR), and the number of spatial streams.
In this section, each one will be explained with sufficient detail to allow you to understand data rates regardless of the wireless solution in question.
The modulation used within a system significantly impacts the data rates available. In digital wireless communications, the concept of a symbol is fundamental. A symbol is, simply stated, a waveform that represents one or more bits. Common modulation schemes map bits to symbols at levels such as BPSK (1 bit per symbol), QPSK (2 bits), 16-QAM (4 bits), 64-QAM (6 bits), and 256-QAM (8 bits).
The first key to accomplishing higher data rates through various modulation methods is increasing the number of individual waveforms that can be mapped to a set of bits.
In a perfect environment (extremely low noise), this could be extended to even higher-order schemes, such as 1024-QAM (10 bits per symbol) or beyond.
Since distinguishing many amplitude levels becomes difficult in noisy environments, we combine amplitude modulation with phase modulation, as in Quadrature Amplitude Modulation (QAM). This results in more bits per symbol.
The time slot, slot time, or symbol period is the minimum window of time defined in the modulation scheme to look for symbols.

For example, with a time slot of 0.5 milliseconds and 7 symbols per time slot, you get 14 symbols in 1 millisecond, or 14,000 symbols per second.
Depending on bits per symbol, this yields 14 kbps at 1 bit per symbol, 28 kbps at 2 bits per symbol, 56 kbps at 4 bits per symbol, and so on.
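That arithmetic generalizes easily. A short sketch using the example's numbers:

```python
SLOT_SECONDS = 0.0005    # 0.5 ms symbol period from the example
SYMBOLS_PER_SLOT = 7

symbol_rate = SYMBOLS_PER_SLOT / SLOT_SECONDS   # 14,000 symbols per second

for bits_per_symbol in (1, 2, 4, 6, 8):
    kbps = symbol_rate * bits_per_symbol / 1000
    print(f"{bits_per_symbol} bits/symbol -> {kbps:g} kbps")
# 1 -> 14, 2 -> 28, 4 -> 56, 6 -> 84, 8 -> 112 kbps
```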
Advanced modulation schemes like OFDM use multiple carriers (subcarriers), and each subcarrier carries symbols concurrently, so the per-carrier rate is multiplied by the number of subcarriers. In the example used here, the resulting rate is 33,600 kbps, or 33.6 Mbps.
Modulation methods impact not only data rates but also range and resiliency. More complex modulation yields higher data rates but requires a higher SNR and typically a shorter range; less complex modulation yields lower data rates but tolerates more noise and can operate over longer distances.
Another factor in the speed of wireless links is a concept called coding. Coding is the process used to select the transmitted bits that represent the information bits. That is, if I desire to send the value 1, I may actually transmit 1010. If I desire to send a 0, I may actually transmit 0101. Why do this? The answer is found in stability.

For example, if I transmit a single 1 for a 1 and the transmission is interfered with, I cannot properly demodulate the data. However, if I transmit a 1010 pattern to represent a 1, at the receiver I know that a demodulated 1 - - 0 is equal to a 1 even though I couldn't demodulate the second and third values. I also know that a - - - 0 must be a 1 as well (only the 1010 pattern ends in a 0), while a - - - 1 must be a 0, even though I couldn't demodulate the first, second, or third values.
In wireless modulation methods, coding is defined by a coding rate, which is a ratio of information bits to transmitted bits. For example, a coding rate of 1/4 indicates that I am sending one information bit for every four transmitted bits. A coding rate of 3/4 indicates that I am sending three information bits for every four transmitted bits. The latter is better for speed, and the former is better for resiliency.
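The 1010/0101 example can be expressed directly in code. In this sketch, '-' marks chips that could not be demodulated; it shows both why partially received patterns can still be decoded and when they cannot.

```python
CODEWORDS = {"1": "1010", "0": "0101"}   # the 1/4 coding rate example above

def decode(received):
    """Decode a 4-chip pattern; '-' marks chips that failed to demodulate."""
    matches = [bit for bit, word in CODEWORDS.items()
               if all(r in ("-", w) for r, w in zip(received, word))]
    return matches[0] if len(matches) == 1 else "?"  # "?" = ambiguous

print(decode("1--0"))  # 1 : only 1010 fits
print(decode("---0"))  # 1 : only 1010 ends in a 0
print(decode("---1"))  # 0 : only 0101 ends in a 1
print(decode("----"))  # ? : nothing demodulated, undecodable
```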
Like modulation itself, noisy environments and long-range links can benefit from coding rates that result in increased resiliency.
The next factor in wireless link speeds is the channel bandwidth. Generally speaking, wider channels result in higher speeds, and narrower channels result in lower speeds. Of course, this assumes that the same modulation and coding rate is used in the wider or narrower channels.
Many low-rate networks use channel widths of less than 5 MHz. High-rate networks typically use channel widths of 20 MHz or more. Remember that OFDM modulation methods use subcarriers measured in kHz within the defined channel bandwidth.
Signal-to-Noise Ratio (SNR) and Signal-to-Interference and Noise Ratio (SINR) impact speeds in a link as well. While oversimplifying the situation, links with better SNR can transmit faster than links with lower SNR. At more advanced levels of knowledge, it is good to know the details behind this, but, for now, this will suffice.
Wider channels often require higher SNR to achieve maximum data rates. This fact is important and impacts decisions in different network types.
For example, in 802.11 networks, you can use 20, 40, 80, or even 160 MHz channels. The wider channels require increased SNR to maintain high-rate links.
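If you want to reason about this relationship programmatically, the sketch below picks the widest channel whose SNR requirement is met. The threshold values are hypothetical placeholders for illustration only; consult vendor receive-sensitivity tables for real figures.

```python
# Hypothetical minimum SNR (dB) per channel width -- placeholder values.
MIN_SNR_DB = {20: 15, 40: 18, 80: 21, 160: 24}

def widest_channel_mhz(measured_snr_db):
    """Widest channel whose (hypothetical) SNR requirement is met; 0 if none."""
    usable = [width for width, snr in MIN_SNR_DB.items()
              if measured_snr_db >= snr]
    return max(usable, default=0)

print(widest_channel_mhz(22))  # 80
print(widest_channel_mhz(16))  # 20
```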
Spatial Multiplexing (SM) uses advanced algorithms to create separate data streams for each transmitting antenna. It requires multiple radio chains, which are effectively radios linked to transmitting (Tx) and receiving (Rx) antennas. Spatial multiplexing is the use of multiple, concurrent spatial streams in transmission.
Considering what we've covered so far, including modulation, coding, channel bandwidth, and SNR/SINR, the added element of spatial streams is a multiplying factor.
Generally speaking, when you have multiple spatial streams available, you calculate the data rate by multiplying a single stream data rate times the number of spatial streams.
If a single stream data rate is 24 Mbps, then three streams would provide 72 Mbps.
It is important to remember that the use of multiple spatial streams may increase the required SNR for maximum data rates in some wireless solutions.
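Putting the speed factors together, a rough data-rate calculator might look like the following. The symbol rate, 64-QAM mapping, 3/4 coding rate, and 48 subcarriers are illustrative values, not figures from any particular standard's rate tables.

```python
def data_rate_bps(symbol_rate, bits_per_symbol, coding_rate,
                  subcarriers=1, spatial_streams=1):
    """Raw data rate as the product of the speed factors in this chapter."""
    return (symbol_rate * bits_per_symbol * coding_rate
            * subcarriers * spatial_streams)

# Hypothetical OFDM link: 14,000 symbols/s, 64-QAM (6 bits), 3/4 coding,
# 48 subcarriers -- then the same link with three spatial streams.
print(data_rate_bps(14_000, 6, 3 / 4, subcarriers=48) / 1e6)  # 3.024 Mbps
print(data_rate_bps(14_000, 6, 3 / 4, 48, 3) / 1e6)           # 9.072 Mbps
```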
Now that you understand the individual concepts that impact wireless link speeds, you can select the appropriate gear for a specific scenario.
Consider the following example specifications for a link:
Upon investigation, you find the possible solution listed in Table 7.1.
Because a wireless sensor is in question, low-rate communications are acceptable. The range requirement of 200 meters is easily met by the 1200-foot range of the devices shown (200 meters is approximately 660 feet).
Of course, the selection of a solution is about more than simply meeting the technical requirements of a single link. The CWISA must consider other devices that may be required to communicate on the network and ensure that the selected solution can support the other devices as well.
If that is true, the solution shown in Table 7.1 meets the specifications and may be a good choice for your scenario.
The range of a wireless connection is a reference to the physical distance over which a connection may be maintained.
Three primary factors impact the maximum capable range of a wireless link, and one impacts the desired range.
The capable range is the distance at which a link can be maintained regardless of performance.
The desired range is the distance at which a link can be maintained while achieving performance goals.
The first three factors, which determine the capable range, are output power, frequency, and antenna gain. The fourth factor, which determines the desired range, is the performance requirements of the link.
The output power is the actual amplitude of the RF signal generated by the radio before entering the antenna. You can adjust the output power of many devices; however, some devices provide no control over this attribute.
When no control is provided, you must know the fixed output power setting and ensure the receiver on the other end matches it as closely as possible.
Higher frequencies are more challenging to receive at greater distances than lower frequencies. This reality is why so many long-range communication systems use low frequencies like 433 MHz and 900 MHz.
Higher gain antennas can receive weaker signals and transmit signals farther. The highest gain levels typically come from directional antennas.
To increase range in a wireless link, consider replacing the antennas with high gain antennas. However, be sure to understand your local regulations related to antenna replacement.
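The combined effect of frequency and antenna gain on range can be quantified with the standard free-space path loss (FSPL) formula. The link values below are illustrative only; real links add margin for fading and obstructions.

```python
import math

def fspl_db(distance_km, freq_mhz):
    """Free-space path loss in dB (constant 32.44 for km and MHz)."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

def rx_dbm(tx_dbm, tx_gain_dbi, rx_gain_dbi, distance_km, freq_mhz):
    """Received signal level for a simple line-of-sight link budget."""
    return tx_dbm + tx_gain_dbi + rx_gain_dbi - fspl_db(distance_km, freq_mhz)

# Identical 5 km links at 915 MHz and 5800 MHz (illustrative values):
print(round(rx_dbm(20, 6, 6, 5, 915), 1))   # -73.6 dBm
print(round(rx_dbm(20, 6, 6, 5, 5800), 1))  # -89.7 dBm, ~16 dB weaker
```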
Finally, the range of an RF link must be considered in relation to performance requirements.
In many cases, it is simply impossible to achieve high data rates with very long-distance links. You will either have to accept lower data rates or implement hops along the path.
For example, instead of creating one 50-kilometer link, you might create one 15-kilometer link to another 20-kilometer link to another 15-kilometer link.
Such configurations may result in significantly improved performance.
As you will see later in this chapter, LoRaWAN and Sigfox are both IoT protocols that offer multi-kilometer ranges.
Power must be considered from two perspectives: the output power of the RF signal (in mW, W, or dBm) and the power consumed by the device. We are really discussing low-power wireless and wireless device power management.
The phrase low-power wireless is usually a reference to the output power of the wireless solution.
Low-power solutions include technologies such as Bluetooth Low Energy (BLE), NFC, and Zigbee and other 802.15.4-based protocols.
Output power in low-power wireless solutions is typically in the 1 to 100 mW range with ultra-low-power devices in the 1 to 10 mW range in many cases.
For those systems supporting variable output power, increased power on both ends of the link results in increased link range. Decreased power results in decreased range.
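Because output power is quoted in both milliwatts and dBm, quick conversions are handy. A minimal sketch:

```python
import math

def mw_to_dbm(mw):
    return 10 * math.log10(mw)

def dbm_to_mw(dbm):
    return 10 ** (dbm / 10)

print(mw_to_dbm(1))              # 0.0 dBm
print(mw_to_dbm(100))            # 20.0 dBm (top of the 1-100 mW range)
print(round(dbm_to_mw(14), 1))   # 25.1 mW
```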
The other perspective of RF and power is power consumption.
When selecting a wireless technology, it is important to consider the power consumption requirements, and this is particularly true when devices will be powered by a battery.
Technologies like NFC and BLE are among the lowest power-consuming solutions.
Technologies like Wi-Fi and Cellular consume much more power.
In a wireless communications device, several components result in power consumption: the radio chains (transmitters and receivers), amplifiers, processors, and memory.
When a device has more of these components, it consumes more power.
This fact is why MIMO 802.11 APs require more power than SISO APs. MIMO APs have more radio chains resulting in more consumed power.
This power issue is also why most cellular phones with integrated Wi-Fi do not implement the maximum number of spatial streams that could be supported by the standard and available chipsets.
Power management is very important for battery-powered devices.
Now that you understand some of the fundamental factors related to speed, range, and power, you can begin to explore various wireless solutions that may fit into one or more of the categories of short-range, low-rate, and low-power networks.
The first network type we will explore is the 802.11 Wi-Fi network. This network fits into the short-range and low-rate categories, depending on the Physical Layer (PHY) implemented.
For example, when using the older Direct Sequence Spread Spectrum PHY (802.11-Prime or 802.11-1997), the maximum data rate is 2 Mbps, which is far lower than the possible 1+ Gbps supported in 802.11ac and 802.11ax.
This section will summarize the various PHYs supported in 802.11 networks.
IEEE standards are managed by working groups. For example, there is an 802.3 working group and an 802.11 working group. The working group oversees the creation and maintenance of the standard.
When the initial standard is created, several drafts are generated, and feedback is received and incorporated as needed in the drafting process. When the final draft is ratified (approved by a vote of active members), it becomes a standard.
After a standard exists (i.e., it has been ratified), it must be maintained.
Figure 7.1 illustrates the lifecycle for standards used by the IEEE.
Draft amendments are created and may go through several drafts. When a draft amendment is ratified, it becomes part of the standard.
A ratified amendment may add features to the standard or it may add completely new ways of communicating on the network (known as physical layers [PHYs]).
For example, 802.11n was an amendment that added the High Throughput (HT) PHY to the standard.
The phrase "802.11 as amended" refers to the most recent revision document (currently 802.11-2020) plus any ratified amendments released after the revision.
Therefore, 802.11ax modified 802.11-2020 to add the HE PHY to the standard, and the standard is actually 802.11-2020 plus the changes introduced in the 802.11ax amendment among others.
802.11 wireless devices can operate in one of five primary frequency bands: 2.4 GHz, 5 GHz, 6 GHz, Sub-1 GHz (S1G), and 60 GHz.
Traditional WLAN devices operate in either the 2.4 GHz or 5 GHz frequency bands, with the newest devices also supporting 6 GHz.
These devices include:
Band | Compatible Standards |
---|---|
2.4 GHz | 802.11b, 802.11g, 802.11n, 802.11ax |
5 GHz | 802.11a, 802.11n, 802.11ac, 802.11ax |
6 GHz | 802.11ax (at the time of writing) |
Note: 2.4 GHz-only devices cannot communicate with 5 GHz-only devices.
Therefore, a device operating as a 2.4 GHz-only 802.11n device cannot communicate with a device operating as a 5 GHz 802.11n device, because they communicate with different frequencies.
The frequency ranges specifically used for 2.4 GHz and 5 GHz are as follows:
2.4 GHz uses the range from 2400 MHz (2.4 GHz) to 2500 MHz (2.5 GHz), and the actual usage range for 802.11 channels is from 2.401 GHz to 2.495 GHz.
5 GHz uses the range from 5000 MHz (5 GHz) to 5835 MHz (5.835 GHz), and the actual usage range for 802.11 channels is from 5.170 GHz to 5.835 GHz, though not all areas within this range are used.
Within these ranges, specific portions are defined as channels of 22 MHz for the oldest wireless devices and 20 MHz, or a multiple of 20 MHz (e.g., 40 MHz, 80 MHz, 160 MHz), for newer devices.
The 2.4 GHz and 5 GHz bands are used differently depending on a region's radio authorities and their adoption of a set of rules within the regulatory domain.
It is important to know which frequencies are available in the regulatory domain where the wireless LAN (WLAN) will be installed.
As a CWISA, you may want to pay attention to this to avoid ordering mishaps that could prevent your customer from taking full advantage of the products you are offering.
The S1G (Sub-1 GHz) bands are used for both IoT connectivity and extended-range, low-rate WLAN connections. The specific frequency ranges used vary based on the regulatory domain in which the network operates.
More details about these bands will be provided in the next section of this chapter (802.11 PHYs).
The 60-GHz band, with respect to the 802.11 standard, is used only for the Directional Multi-Gigabit (DMG) PHY and is not covered in detail in this book or the CWISA exam.
However, it is important to know that it is mostly used with:
The DMG PHY was first defined in 802.11ad and is now part of the 802.11-2020 standard.
It offers multi-gigabit data rates comparable to 802.11ac (VHT), but in the 60-GHz band.
The 802.11-2020 standard defines several different physical layers (PHYs) that provide different data rates, channel widths, and operational frequency bands. Additionally, the 802.11ax amendment defines an added PHY, and more PHYs will be added in the future.
The CWISA exam requires that you are aware of the basic features of the PHYs defined in 802.11-2020 and 802.11ax, including the following: DSSS, HR/DSSS, OFDM, ERP, HT, VHT, S1G, and HE.
Remember that data rates and throughput are two very different things in wireless networks.
The channel width and modulation used significantly impact the actual data rates that are available. Each PHY supports specific data rates based on the combination of modulation, coding rate, channel width, and number of spatial streams.
Data rates are specific and not continuously variable (e.g., 11 Mbps → 5.5 Mbps → 2 Mbps).
The Direct Sequence Spread Spectrum (DSSS) PHY is the oldest PHY and is still supported by modern 802.11 devices; it is supported by all 802.11 devices operating in the 2.4 GHz band and provides 1 and 2 Mbps data rates. The High Rate/DSSS (HR/DSSS) PHY, released with the 802.11b amendment in 1999, extended DSSS with 5.5 and 11 Mbps rates.
LAN administrators can disable backward compatibility by disallowing lower data rates — this is a configuration option, not a radio/device limitation.
With all data rates enabled, newer 2.4 GHz 802.11 devices can communicate with all older devices.
The Orthogonal Frequency Division Multiplexing (OFDM) PHY was the first to support 5 GHz band operations. This PHY was made available through the 802.11a amendment in 1999.
Note: OFDM does not support 1, 2, 5.5, or 11 Mbps rates. It is not backward compatible with DSSS or HR/DSSS.
The Extended Rate PHY (ERP) was introduced to extend OFDM modulation into the 2.4 GHz band. It was defined in the 802.11g amendment.
Note: All devices implementing the ERP PHY also implement DSSS and HR/DSSS PHYs to support backward compatibility.
The High Throughput (HT) PHY was introduced in the 802.11n amendment and offers several advantages over older PHYs.
The Very High Throughput (VHT) PHY was introduced in the 802.11ac amendment.
Note: Wider channels increase data rates but require cleaner RF environments and higher SNR.
The data rate available for a link is constrained by the least capable component of that link.
Example: if an AP supports three spatial streams and a wide channel, but the client radio supports only one stream and a narrower channel, the link operates at the client's lower capability.
Real-world throughput rarely matches marketing claims.
Typical real-world uses include sensor networks, smart metering, and other long-range, low-bandwidth IoT connections.
Note: S1G is ideal for battery-powered, low-bandwidth, long-range IoT devices.
The IEEE 802.15.4 standard defines the foundation for Low-Rate Wireless Personal Area Networks (LR-WPANs). It is the basis for several well-known protocols used in IoT and industrial environments.
(Figure 7.11 illustrates these relationships)
As of IEEE 802.15.4-2015, the standard specifies 18 different PHYs.
Usage/Application | Number of PHYs | Notes |
---|---|---|
Smart Utility Networks (SUN) | 3 | Tailored for energy & utility networks |
Low-Energy, Critical Infrastructure Monitoring (LECIM) | 2 | Used in monitoring critical infrastructure with low power consumption |
Rail Communications and Control (RCC) | 2 | Specialized PHYs for railway environments |
General Purpose | Remaining PHYs | Support varying frequency bands and network types |
Frequency Band | Notes |
---|---|
2450 MHz | Used by CSS PHY specifically |
Sub-GHz Bands (various) | MSK PHY can operate across these, subject to local regulations |
Note: Some PHYs are designed to operate in any frequency band allowed by regional regulations, while others are locked to specific frequencies.
IEEE 802.15.4 provides a flexible framework for low-rate, low-power wireless networks with extensive global frequency support.
PHY Name | Description |
---|---|
O-QPSK | A DSSS PHY operating in the 780, 868, 915, 2380, and 2450 MHz bands. |
BPSK | A DSSS PHY operating in the 868 and 915 MHz bands. |
ASK | A parallel sequence spread spectrum (PSSS) PHY using amplitude shift keying in the 868 and 915 MHz bands. |
CSS | A chirp spread spectrum (CSS) PHY operating in the 2450 MHz band. |
HRP UWB | A burst position modulation (BPM) and BPSK modulation ultra-wideband (UWB) PHY operating in the sub-1 GHz and 3 to 10 GHz bands. |
MPSK | An M-ary phase shift keying (MPSK) modulation PHY operating in the 780 MHz band. |
GFSK | A gaussian frequency shift keying (GFSK) PHY in the 920 MHz band. |
MSK | A minimum shift keying (MSK) modulation PHY. |
LRP UWB | A low rate pulse (LRP) UWB PHY. |
SUN FSK | An FSK modulation PHY in support of SUN applications. |
SUN OFDM | An OFDM modulation PHY in support of SUN applications. |
SUN O-QPSK | The O-QPSK PHY with modifications to support SUN applications. |
LECIM DSSS | A DSSS PHY in support of LECIM applications. |
LECIM FSK | An FSK PHY in support of LECIM applications. |
TVWS-FSK | An FSK PHY operating in the Television Whitespace (TVWS) bands. |
TVWS-OFDM | An OFDM PHY operating in the TVWS bands. |
TVWS-NB-OFDM | A narrow band OFDM PHY operating in the TVWS bands. |
RCC LMR | A land mobile radio (LMR) for use in RCC applications. |
RCC DSSS BPSK | A DSSS BPSK PHY for use in RCC applications. |
Band Reference Name | Band Frequency Range (MHz) |
---|---|
169 MHz | 169.400-169.475 |
433 MHz | 433.05-434.79 |
450 MHz | 450-470 |
470 MHz | 470-510 |
780 MHz | 779-787 |
863 MHz | 863-870 |
868 MHz | 868-868.6 |
896 MHz | 896-901 |
901 MHz | 901-902 |
915 MHz | 902-928 |
917 MHz | 917-923.5 |
920 MHz | 920-928 |
928 MHz | 928-960 |
1427 MHz | 1427-1518 |
2380 MHz | 2360-2400 |
2450 MHz | 2400-2483.5 |
HRP UWB sub-gigahertz | 250-750 |
HRP UWB low band | 3244-4742 |
HRP UWB high band | 5944-10,234 |
LRP UWB | 6289.6-9185.6 |
The most commonly used bands are the 868 MHz, 915 MHz, and 2400 MHz bands, in large part because those bands may be used by Zigbee devices. Accordingly, the modulation methods used in those Zigbee solutions are DSSS (BPSK) and O-QPSK. O-QPSK operating in 2.4 GHz provides a data rate of 250 kbps, while BPSK DSSS provides 20 kbps in the 868 MHz band and 40 kbps in the 915 MHz band.
An 802.15.4 network forms either a star topology network or a peer-to-peer topology network. Figure 7.12 shows these two possible network topologies as defined in the standard.
As you can see, two basic device types participate in an 802.15.4 network: Full Function Device (FFD) and Reduced Function Device (RFD). An FFD can become a Personal Area Network (PAN) Coordinator and an RFD cannot. RFD units use the network for communications, but do not control the network in any way.
Should the PAN Coordinator leave the network, another FFD unit can take over PAN Coordinator operations. The PAN Coordinator starts the network and, in a star topology, all communications go through it. In a star topology, the PAN coordinator would usually be mains-powered (plugged into power or powered by Power over Ethernet). Devices not acting as the PAN coordinator may be battery-powered or mains-powered as well.
In a peer-to-peer topology, devices can communicate directly with each other. Generally, the first device communicating on the channel becomes the PAN coordinator, but if this device should leave the network, another FFD may be elected to the role.
Cluster tree networks are also supported. Such networks are built by interconnections among multiple PANs such that communications can be routed from a device in one PAN to a device in another PAN. This structure allows for coverage of much larger areas. Figure 7.13 shows an example of a cluster tree network. The cluster tree is built when the PAN coordinator in the first cluster (PAN) instructs another FFD to become the PAN coordinator of a new cluster adjacent to the existing one. This pattern can repeat until several clusters exist. Devices may then join their nearest cluster.
Very few devices exist on the market that are simply called 802.15.4 devices. In most cases, they either implement higher layer standard protocols like Zigbee or 6LoWPAN or proprietary protocols like MiWi. However, at the radio level, these systems are using 802.15.4 communications.
Early on, Bluetooth was a peripheral connectivity solution. You could connect mice, keyboards, audio headsets, headphones, speakers and other such devices.
Enhancements have been made to BLE over the years to the point where it can function as an IoT connectivity solution and provide links that, in some cases, have spanned 2 kilometers.
Several Bluetooth enhancements have provided modern applications of this technology, including:
First introduced in 2010 with Bluetooth 4, this was the foundation that eventually led to beacons and other technologies.
Bluetooth 5 enhanced BLE to provide twice the speed and four times the range. Before Bluetooth 5, the maximum speed was 1 Mbps, but version 5 added a 2 Mbps PHY as well. The new coded PHY was also added with Bluetooth 5 providing four times the range. You cannot get the extended range at the same time as the highest data rates, but both options are available in an either/or implementation option.
BLE beacons have been in Bluetooth for nearly a decade and are the functional foundation of iBeacons and other beaconing technologies used mostly for locationing.
The best location systems use trilateration to locate the target, which means that a beacon is heard by multiple Bluetooth sensors and those readings are combined to calculate its location. This locationing method can achieve accuracy of between 1 and 2 meters.
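To make trilateration concrete, the following is a minimal Python sketch, assuming three fixed sensors with known coordinates and distance estimates already derived from signal measurements (the anchor positions and distances here are hypothetical):

```python
import numpy as np

def trilaterate(anchors, dists):
    # Estimate (x, y) from anchor positions and measured distances by
    # subtracting the first circle equation from the others, which
    # linearizes the system for a least-squares solution.
    (x1, y1), d1 = anchors[0], dists[0]
    A, b = [], []
    for (xi, yi), di in zip(anchors[1:], dists[1:]):
        A.append([2 * (xi - x1), 2 * (yi - y1)])
        b.append(d1**2 - di**2 + xi**2 - x1**2 + yi**2 - y1**2)
    pos, *_ = np.linalg.lstsq(np.array(A, dtype=float),
                              np.array(b, dtype=float), rcond=None)
    return pos

# Hypothetical sensors at three corners of a 10 m x 10 m room, each
# estimating roughly 7.07 m to a target at the center of the room.
print(trilaterate([(0, 0), (10, 0), (0, 10)], [7.07, 7.07, 7.07]))  # ~[5, 5]
```

In real deployments, distances derived from received signal strength are noisy, so filtering and calibration are layered on top of this basic math.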
Accuracy levels may still not be to the level desired and so the Bluetooth Special Interest Group (SIG) has added Direction Finding to Bluetooth 5.1.
New in Bluetooth 5.1, direction finding will slowly make its way into production devices and our networks.
Direction Finding uses Angle of Arrival (AoA) and Angle of Departure (AoD) for greater accuracy in location tracking and even movement tracking. In some environments, Bluetooth 5.1 devices can be located to within 10 centimeters with 86% accuracy.
Bluetooth mesh implements a many-to-many network that can scale from tens to thousands of devices communicating with one another.
Bluetooth mesh was adopted by the Bluetooth SIG in 2017 as a separate specification layered on BLE (Bluetooth 4.0 and later), so most newer Bluetooth devices have the radio capabilities to support it. However, to support it, some devices may require upgraded firmware or software stacks.
Bluetooth mesh is a networking technology layered over Bluetooth core communications. It uses BLE for communications and this is why field devices may be upgraded to Bluetooth mesh if they have sufficient processing power and memory.
Bluetooth effectively supports three topologies today:
A point-to-point topology is the oldest solution in Bluetooth and is what is used by peripherals for connectivity. It can be used for data transfer as well, with speeds up to 3 Mbps.
The broadcast topology is part of BLE and provides the beaconing and advertisement features for applications that provide information to user devices and notifications as well.
The mesh topology is the modern game-changer in Bluetooth and may well introduce new opportunities for entrance into the IoT market.
Table 7.4 summarizes important information about different implementations of Bluetooth.
Specification | Classic Bluetooth | Bluetooth Low Energy (BLE) | BLE Long Range |
---|---|---|---|
Range | 100 meters | 100 meters | 400 meters |
Max Range (Free Space) | 100 meters | 100 meters | 1000 meters |
Data Rate | 1-3 Mbps | 1 Mbps | 2 Mbps |
Application Throughput | 0.7-2.1 Mbps | Up to 305 kbps | Up to 1.36 Mbps |
Topologies | Point-to-Point | Point-to-Point, Broadcast | Point-to-Point, Broadcast, Mesh |
LoRa and LoRaWAN are not necessarily synonymous. LoRa is the base on which LoRaWAN is built. LoRa provides the machine-to-machine communications and LoRaWAN builds the network at large. LoRaWAN fits into the Low-Power WAN (LPWAN) category (like NB-IoT and LTE-M). The LoRa Alliance promotes and evolves the LoRaWAN open standard.
A LoRaWAN network is comprised of gateways that communicate using IP connections with the rest of the non-LoRa network and that communicate using single-hop LoRa or FSK links with end-devices. All communications between end-devices and gateways use LoRa (with CSS modulation) or FSK on the radio channel; at the gateway, the communication is encapsulated in IP and forwarded to the server or cloud managing the devices.
LoRaWAN functionality is defined in classes. All LoRaWAN devices must be able to perform Class A functionality at a minimum. This provides a baseline whereby all LoRaWAN devices will be able to perform at least minimal communications with all other LoRaWAN devices.
The differences between the defined classes as of the 1.0.3 specification from 2018 are:
Class A: All devices support bi-directional communications such that an uplink transmission from an end-device is always followed by two short downlink receive windows. Class A devices cannot receive communications from the network (downlink) except for the time during the receive windows immediately after an uplink. Downlink communications from the server will have to wait for the next uplink. These devices consume the least power.
Class B: These devices provide more receive windows. The Class A receive process is still supported, but Class B devices open additional receive windows at scheduled times. A time synchronization beacon is sent from the gateway to provide the scheduling for Class B devices. These devices consume moderate power.
Class C: These devices have open receive windows. They are unable to receive when transmitting, but other than that time, they can receive at any time. These devices consume the most power.
These device classes are illustrated and summarized in Figure 7.14.
The LoRa single-hop radio modulation results in data rates from 0.3 kbps to 50 kbps, fitting LoRa nicely into the low-rate wireless category. LoRaWAN is commonly used for IoT devices, such as sensors, that require only the transmission and reception of small amounts of information and those transmissions may be infrequent in many scenarios.
The radio side of a LoRaWAN (LoRa) operates in different frequency bands depending on the regulatory domain. Bands in common use include 430 MHz, 433 MHz, 868 MHz, and 915 MHz.
Figure 7.15 illustrates the LoRaWAN architecture with example use cases (on the left).
You may note that the image in Figure 7.15 references a 3G backhaul. Keep in mind that LoRaWAN communications come into the LoRaWAN gateways at a maximum of 50 kbps. Even with several dozen such communications incoming, they can be forwarded across even a 3G backhaul without problems. Of course, today, if a cellular backhaul is used, it is more likely to be LTE/4G or 5G.
In summary, LoRa offers long-range connectivity, a potential 10-20-year battery lifetime, and minimal infrastructure costs. These features make it very appealing where low-rate communications are required.
Because LoRaWAN gateways operate in unlicensed bands, you can establish a base station in an area and many devices, in a radius of around 5-7 kilometers or more, can connect to the base station in many cases. With the low-rate communications from sensors and other IoT devices, a simple 10 Mbps Internet connection can serve several dozen end-devices easily.
Some areas have public network coverage through The Things Network. The network consisted of 20,600 gateways at the time of writing. This is a growth of 2.5 times since 2019.
Sigfox is a long-range protocol that provides long battery life (because devices communicate only a few minutes per day), low device cost, spectral efficiency (because devices rarely communicate and very narrowband communications are used), and low connectivity fees (all devices connect to the Sigfox network and not a private gateway).
With Sigfox, end devices transmit to Sigfox base stations that are connected to the Sigfox cloud. You then connect to the cloud with your applications to retrieve data from the end devices. You could say that the Sigfox cloud acts as your large data broker; however, it offers push as well as pull technology.
Each message sent to the network must be 12 bytes or less. But a lot of information can be contained in 12 bytes. When used strategically, you can send a message once every two hours and include 12 bytes of encoded data representing many readings from a sensor. However, you can transmit up to 140 uplink messages per day. Spread across a 24-hour day, 140 messages works out to one message roughly every 10 minutes, or just under 6 per hour; we'll say 5 per hour to stay well within the rules.
That's once every 12 minutes that a 12-byte message can be sent in our model and still comply with the Sigfox rules. If the reading from a sensor can be stored in one byte (not all can) then we can transmit the past 12 readings once every 12 minutes and collect sufficient data from the sensor. This is just one example, but you can see that 140 12-byte messages per day can work well for many use cases.
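As a sketch of the packing idea (the readings and their one-byte encoding are hypothetical), Python's struct module can pack twelve one-byte readings into a single 12-byte uplink payload:

```python
import struct

# Hypothetical: twelve encoded 1-byte readings (0-255), one per minute
# over the past 12 minutes.
readings = [21, 21, 22, 22, 23, 24, 24, 23, 22, 22, 21, 21]

payload = struct.pack("12B", *readings)  # 12 unsigned bytes
assert len(payload) <= 12                # Sigfox uplink payload limit
print(payload.hex())
```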
The downlink (messages to the end devices) is limited to four 8-byte messages per day. This could be used to send configuration directives to the device. Firmware updates over the air are not really an option for Sigfox end devices. Be sure the device is well-tested and stable before deployment, as you will not want to update it for a long time, barring an extreme situation that must be addressed (such as a serious security flaw or a serious operational failure).
Now, imagine another configuration model. We can deploy end devices that transmit only once every two hours. That would leave us with 128 transmissions unused. Next, we can configure the device so that we can send a message instructing it to report once each minute for some period. The device performs this more granular monitoring and reporting for that period and returns to normal operations.
Such a use case would allow us to toggle on and off more rapid reports when required of individual devices. For example, if the payload sent to the device is 10001111, it could mean to report more frequently (the high-order bit is set to 1 instead of 0) and to do so for the next 16 minutes (the last four bits have values from 0-15, so we increment by 1 to get the number of minutes).
If the first bit is set to 1, it is a command to report for the included number of minutes on a per-minute basis, up to a maximum of 128 minutes. If the first bit is set to 0, it could inform the device that a configuration command follows, of which, 128 commands could be issued (7 bits).
Did you catch anything in that last paragraph? That's right. We were able to do the model in 8 bits rather than 8 bytes. In actuality, the downlink messages can hold eight times as much data as we used in the model. Hopefully, this illustrates how, with a bit of creativity (pun intended), you can implement significant data transfer in just a few bytes.
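A minimal sketch of that one-byte command model in Python (the bit layout is the hypothetical model from the text, not part of the Sigfox protocol):

```python
def decode_downlink(command_byte):
    # Bit 7 set: rapid-report command; bits 6-0 plus 1 give the
    # reporting duration in minutes (1-128).
    # Bit 7 clear: configuration command; bits 6-0 give the
    # command ID (0-127).
    if command_byte & 0b10000000:
        return ("report", (command_byte & 0b01111111) + 1)
    return ("config", command_byte & 0b01111111)

print(decode_downlink(0b10001111))  # ('report', 16)
```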
Sigfox devices transmit within an operational band. The devices can transmit anywhere in that band and the base stations detect the transmission. Each transmission is sent three times on three different frequencies within the band to help ensure reception. This operational band spans 862 to 928 MHz.
Of course, within a regulatory domain, the transmissions must be limited to the license-free portions of that range. In fact, specific ranges are defined as RC1 through RC6. They are as follows:
RC | Frequency Range (MHz) | Max Power |
---|---|---|
RC1 | 868-878.6 | 16 dBm |
RC2 | 902.1375-904.6625 | 24 dBm |
RC3 | 922.3-923.5 | 16 dBm |
RC4 | 920.1375-922.6625 | 24 dBm |
RC5 | 922-923.4 | 14 dBm |
RC6 | 865-867 | 16 dBm |
For example, RC1 addresses much of Europe (UK, Switzerland, UAE, France, Belgium, Austria, Denmark, Spain, etc.) and RC2 covers most of the western hemisphere (Brazil, Canada, Mexico, United States). Therefore, in most of Europe, Sigfox operates in the 800 MHz band and, in most of the western hemisphere, Sigfox operates in the 900 MHz band, for simplified reference.
At the time of writing in 2022, much of Europe is rolled out and significant parts of the globe are rolling out, including Canada, Brazil, Mexico, India, and Australia. No significant rollout of Sigfox has occurred in the United States.
Sigfox does provide an SDR dongle that can perform two functions related to Sigfox. First, it can be used to emulate the Sigfox network during development. The dongle, coupled with emulator software, allows you to build and test Sigfox end devices without initial subscription to the Sigfox service. The SDR can also be used as an analyzer to evaluate Sigfox transmissions. Software is provided for this as well.
The Zigbee Alliance was created to "enable reliable, cost-effective, low-power, wirelessly networked, monitoring and control products based on an open global standard" and was later rebranded as the Connectivity Standards Alliance (CSA).
The Zigbee specification (currently, Zigbee Pro 3.0 in common production) defines the network, security, and application layers that reside above the PHY and MAC layers of the 802.15.4 standard for monitoring and control devices. This specification is used as an embedded technology in many consumer, health care, commercial, and industrial devices.
According to the Zigbee Pro 3.0 specification:
The Zigbee network layer (NWK) supports star, tree, and mesh topologies. In a star topology, the network is controlled by one single device called the Zigbee coordinator. The Zigbee coordinator is responsible for initiating and maintaining the devices on the network. All other devices, known as end devices, directly communicate with the Zigbee coordinator.
In mesh and tree topologies, the Zigbee coordinator is responsible for starting the network and for choosing certain key network parameters, but the network may be extended through the use of Zigbee routers. In tree networks, routers move data and control messages through the network using a hierarchical routing strategy. Tree networks may employ beacon-oriented communication as described in the IEEE 802.15.4 specification. Mesh networks allow full peer-to-peer communication.
While Zigbee operates in multiple frequency bands, including 868 MHz, 915 MHz, and 2.4 GHz, it is most commonly used in the 2.4 GHz band as this range is available worldwide. Manufacturers can more easily support their devices operating in a single band. Zigbee uses a 2 MHz wide channel, and a total of 16 channels are available in 2.4 GHz. The channels are separated by 5 MHz. The maximum data rate for communications is 250 kbps.
Zigbee fits into the short-range, low-power, low-rate categories. Indoors, the range is between 75 and 100 meters, with outdoor line-of-sight ranges up to roughly 300 meters.
Many other specifications are useful in relation to Zigbee devices, and most of these are provided on the next page. Note the Zigbee Green Power Devices. These are devices that use energy harvesting for power. Rather than leaving energy harvesting as an afterthought, which most IoT-type protocols do, the CSA chose to certify devices for energy harvesting to help promote its use. Use of energy harvesting both eases deployment and results in far less energy consumption from the power grids of the world.
Additionally, DSR Corporation has led a collaboration over the past few years that developed the ZBOSS 2.1 and 3.0 specifications. ZBOSS is a portable software protocol stack for Zigbee that can implement a Zigbee coordinator, Zigbee router, or Zigbee end device. So there continues to be active development keeping Zigbee moving forward.
IPv6 over Low Power Wireless Personal Area Networks (6LoWPAN) allows for the use of IPv6 over 802.15.4 networks. The Thread Group (www.ThreadGroup.org) manages a specification for the use of 6LoWPAN on 802.15.4 devices. 6LoWPAN is defined in RFCs 4944, 6282, and 6775, all of which build on 802.15.4. Thread is a layer above 6LoWPAN that provides for IP routing, UDP communications, security, and commissioning.
Thread is designed to address the unique interoperability, security, power, and architecture challenges of the IoT.
Thread is a low-power wireless mesh networking protocol, based on the universally-supported Internet Protocol (IP), and built using open and proven standards.
Thread enables device-to-device and device-to-cloud communications and reliably connects hundreds (or thousands) of products and includes mandatory security features.
Thread networks have no single point of failure, can self-heal and reconfigure when a device is added or removed, and are simple to set up and use.
Thread is based on the broadly supported IEEE 802.15.4 radio standard, which is designed from the ground up for extremely low power consumption and low latency.
Thread addresses the interoperability challenge by providing a certification program that validates a device's conformance to the specification as well as its interoperability against a blended network comprised of multiple certified stacks.
Time will tell how widely this technology is adopted, as it is a recent entrant into the IoT space facing much existing competition. However, it does offer the ability to perform IPv6 communications on sensor networks and other low-power device networks, and this advantage may play in its favor.
LTE-M was the first cellular network protocol designed for IoT-type solutions. NB-IoT came after LTE-M. Both are available at this time in various regions.
LTE-M (LTE Machine Type Communication) is an LPWAN technology effective for use with IoT that provides extended coverage. LTE-M provides for battery lifetimes of 10 years or more in some scenarios.
When compared to NB-IoT, LTE-M has a wider bandwidth and supports higher data rates.
NB-IoT, while supporting lower data rates, offers a battery life of more than 10 years because of the narrower bandwidth and lower data usage. In the end, it is a tradeoff between higher data rates (LTE-M) and extended battery life (NB-IoT).
Many industry professionals predict that NB-IoT will be the cellular IoT network of choice in the future rather than LTE-M. In many cases, a choice will not be required. If a service provider offers both LTE-M and NB-IoT, an organization can contract for both services and use the preferred service for each device.
In April of 2019, it was announced that AT&T had enabled its NB-IoT network. It launched an LTE-M network in May of 2017. So, this is an example of a service provider with both networks available.
In this chapter, you learned about the important factors related to range, rates, and power. You then explored several different wireless solutions that can fit into one or more of the low-rate, short-range, and low-power categories.
In the next chapter, you will explore the details of IoT hardware and important considerations for deploying custom-built solutions.
Objectives Covered:
While previous chapters have briefly addressed the hardware components in IoT devices, this chapter is dedicated to the exploration of those components. As a wireless IoT administrator, it is essential that you understand the hardware operating on your network. For this reason, this chapter will provide the details of end devices and gateways and the components that are in them. This includes processors, memory, storage, radios, and more.
When selecting IoT end devices, the administrator has two basic options: off-the-shelf and custom devices. In many cases, off-the-shelf devices will be available from vendors that meet the requirements of the scenario. In other cases, custom devices must be built.
Generally speaking, in cases when a single device must perform multiple sensing actions, it is more likely that a custom device will be required. Additionally, when a highly unusual scenario is in play, custom devices are more likely to be needed.
When a simple device that performs a very common single sensing function is needed, off-the-shelf devices may fill the need. An additional area where custom devices are often required is in the area of actuation. While many off-the-shelf sensors are available, far fewer actuators that meet a specific need are available.
This section will explore the many options.
The majority of non-consumer, off-the-shelf IoT devices are sensors used to detect environmental or machine conditions. To perform actuation, a separate interface is commonly required to control a machine or system. Therefore, even with off-the-shelf components, planning still plays an important role.
The problem with actuation is that it is unique to each piece of machinery or system you wish to control. If the machine vendor does not provide an actuation interface for remote control, a solution will have to be created from scratch. In some cases, it will not be cost feasible to do so. In other cases, it will be worth the cost and the effort. An analysis of what is required to interface with the machine will be required.
To make all this clear, the act of performing actuation requires physical movement in the real world or electrical (either digital or analog) control of an existing system. The latter may be a Heating, Ventilation, and Air Conditioning (HVAC) system, a lighting system, or a building management system, for example. The former, those requiring movement in the real world, are the machines used in manufacturing, the pipes used in oil and gas, and other real-world machines and components.
Take the pipes for oil and gas as an example. If you want to increase or decrease the flow within the pipes, you will either have to increase or decrease the pump pressure at some point or open or close valves to control the flow. Changing the pump pressure, unless the pump has a digital interface, will likely require turning some control knob or changing some switches. Changing the valve status requires movement as well.
Can an off-the-shelf IoT device take such physical actions? Absolutely. For example, Figure 8.1 shows an actuator that can push a button or switch. However, notice that the actuator, while an IoT device itself, must be positioned appropriately to push the correct button. It must fit in the physical space required. It must provide the necessary force to take the action.
There are a few requirements that must be considered when implementing such an actuator. It is an off-the-shelf actuator (that happens to use Bluetooth), but it is not integrated directly with a sensor that determines the condition when the button should be pushed. That is a separate function that you must implement.
Therefore, to implement off-the-shelf actuators, in most cases, you must implement a sensor to detect the appropriate metric and an actuator to take the appropriate action. Given that they are not often designed as a system, you will have to provide the application logic to integrate them. If you want them integrated into a larger system, the sensor will communicate through the wireless IoT network to a central control system and the control system will, in turn, send appropriate commands to the actuator.
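At its simplest, that application logic is a small control loop. The following Python sketch assumes hypothetical read_sensor and press_button functions standing in for whatever APIs the actual sensor and actuator expose:

```python
TEMP_THRESHOLD_C = 30.0  # hypothetical trigger condition

def control_loop(read_sensor, press_button):
    # Close the loop between an off-the-shelf sensor and an
    # off-the-shelf actuator: sample, compare, act.
    if read_sensor() > TEMP_THRESHOLD_C:
        press_button()
```

In an integrated deployment, the comparison and command dispatch would live in the central control system rather than on either device.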
Several vendors provide dozens of such sensors that use various protocols for network communications. Examples of such vendors include:
Many more vendors exist, and some specialize in interfacing traditional industrial automation systems with enterprise networks. They provide gateways that interconnect the industrial automation networks with traditional Ethernet or Wi-Fi networks. Some also provide interfaces to directly connect legacy industrial sensors to modern wireless IoT protocol networks.
Figure 8.2 shows the Dragino LHT65N Temperature & Humidity sensor, a LoRaWAN-ready sensor. Simply place it where you want to monitor temperature and/or humidity and connect it to the LoRaWAN network to begin monitoring the environment. It can be configured via LoRaWAN downlink; however, firmware updates are performed locally. The temperature range is from -40 to 80 degrees Celsius. While not indicated on the printed label, the device also includes an illuminance sensor to detect the level of light in the area.
This is an ALTA dry contact sensor. Dry Contact sensors connect to items ranging from windows and doors to forklift seats to detect when contact plates are connected. When current is flowing, the plates are connected. When it is not flowing, they are not. Many uses of such sensors can be implemented. This particular sensor happens to be designed according to the IECEx specifications, which define the design of components for use in explosive atmospheres.
Custom devices must be designed, tested, and deployed by assembling the right combination of components referenced in the ensuing sections of this chapter. It begins with thorough requirements engineering to ensure that the eventual designed and built solution will perform as needed. The CWIDP exam and learning materials provide extensive detail on requirements engineering. The CWISA must ensure that such requirements are implemented when deploying a solution.
For example, if the requirements indicate that a solution must operate on battery power for a period of five years or more without replacing the batteries, the CWISA should ensure that appropriate batteries are used, that the IoT device itself consumes as little power as required to do the job, and that energy harvesting is implemented (if required to meet the specifications). It's not as simple as buying a sensor and installing it.
When building custom devices, you have the advantage of implementing whatever is required in the device. You can implement multiple sensors and actuators in a single device. For example, a custom device could be built that would sense the temperature of a machine and reduce its operational speed if it gets too hot. But, taking it further, you could also implement a vibration sensor in the same device to use the vibration metrics to determine if the machine is operating properly as well. The point is that, with custom devices, you can put what you need in the device.
The downside to custom devices is that they are often implemented without good documentation. Then, when the implementer leaves the organization, it is up to their replacement to discover how the device works and how to support the device. The solution to this, of course, is good documentation.
When creating a custom device, at a minimum, the following should be documented:
Three levels of consideration may be made for the starting point of building a custom IoT device: chip-level, controller board-level, and computer board-level. This section will address these three possible starting points.
Starting with a chip requires the most development time and effort. A chip, even a System-on-a-Chip (SoC) or microcontroller chip that has many internal components (as you'll learn later), requires the fabrication of an appropriate printed circuit board (PCB) to provide connections between the chip and various peripheral interfaces and connectivity interfaces. The chips used in IoT are often more than just a processor and may include components like ROM, flash memory, Universal Asynchronous Receiver/Transmitter (UART) interfaces, and more.
Figure 8.4 shows the Raspberry Pi RP2040 chip. This is categorized as a microcontroller chip and it includes a dual-core Arm Cortex-M0+ processor that runs at 133 MHz (maximum). It also includes 264 kB of on-chip SRAM and several peripheral interfaces including:
The next level up in the custom build options is the controller board. A controller board includes the chip and a predesigned PCB with connection interfaces for the features of the chip and any additional components added to the controller board. However, the controller board will not typically have interfaces for such components as display, keyboards, mice, etc. If such interfaces are desired, they will have to be custom built, or the controller board will have to interconnect with a computer board.
In the latter case, the computer board acts as the interface between the controller board and the rest of the world (in most cases).
Figure 8.5 shows the Maker Pi RP2040, which is a robot controller board based on the previously referenced RP2040 chip. It includes seven Grove ports, which are used to connect to Grove sensors. It supports up to four servo motors and two DC motors for actuation control.
As you can see in the image, it expands the chip introduced in Figure 8.4 to provide connections from that chip to the many interfaces required to gain real-world interaction. However, at the same time, it may have many features that your solution does not require and, therefore, may not be as good as starting from a chip for a mass rollout.
It is important to insert another reality check here as we are discussing custom-built solutions using chips, controller boards, and computer boards. When building custom solutions, it is essential that you verify the availability of the components both now and as long as you need them.
If a project will take 36 months to roll out, you must ensure that the components are available during that entire timeframe. You can either:
- acquire all required components at the start of the project, or
- acquire components as the rollout proceeds and accept the risk of shortages or discontinuation.
Obviously, the safer option is to acquire all parts up front.
At the same time, you must consider the life of the solution.
The only options are acquiring enough spare components to service the solution for its expected life or planning a redesign for when key components reach end-of-life.
As hinted at previously, computer boards are just what they sound like: boards that include a processor, memory, possibly networking chips and interfaces, and display and peripheral connectors.
The types used for custom IoT development are usually less powerful than most modern desktops and laptops, with the exception of those scenarios where extreme processing power is required at the edge.
A full-featured computer with a quad-core processor running at 3+ GHz and 64 GB of RAM may be used at the edge for performing advanced machine learning based on sensed data. Several less powerful IoT devices nearby may communicate with this powerful edge device for near-on-device processing capabilities. But these scenarios are less common.
An example of a computer board that may be used in IoT, continuing with the previous thread, is the Raspberry Pi. These computer boards are often also called development boards or single-board computers (SBCs).
The Raspberry Pi has enjoyed great popularity in the IoT prototyping process as well as in production devices. In 2021 and 2022, the units became difficult to acquire, but the manufacturer has stated that production will continue.
This supply chain shortage illustrates one of the concerns addressed earlier: the availability of components when you need them.
Another example is the Intel Edison. This single-board computer was quite popular for IoT prototypes and custom production IoT solutions, but it is no longer manufactured. It was produced for just three and a half years.
Even with the reality of end-of-life products, one significant advantage in using an SBC is that software can be ported to various SBCs more easily than that which is written specifically for a controller board.
If the operating system, often Linux, runs on one SBC and can be run on another, porting the software becomes easier. Additionally, if communications with sensors and/or actuators occur through standard interfaces like UARTs and GPIOs, modifying the code from one SBC to another is possible.
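For example, a sensor or GPIO read performed through a standard kernel interface ports between SBCs with little change. Here is a sketch using the legacy sysfs GPIO interface (the pin number is an assumption, and newer systems may prefer libgpiod):

```python
def read_gpio(pin):
    # Assumes the pin has already been exported via /sys/class/gpio/export.
    with open(f"/sys/class/gpio/gpio{pin}/value") as f:
        return int(f.read().strip())

print(read_gpio(17))  # hypothetical pin number
```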
Processors range from microprocessors to microcontrollers. This section will provide an overview of processors and considerations that must be made when selecting them for IoT end devices. Of great importance is the balance between processing power and energy consumption.
Processors come in many types, sizes, and capabilities. Some are square and others are rectangular in shape. The older and slower processors are often rectangular, but they also consume significant power.
One of the great achievements of the past two decades has been the continued reduction in energy consumption while maintaining acceptable levels of processing speed.
General Purpose Processors
Include microprocessors, CPUs, and System-on-a-Chip (SoC) implementations that have the flexibility of doing most any calculation required. They vary in processing speeds, cache memory, and features.
Special Purpose Processors
Typically have much lower processing speeds (typically less than 500 MHz) but have integrated flash and may include wireless communication capabilities.
A System-on-a-Chip (SoC) design incorporates:
It is common for SoCs to use dual- and quad-core processors. ARM Cortex processors are common today.
Memory can range from:
They may use:
I/O often includes:
They may even include Digital Signal Processors (DSPs) for internal audio processing.
One of the benefits of the SoC design is that it is often engineered to consume as little energy as possible, given that they are often used in mobile phones and tablets. This makes them ideal for IoT end devices that require more features and processing speed.
Another benefit is the incorporated storage, which may be sufficient for local storage in IoT devices.
Using a traditional CPU will require a board that provides all the interfaces to memory and storage as well as peripherals. If used with a "starting from a chip" build, it will require much more effort than an SoC.
The difference between microcontrollers and SoCs can be blurred in many cases. However, in general:
Processors are rated in megahertz (MHz) and gigahertz (GHz):
However, the real-world "speed" at which a processor achieves end results can vary based on:
These differences exist even at similar clock speeds.
Target | Recommended Processor Speed |
---|---|
Simple Sensing | Very Low-Speed Processor |
Sensing + Light Actuation | Moderate-Speed Processor |
Sensing + Actuation + Edge AI | High-Speed Processor |
The energy required by a processor must be considered in the context of:
Many IoT devices are poorly designed, staying active all the time and consuming unnecessary energy. Proper design should allow the device to sleep most of the time, waking briefly to sense and transmit before returning to sleep.
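A minimal sketch of such a duty cycle follows (the interval and the sensing/transmit functions are hypothetical, and on real hardware the sleep would be a deep-sleep or low-power mode rather than time.sleep):

```python
import time

REPORT_INTERVAL_S = 300  # hypothetical: report once every five minutes

def duty_cycle(read_sensor, transmit):
    while True:
        transmit(read_sensor())        # brief wake: sample and send
        time.sleep(REPORT_INTERVAL_S)  # stand-in for a deep-sleep mode
```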
Microcontrollers often offer all-in-one features providing almost everything required to implement a processing system — except the physical connections to the outside world. They differ from SoCs mainly in processing speed.
Component | Purpose |
---|---|
ROM (Read Only Memory) | Non-volatile storage for firmware and essential code |
Flash Memory | Reprogrammable storage, sometimes used for code or data |
SRAM (Static RAM) | Volatile memory for active data storage |
ADC (Analog to Digital Converter) | Converts analog signals (e.g., from sensors) to digital data |
DAC (Digital to Analog Converter) | Converts digital signals to analog (e.g., audio output) |
RTC (Real-Time Clock) | Low-power clock functions like timekeeping |
Interface | Function |
---|---|
GPIO (General Purpose Input/Output) | Controls external devices & reads sensor data |
I2C (Inter-Integrated Circuit) | Master-slave bus protocol for peripherals |
UART (Universal Asynchronous Receiver/Transmitter) | Asynchronous two-wire communication (Tx/Rx) |
SPI (Serial Peripheral Interface) | 4-wire communication, supports multiple peripherals |
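To illustrate one of these interfaces, the following sketch reads a register over I2C using the smbus2 Python library; the bus number, device address, and register are assumptions for illustration:

```python
from smbus2 import SMBus

I2C_BUS = 1           # common bus number on Raspberry Pi-class boards
DEVICE_ADDR = 0x48    # hypothetical sensor address
TEMP_REGISTER = 0x00  # hypothetical register

with SMBus(I2C_BUS) as bus:
    raw = bus.read_byte_data(DEVICE_ADDR, TEMP_REGISTER)
    print(f"Raw register value: {raw}")
```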
Pins | Usage |
---|---|
9 Pins | SPI Communication |
2 Pins | I2C Communication |
2 Pins | UART Communication |
8 Pins | Ground (GND) |
2 Pins | 3.3V Power |
2 Pins | 5V Power |
Remaining | GPIO for raw signals |
Microcontrollers, while compact and highly integrated, are built to efficiently handle specific tasks with optimized power consumption and interface options for external components.
Memory in IoT devices is categorized based on interface, speed, and standard. Memory and storage are often confused, but in this context:
Memory Type | Characteristics | Volatile/Non-Volatile |
---|---|---|
RAM (SRAM/DRAM) | Working memory, stores data/code used by processor. SRAM is faster but expensive. DRAM is slower but consumes less power. | Volatile |
ROM | Stores firmware/BIOS, not writable under normal operations, updatable with special tools. | Non-Volatile |
EEPROM | Similar to ROM but rewritable, used for storing configuration data. Slower than ROM. | Non-Volatile |
Non-volatile storage outside traditional memory categories:
Storage Type | Characteristics | Usage in IoT |
---|---|---|
Spinning Drives | Traditional hard drives, large volume, high power consumption | Rare in IoT |
SSD/NVME | Flash-based storage, NVME faster & more energy efficient | Rare in IoT end devices, more common in edge computing |
Flash Storage (USB/SD) | Solid-state storage, common for IoT, lower data rates than SSD/NVME | Most common local storage in IoT |
Interface | Max Data Rate |
---|---|
MicroSD/SD | 150 - 800 Mbps |
USB 3.2 Gen 2x2 | Up to 20 Gbps |
USB 3.1 | Up to 10 Gbps |
USB 3.0 | Up to 5 Gbps |
USB 2.0 | Up to 480 Mbps |
USB 1.1 | Up to 12 Mbps |
USB 1.0 | Up to 1.5 Mbps |
IoT devices may connect via:
Note: In wireless IoT, the primary focus is ensuring the device supports appropriate wireless connectivity for the deployment scenario.
Protocol | Description | Frequency Band |
---|---|---|
802.11 Wi-Fi | IEEE standard for Wi-Fi communications worldwide, supports sub-1 GHz, 2.4 GHz, 5 GHz, 6 GHz, and 60 GHz bands. Most common wireless access technology. | Sub-1 GHz, 2.4 GHz, 5 GHz, 6 GHz, 60 GHz |
802.15.4 | IEEE Standard for Low-Rate Wireless Personal Area Networks (LR-WPAN), foundation for several IoT protocols like Zigbee, 6LoWPAN, etc. | Multiple (sub-1 GHz, 2.4 GHz) |
6LoWPAN | IPv6 over low-rate wireless networks, primarily on 802.15.4, adds IPv6 support to small frame networks. Supports mesh networking. | Dependent on 802.15.4 |
Zigbee | Based on 802.15.4, implements a full stack, managed by CSA (formerly Zigbee Alliance). Primarily in 2.4 GHz band. | 2.4 GHz (historically sub-1 GHz possible) |
Thread | Based on 802.15.4 and 6LoWPAN, uses mesh architecture with leaders and children nodes. | 2.4 GHz |
WirelessHART | Based on HART protocol, uses 802.15.4 with superframes for contention-free and contention-based transmission. | 2.4 GHz |
ISA100.11a | Uses 802.15.4, implements IPv6 header compression techniques for 6LoWPAN compatibility. | 2.4 GHz |
Bluetooth | Managed by Bluetooth SIG, popular for IoT due to BLE, used for beaconing, locationing, and communication. | 2.4 GHz |
Sigfox | Proprietary long-range, low-rate protocol with very limited message transmission per day. Public network dependent. | 868-923 MHz (regional dependent) |
LoRaWAN | Long-range, low-rate protocol, deployable privately unlike Sigfox, requires LoRaWAN infrastructure components. | 433 MHz, 868 MHz, 915 MHz |
Survey by OnWorld (104 plant managers, process integrators, system engineers):
Factor | Importance (%) |
---|---|
Data Reliability | 99.5% |
Standards Compliance | 82% |
Ease of Use | 81.4% |
Security | 80.3% |
Long Battery Life | 77.4% |
Low Cost | 74.1% |
IP Addressability | 69.6% |
Single Plant Network Integration | 65.6% |
Over the past decade, IP addressability has likely become even more critical when selecting and implementing industrial IoT equipment. According to OnWorld's later research, most industrial IoT devices are:
All these are standards-based protocols — not proprietary — reflecting the industry's demand for open, interoperable solutions.
For detailed radio architecture (amplifiers, filters, etc.) → See Chapter 6.
When building custom IoT hardware, converting sensor signals to usable digital data is key.
Sensor Value = (Digital Value * (Sensor Range / 2ⁿ))
Where:
- `Digital Value` = value after ADC conversion
- `Sensor Range` = range of the sensor (e.g., 0-100)
- `n` = number of bits in the ADC (e.g., 7-bit, 16-bit)

Example with a 7-bit ADC: Max Digital Value = 2⁷ = 128
If Digital Value = 128:
Sensor Value = (128 * (100 / 128)) = 100
If Digital Value = 104:
Sensor Value = (104 * (100 / 128)) = 81.25
Example with a 16-bit ADC: Max Digital Value = 2¹⁶ = 65,536
If Digital Value = 65,536:
Sensor Value = (65536 * (100 / 65536)) = 100
If Digital Value = 57,189:
Sensor Value = (57189 * (100 / 65536)) ≈ 87.26
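The conversion is easy to express in code. Here is a small Python helper following the formula above (note that, strictly speaking, an n-bit ADC outputs values 0 through 2ⁿ − 1; the round-number model from the text is kept here):

```python
def adc_to_sensor_value(digital_value, sensor_range, n_bits):
    # Sensor Value = Digital Value * (Sensor Range / 2^n)
    return digital_value * (sensor_range / 2**n_bits)

print(adc_to_sensor_value(104, 100, 7))     # 81.25
print(adc_to_sensor_value(57189, 100, 16))  # ~87.26
```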
Sensor Type | Digital Representation | Example Use |
---|---|---|
Analog Sensors | Multi-bit (depends on ADC) | Temperature, Strain, Pressure |
Binary Sensors | 1-bit (0 or 1) | Motion, Contact, Open/Close |
Actuator Type | Description | Applications |
---|---|---|
Electric Motor | Converts electrical to rotational motion | Pumps, Fans, Conveyors |
Servo Motor | Provides controlled rotational motion | Robotics, Automation |
Linear Motor | Converts electrical to linear motion | Conveyors, Robotic Arms |
Stepper Motor | Precise, discrete rotational motion | 3D Printers, Robotics |
Piezoelectric Actuator | Converts electrical energy to precise deformation | Micro-mechanics |
Electrothermal Actuator | Converts electrical energy to thermal expansion | Valves, Switches |
Voice Coil Actuator | Converts electrical to linear motion | Speakers, Valves |
Electromagnetic Brake | Converts electrical to braking force | Motion Control |
Electrorheological Brake | Uses fluid properties for braking | Motion Control |
Electrohydraulic Actuator | Electrical to hydraulic pressure | Heavy Machinery |
Solenoid | Electrical to linear motion | Valves, Switches |
Pneumatic Cylinder | Compressed air to linear motion | Valves, Doors |
Hydraulic Cylinder | Hydraulic pressure to linear motion | Excavators, Heavy Machinery |
Electromechanical Relay | Electrical to mechanical motion | Switches, Valves |
Software Type | Description | Purpose |
---|---|---|
Firmware | Stored in ROM; initializes device hardware | Boot process, hardware validation |
Operating System (OS) | Runs after firmware; manages hardware & drivers | Provides API for applications & hardware interaction |
Operating systems and firmware are types of software used in the building of IoT devices. There is a difference between the two. Firmware is the software, typically stored in ROM, used to initially start the device, validate hardware, and begin the boot process. In desktop computers, we call the firmware BIOS. In IoT devices, it is usually just called firmware. Operating systems run after and from the firmware and provide the basic interface to the hardware through operating system components and drivers. This section will explore these two solutions in more detail.
Firmware is often ignored in computer science courses. Students will learn about operating systems, hardware, programming languages, and maybe even networking, but firmware will be, at most, defined but no more. Yet the firmware is there before the operating system and applications. The firmware gets everything started.
In desktop computers, we traditionally use the term BIOS for the firmware, but it is firmware just the same. In fact, newer systems use UEFI, the Unified Extensible Firmware Interface. UEFI has mostly replaced the older Basic Input/Output System (BIOS) firmware that was used for decades before it. UEFI supports newer hardware than BIOS, including larger drives and security chips. It can typically be customized more than traditional BIOS and uses a standardized interface for firmware interaction.
If you've built a desktop computer, you've had to "enter the BIOS" and perform some configuration. For example, you may have needed to enable virtualization extensions, set the boot priority, or configure a power-on password. All of these are interactions with the BIOS or UEFI. The fact that you can press a keystroke to enter the BIOS before the operating system starts tells you that, for a brief period, it is there doing something before the system enters the operating system startup. So, if you've worked with a computer's BIOS or UEFI, you've worked with firmware.
Additionally, it is clear that there is a difference between firmware and the operating system in that you can take a newly built computer on which no operating system has been installed (or that even has no storage drives connected) and power it on and enter the BIOS.
When it comes to IoT devices, the same is typically true. A firmware typically exists on the device that handles the initial power-on state, possibly checking hardware and preparing to hand things over to the operating system boot loader. Firmware can be defined as, "An essential piece of code that is responsible for performing the underlying hardware initialization prior to handing over to the operating system."
In addition to the main firmware in the device, modules and add-on boards may have their own firmware. The firmware in these extra components often provides the only interaction with the component through system calls to the firmware from the main operating system. At times, due to bugs, security issues, or lack of features, the firmware in these extra components may require updating just as the main system firmware does.
To simplify, firmware can be considered the lowest layer of software in any system. It is the first software to interact with the hardware and initial power-on or access. Most firmware is written in the C language and compiled for the target architecture. Some assembly language code may be used for low-level operations unique to the hardware as well. The firmware is typically stored in non-volatile ROM that can be flashed or updated using special software tools.
Firmware may require updating to counter security flaws. For example, the same vulnerabilities that may be discovered in operating systems and applications may be found in firmware, such as buffer overflows, heap overflows, integer overflows, and stack overflows. Pretty much anything written in C or similar languages can be vulnerable to these issues if not written properly.
Figure 8.7 provides a high-level view of the role of firmware in a device. You will note that it is the first to interact with the hardware. The simplified role of the main system firmware is to initialize and abstract enough hardware so that the operating systems and their drivers can further configure the hardware to its full functionality. At initial power-on or reset, the firmware will initialize hardware and provide visibility into it while preparing a table or list of hardware for access by the operating system. It will then transfer boot operation to the operating system, which will initiate drivers to fully interact with the hardware.
Thankfully for the IoT administrator, when creating an IoT device from scratch using a computer board or controller board, someone has already created the firmware and you must simply use it. If, however, you are starting from a chip, you will have to find an open-source firmware or create one from nothing, which is very challenging. The register documentation for a processor can range from several hundred to several thousand pages, and this is the reference you would use to code an appropriate firmware solution. However, many chip makers provide a development kit with a starter firmware package, which may be used as a starting point, reducing the required effort.
Remember, if you are building your own firmware, you must also ensure the security of that firmware and not simply the functionality. Security must be embedded in the programming process and not considered as an afterthought.
Operating systems (OSes) are collections of software that exploit the hardware resources of one or more processors to provide a set of services to system users and applications. They also manage secondary memory and I/O devices on behalf of the users and applications. An OS may be developed from scratch but is more often implemented based on existing solutions. Linux is a very popular OS for IoT devices as it can be scaled up or down in features as required and is available as an open source starting point. Windows is used in some embedded solutions but is not nearly as popular as Linux for IoT devices.
To illustrate the role played by the OS, consider an IoT device based on the Raspberry Pi. The Linux OS, in some distribution, is likely to be used. When the application running on the device needs to read the sensors, it will do so by using interfaces provided by the OS and device drivers to access the sensors. Next, when it needs to transmit the data, it will use the OS's networking capabilities to transmit the data to the network destination and it will not have to know if the protocol in use is 802.11, 802.15.4, Bluetooth, or some other protocol. The OS will simply take the information from the Application Layer and transmit it across the appropriate Transport, Network, Data-Link, and Physical layers based on installed hardware and device drivers.
Therefore, the OS provides a large portion of the code required to perform typical operations, hence the name operating system.
The application running on the OS to perform sensor reading and reporting is simply a process or set of processes that run on the OS. The OS provides memory resources to the processes, manages the lifecycle of the processes, and even ensures that the processes run within the appropriate security context. The OS is able to execute the processes from storage and provide for data read and write operations back to storage by the processes. Ultimately, the OS will manage many resources including:
This adds another level to our layers of hardware, firmware, and operating system: applications and services. It is reflected in Figure 8.7.
For some computer boards, a special distribution of Linux exists that is designed specifically for that board. For example, the Raspberry Pi can run the Raspberry Pi OS (originally called Raspbian), which is a customized Linux distribution for that device. Aftermarket distributions are also community created and may better serve a specific purpose. For example, DietPi is a Linux distribution that is optimized for the Raspberry Pi and offers a faster experience than Raspberry Pi OS with the right configuration. DietPi can be up to three times smaller than the Lite versions of Raspberry Pi OS.
Additional distributions exist for very specific purposes. For example, motionEyeOS is a Linux distribution that is designed specifically to turn an SBC into a fairly complete video surveillance system. It works with both USB cameras and network cameras and provides pre-built notification options for motion detection and other events.
To be more complete, other SBCs also have Linux distributions created uniquely for them. For example, the BeagleBone and BeagleBoard components have Debian-based Linux images pre-built for deployment on these devices. Of course, since Debian provides the foundation, the standard Debian update procedures and package management procedures can be used to update the device or install additional software and services. For example:
sudo apt update
sudo apt upgrade
These two commands will update the index of available components (update) and then upgrade the installed components to the newest versions (upgrade).
Using similar commands, you can install components and services from packages. For example, to install Python, the scripting language, you can execute the following command:
sudo apt install python3.9
This command will install the Python 3.9 package. These commands work on the Raspberry Pi OS as well.
When selecting an OS for your IoT device, it is important to ensure the following:
In this final section of the chapter, we will explore the basic process of setting up a custom built IoT device based on the Raspberry Pi 4 and the DS18B20 1-Wire Digital Thermometer shown in Figure 8.8. This simple thermometer uses three pins. Pin 1 is ground, pin 2 is data, and pin 3 is voltage or power. The temperature range of the sensor is from -55 to 125 degrees Celsius. But it is most accurate, with an accuracy of +/- 0.5 degrees Celsius, between -10 and 85 degrees Celsius.
The basic process for building this custom device is outlined below.
- Update the package index with the sudo apt update command.
- Install the supporting libraries with the sudo apt install wiringpi libonewire-dev command.
- Load the 1-Wire kernel modules:
- sudo modprobe w1-gpio
- sudo modprobe w1-therm
- After the modules load, the kernel automatically creates a directory for the sensor under /sys/bus/w1/devices/ beginning with 28- (the DS18B20 family code); there is no need to create it manually. Verify its presence with the ls /sys/bus/w1/devices/ command.
- If needed, make the sensor data readable by non-root users with the sudo chmod 777 /sys/bus/w1/devices/28-*/w1_slave command.
At this point, you can manually read the temperature data with the following command:
cat /sys/bus/w1/devices/28-*/w1_slave
The resulting output will be text that contains the temperature reading in Celsius. Each time you run the command you will receive the most current temperature reading.
If you want to take it further, here's a trick if you don't know Python well: ask ChatGPT to write Python code that can read the temperature from a DS18B20 1-Wire Thermometer. You will receive code similar to the following:
```python
# Import the necessary libraries
import os
import glob
import time

# Load the 1-Wire kernel modules
os.system('modprobe w1-gpio')
os.system('modprobe w1-therm')

# Find the file corresponding to the temperature sensor
device_folder = glob.glob('/sys/bus/w1/devices/28*')[0]
device_file = device_folder + '/w1_slave'

# Function to read the raw data from the sensor
def read_temp_raw():
    f = open(device_file, 'r')
    lines = f.readlines()
    f.close()
    return lines

# Function to parse the temperature from the raw data
def read_temp():
    lines = read_temp_raw()
    # Wait until the CRC check line ends in 'YES' (a valid reading)
    while lines[0].strip()[-3:] != 'YES':
        time.sleep(0.2)
        lines = read_temp_raw()
    equals_pos = lines[1].find('t=')
    if equals_pos != -1:
        temp_string = lines[1][equals_pos + 2:]
        temp_c = float(temp_string) / 1000.0
        temp_f = temp_c * 9.0 / 5.0 + 32.0
        return temp_c, temp_f

# Read the temperature and print it to the screen every second
while True:
    print(read_temp())
    time.sleep(1)
```
This code loads the 1-Wire kernel modules, finds the file corresponding to the temperature sensor, and defines two functions: read_temp_raw and read_temp. The read_temp_raw function reads the raw data from the sensor, and the read_temp function parses this data to extract the temperature reading in both Celsius and Fahrenheit. The code then enters a loop that reads the temperature every second and prints it to the screen.
This chapter provided more details on the hardware components that make up IoT devices. This is essential knowledge for any wireless IoT administrator. You learned about the various starting points, including chips, controller boards, and computer boards or single board computers (SBCs). You also learned about the different chip or processor types that can be used, whether as a single starting point or integrated with controller boards or SBCs. Next, you learned about the individual components that may be included in a microcontroller and the operating systems, firmware, applications, and services that may be required. Finally, you explored a basic example of configuring a Raspberry Pi 4 with a temperature sensor using Linux commands or the Python scripting language.
Objectives Covered:
Wireless Sensor Networks (WSNs) are among the most exciting and interesting areas of wireless networking. These networks allow for the gathering of information from the real world using sensors and the analysis of that information in the digital world, resulting in new understanding, enhanced business operations, improved health care, reduced costs, and so much more. This chapter provides an overview of WSNs followed by an explanation of their architectures, components, and design processes and expands on concepts introduced earlier in the book. The best news is that WSNs are just wireless networks that happen to have clients that include various sensor types. Most of them can be categorized as IoT networks (more on this later). Therefore, the knowledge you've gained so far in this book also applies to them. We will begin with a more thorough definition of a WSN.
You have learned about the five senses: seeing, hearing, smelling, tasting, and touching. Using these senses, we can experience the world around us. You can see the beauty of a sunny day. You can hear the melodic sounds of music. You can smell that delicious dinner just before you taste it and you can touch a newborn baby's cheek or the soft fur of a kitten. These experiences make up our lives and add the ability to interact with our world.
In much the same way, computer systems can now experience the world through sight, hearing, touch, and other senses that are not even available naturally to us as humans. They accomplish this using sensors. Sensors have been around for many years, with some of the earliest designed nearly a century ago. However, the introduction of networked computing devices with sensing abilities has changed everything. This change is delivered today in WSNs.
Connected sensors and actuators are sometimes called cyber-physical devices. Collections of these devices and the entire solution, including monitoring and control, are sometimes called cyber-physical systems. A cyber-physical system (CPS) is an orchestration of computers, machines, and people working together to achieve goals using computation, communications, and control (CCC) technologies. Although the term CPS was coined only in 2006 by Helen Gill of the National Science Foundation (NSF), the CCC core technologies of CPS have had a rich and long history. Significant milestones for CPS include control theory in 1868, wireless telegraphy in 1903, cybernetics feedback in 1948, embedded systems in 1961, software engineering in 1968, and ubiquitous computing in 1988.
A WSN is a collection of wirelessly networked sensor devices, effectively forming a core component of a CPS. These devices may connect to the network through direct connections to access points, through ad-hoc networks, through mesh networks, or even through LTE or 5G networks.
WSNs are often considered a subset of the Internet of Things (IoT). Indeed, not all IoT devices are sensors, but sensors, particularly those reporting to a central cloud, may be IoT devices. It is very common to discuss Industrial IoT (IIoT) as a concept without ever using the phrase wireless sensor network even though many IIoT implementations are indeed WSNs connected to the cloud or internal application servers. At the same time, it is essential to realize that the WSN is the local network of sensor devices that may or may not participate in a complete IIoT solution. That is, you can have a WSN that is strictly used within the monitoring and control domain with no data harvested for analysis, decision support, or other business actions. Many would not consider such a WSN an IoT solution; others would. CWNP will not test on this debated concept but, in general, feels that most WSNs are properly classified as IoT.
A WSN may be under the umbrella of IoT in the minds of many engineers; however, it is important to remember that IoT does not equal a sensor network. Sensor networks may be categorized as IoT, but not all IoT devices are sensor devices. For example, a smartwatch may be an IoT device, but it may not have any health sensors or other sensor components in it. Therefore, it would be an IoT device, but it would not be a sensor device. Similarly, a smart coffee maker may be an IoT device, as it can receive instructions to brew coffee at certain times or on-demand through an app, but it may not sense and report anything back. Therefore, it is an IoT device, but it is not a sensor device.
More importantly, a sensor device is not necessarily an IoT device by itself. IoT is about getting "things" onto the Internet or the network. In many cases, there is no "thing" for the sensor to sense until it is actually connected to or positioned relative to the "thing" it is intended to monitor. Therefore, the "thing" that is brought onto the network is the sensed component, and it is brought onto the network by the sensor, which is already a network device. Some sensors are used locally and are not connected to any network. They simply sense local information and communicate it to local compute modules that, in turn, may take action based on the information, but they are not connected outside of the system.
To make this clearer, consider an accelerometer sensor. Without it, a security vehicle moving through a campus is not network connected. With it, the security vehicle can be monitored for movement, sudden impact, and possibly even location. The security vehicle has been converted to a connected vehicle through the use of the accelerometer and location-sensing components in the wireless sensor device.
To summarize, not all sensors are IoT devices and not all WSNs are IoT networks. However, when connected to the "outside world" they become IoT devices and networks.
The concept of Industry 4.0 is also related to wireless sensors and WSNs. Part of the fourth industrial revolution, Industry 4.0, is focused on the use of systems (machines, sensors, robotics, and more) that can monitor and control industrial processes and make decisions without human interaction through decision trees, artificial intelligence (AI), machine learning, and deep learning.
As you can see, multiple concepts (CPS, IoT, IIoT, Industry 4.0, and more) integrate with or depend on wired sensors, wireless sensors, sensor networks, and WSNs.
As WSNs grew, the next evolution was the addition of interaction with the physical world. Sensors experience the world around them; actuators interact with it. A wireless sensor and actuator network (WSAN) is a wirelessly networked collection of sensors and actuators with the ability to take action or direct that actions be taken in the real world.
For example, with your sense of touch, you may determine that a surface is hot enough that it will cause you harm. That is sensing and sensing alone. However, you also have the ability to actuate a change in response to the sensed stimulus. You can quickly move your hand away from the hot surface. Your sense of touch has actuated your movement away from the heat.
In a WSAN, an actuator may cause an item to move from an area that may cause it damage, change the thermostat settings to reduce or increase the temperature in an area, or stop a conveyor belt when human danger is detected. The key is to understand that actuators can take actions in the real world.
So, the evolution of these systems has taken place as organizations realized that more and more manual processes could be automated. When the first non-wireless or non-connected sensors were used, they were simple measurement instruments. Such devices used some form of sensing to monitor pressure, temperature, and other environmental elements. However, they were connected to gauges and eventually digital readouts that a human had to read physically. To do this, the human had to go to the location of the sensor.
The next step was to add alarms based on electrical circuits. If the sensed value passed beyond a particular level (high or low), a circuit could be closed, triggering alarms, flashing lights, or other warnings so that engineers would know the urgency of the situation.
Next, sensors used special wired connections to send varying electrical signals back to distributed or central monitoring stations. Engineers could go to a few stations to see the health of the entire factory, oil refinery, electric plant, or other facilities.
Finally, sensors were implemented with wireless communications allowing signals to be transmitted over longer distances and without expensive cable runs. Today, data can be passed through the organization's networks to any location - even the cloud. Full centralized monitoring and control are available. The engineers who used to have to walk or drive around to several locations for system monitoring can now use a central dashboard to do the same work and spend more of their time improving efficiencies, enhancing quality, and performing other related tasks.
WSN/WSAN implementations have some unique features that do not exist in all other wireless networks, and it is valuable to explore these features. We will consider the following unique or non-universal characteristics:
The first, and most obvious, distinction of WSNs is the use of sensors in the network devices. Some other network devices have sensors as well, such as GPS radios for location tracking and gyroscopes in cell phones, but those devices are not primarily designed to sense; the sensing functions are extra features or capabilities. WSN devices are there to sense as a priority.
Many, if not most, WSNs consist of self-forming and self-healing networks. Self-forming networks build the links and routes through the network without a central controller dictating the configuration. Self-healing networks make automatic adjustments to the configuration when required based on nodes going offline or moving to a new location.
Many sensor devices are low-rate data harvesting devices, which means that they do not require links of several Mbps to transmit the data they gather. Instead, most of them operate with the need for some number of Kbps, and many sit silent much of the time. However, some sensor devices are high-rate data harvesters and will require stable and fast connections. Very few would require more than 10 Mbps today. One exception, of course, would be video cameras with sensing capabilities (such as thermal cameras) that are also sending the video stream to a central monitoring system in real-time.
The devices in a WSN often have extreme energy management requirements. Many of these devices are battery operated, and the desire is that the batteries last months or preferably years and that the batteries be as small as possible. Therefore, strategic power management solutions must be implemented. Instead of batteries alone, some sensors will use solar power or other sources with energy harvesting. Solar power is seen frequently with stationary sensors in oil and gas, agriculture, traffic management, and other applications. In general, energy harvesting simply means that the device can use energy provided from interactions with the environment rather than through a power line. These sources may include solar, wind, temperature variations and electromagnetic fields.
An energy harvesting wireless sensor network (EH-WSN) is a WSN that uses interactions with the surrounding environment to harvest energy. Solar and wind energy harvesting are common examples. In most cases, such components are coupled with batteries for reserved energy storage for use when the environmental source is not available, such as the loss of sunlight or wind. Other energy harvesting devices are mechanical in nature and require human or actuator interaction. For example, a button can be pushed, and the pushing of the button provides sufficient energy to transmit a signal.
Local storage requirements often exist for WSNs. If the sensor stores data for a period before transmission, it is best to have a long-term storage method in the device for recoverability in power failure events. If such functionality is desired, the device must provide for permanent storage. Many wireless sensors buffer data only in volatile memory, which is cleared on the loss of power. Consideration must be given to local long-term storage when it is required.
Local processing requirements are also an essential consideration in a WSN. In typical WSN deployments, the sensors report back through the WSN to a sink, which reports to local or cloud-based systems. If data is transmitted from sensors in real-time or near-real-time and communications have low latency, remote processing may suffice. However, in the case of a sensor/actuator integrated device, it may be necessary for specific actuated actions to occur immediately, and this will typically require local processing.
To understand the possible need for local processing (beyond the processing which is available for wireless communications), consider a conveyor belt in a bottling factory. Assume that the bottled beverages reach an end location in the conveyor system and are lifted with a robotic arm to another location (a pick-and-place machine). Usually, the conveyor can continue moving at a defined pace that will ensure the robotic arm is ready for the next bottle. If a sensor/actuator detects a problem with the robotic arm, it must be able to stop the conveyor system (and any systems behind it in the bottling line) immediately. There is insufficient time to wait for communications back to a cloud server or even a server on the local network. By the time the response to stop comes back from the server, several bottles may have been damaged. This is just one simple example of the frequent need for local processing; a rough latency comparison follows below.
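To put rough numbers on this, the short sketch below compares how far the belt travels before a stop command can land. The belt speed and latency figures are assumptions chosen only for illustration, not measurements from any real system.

# Illustrative latency budget for the conveyor example above.
# The belt speed and latencies are assumed values, not vendor figures.
BELT_SPEED_M_PER_S = 0.5  # assumed conveyor speed

latencies_s = {
    'local processing on the device': 0.005,  # assumed ~5 ms
    'server on the local network': 0.050,     # assumed ~50 ms
    'cloud round trip': 0.250,                # assumed ~250 ms
}

for path, latency in latencies_s.items():
    travel_cm = BELT_SPEED_M_PER_S * latency * 100
    print(f'{path}: belt moves ~{travel_cm:.1f} cm before stopping')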
Local processing may be performed using some form of low-capability CPU or specially designed processors (Application-Specific Integrated Circuits (ASICs)). An ASIC is, effectively, a processor designed for a specific use case rather than for general use like a CPU. ASICs are seen in network switches and other networking devices, and they may be designed for sensors as well. Such limited use processors can accomplish the needed outcome with lower power consumption than a general-purpose processor.
Many sensor networks perform distributed correlation of the sensor data. They will share the sensed data with each other and use this shared information to correlate and analyze the data, to some extent, and report the findings of the correlated data rather than individual data points. This is somewhat unique to WSNs. Such data correlation will require time synchronization among the sensors so that the analysis is accurate. If the sensors are mobile rather than stationary, location detection may also be required. In more complex scenarios, time synchronization is used among all sensors, and the data is passed (through a device called the sink) to a central processing system on the local network or in the cloud. This same central processing system may receive location data in real-time from the sensors or as archived location data at a point-in-time for proper analysis against a floorplan or location map.
The phrase sensor fusion is used to reference the correlation of data from multiple sensor types so that knowledge may be acquired that would not be available from a single sensor type. For example, knowing that the temperature is rising in an area and the sun is shining on that area allows for the determination of cause.
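As a small illustration of sensor fusion, the sketch below correlates time-synchronized temperature and light readings to suggest a cause for warming. The readings and warming threshold are illustrative assumptions; the 32k lux floor for direct sunlight is discussed later in this chapter.

# Minimal sensor-fusion sketch: correlate temperature and light readings.
# All readings are assumed, time-synchronized (seconds, value) pairs.
temp_c = [(0, 21.0), (60, 22.5), (120, 24.1)]            # degrees Celsius
light_lux = [(0, 400.0), (60, 41000.0), (120, 43500.0)]  # lux

DIRECT_SUN_LUX = 32000.0  # approximate floor for direct sunlight

def fuse(temps, lights):
    rising = temps[-1][1] > temps[0][1] + 1.0  # more than 1 degree of warming
    sunny = lights[-1][1] >= DIRECT_SUN_LUX
    if rising and sunny:
        return 'temperature rising; direct sunlight is the likely cause'
    if rising:
        return 'temperature rising without sunlight; check the equipment'
    return 'temperature stable'

print(fuse(temp_c, light_lux))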
Before we explore the components that make up a WSN and the architectures available for implementation, we will consider the typical applications or use cases of these solutions. They can be used in the consumer space, industrial space, enterprise space, and even government spaces. The application is practically universal.
Home and office solutions are often quite similar with the exception of component quality. Home, or consumer-grade, devices may be implemented using less-expensive and lower quality components to keep the price down. Office, or enterprise-grade, devices use improved quality components, much like the more expensive and lower noise figure LNAs discussed in Chapter 5. However, small business and small-office-home-office (SOHO) implementations are known for using consumer-grade devices quite frequently.
Regardless of the price, quality, or support associated with the product, similar sensor network solutions may be used in the home and the office as they are both locations where humans spend significant amounts of time. These solutions include:
As an example, consider a video monitoring system that uses thermal sensors in the cameras to detect heat signatures and machine vision to match these signatures to human shapes, animal shapes, and more. The system can respond differently if a dog walks onto your front porch than if a human walks onto your front porch. Dogs may not know better, but you may want to know when a human, assumed to be culturally aware as to the norms of society, has walked onto your front porch. Moreover, you may wish to know how long that human is in front of the door and be notified if it is beyond the usual times of a delivery driver or a solicitor getting no answer from their knock or ringing of the doorbell. As you can see, such systems can be invaluable, and they can be built with various sensors interconnected and reporting to a central system for intelligent analysis.
For industrial operations, WSNs can provide for cost reduction, processing efficiency improvements, reduced error rates, and more. Typical benefits of industrial WSNs include:
In health care, WSNs are deployed for doctor and nurse location tracking, medical device location tracking, environmental monitoring, drug administration monitoring, patient vital-sign monitoring, and more. The sensors may be fixed throughout the facility, attached to devices, or wearable for patient monitoring and staff tracking. Today, ingestible sensors are even being used such that a patient may swallow a sensor that then wirelessly reports back its findings. The CWISA need not understand the medical facts related to these components, but it is important to understand how WSNs are used in health care and some of the unique applications.
Non-wearable health IoT or sensor devices may include:
Wearable health IoT or sensor devices may include:
In Smart Agro, WSNs are playing a progressively more active role. These roles include animal monitoring and environmental monitoring. Animal monitoring may be implemented through motion detection and video monitoring with or without thermal imaging. It may also be implemented using RFID or implanted sensors to track location. In less common scenarios, health monitors similar to those used in human health care may be used with animals as well. This is more common in research projects but has been used in animal farming as well.
Virtual fencing is another common solution. Such systems are commonly seen in the consumer market, known as invisible fencing. The animal will receive an acoustic warning, in many cases, when nearing the boundary, which is followed by a light electrical shock if the animal proceeds outside the boundary. These systems are also used in livestock farming in various parts of the world. The sensor detects the electrical signal in a buried or exposed wire and begins the warning process. More advanced sensors can include location tracking capabilities so that the farmer/owner can track the location of the animal.
Environmental monitoring may be related to animal farming but is also related to produce farming. Sensors may be used to track air temperature, wind (anemometer), rain (pluviometers), soil moisture and mineral levels, insect presence, and more. In plant and produce applications, the administrator must consider the impact of foliage. If the WSN is deployed before plant growth, and the introduction of foliage and plant stems was not considered, the network may fail to operate afterward. Therefore, the administrator must implement the appropriate distribution of sensors to allow for changes in SNR, sometimes 10-12 dB or more, after plant growth; a simple margin check is sketched below. Thankfully, many such sensors require less SNR than, say, an 802.11 wireless laptop used to watch streaming videos from the Internet.
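A minimal margin check, assuming illustrative SNR figures for a generic sensor link:

# Link-margin check for the foliage-growth scenario above.
# The SNR values are assumptions for a generic sensor link.
current_snr_db = 18.0    # assumed SNR measured before plant growth
required_snr_db = 5.0    # assumed minimum SNR the sensor link needs
foliage_loss_db = 12.0   # upper end of the seasonal change noted above

post_growth_snr = current_snr_db - foliage_loss_db
if post_growth_snr >= required_snr_db:
    print(f'OK: {post_growth_snr:.0f} dB SNR remains after plant growth')
else:
    print(f'Redesign needed: only {post_growth_snr:.0f} dB SNR would remain')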
When most people think of transportation and sensors, they immediately think of speed detection. This occurrence is likely because of the time (or times) when their speed was detected by a law enforcement officer, and they received a fine. It certainly makes it a memorable moment. However, many other areas provide an opportunity for the application of sensors and WSNs in transportation, including:
Before discussing these examples, and many more could be given, it is important to realize that any video camera that is wirelessly connected can be categorized as a wireless sensor, assuming it can be provided with intelligence internally or through backend processing. Machine vision can be used to automatically detect objects, people, and events. It can be integrated with alert and action systems for automatic response. Granted, it is not perfect today, but it is seeing continual improvements in accuracy and, soon, we will be able to automatically detect an accident on a highway and send the proper emergency response teams instantly with better than 99.9% accuracy. This will be because sensors in the vehicles will communicate with the WSN along the highway and other sensors participating in the WSN will provide input as well. This sensor fusion will result in very accurate assessments of accidents and other events on the highways and roadways. Now, onto the discussion of these applications in transportation.
Structural health monitoring can be applied to much more than transportation, such as commercial buildings and homes, but it is often a reference to bridges and railways. In a tragic event on August 1, 2007, the I-35W bridge spanning the Mississippi River in Minneapolis collapsed, killing 13 people and injuring many more. When the bridge was reconstructed, it was implemented with sensors to monitor the structural integrity and provide early warning of problems.
Wired structural health monitoring was originally used in such implementations, but wireless sensors introduce new benefits. They do not require running wires along or through the bridge structure, and they can operate for years on battery power or be provisioned with local energy harvesting. For structural health monitoring, a special kind of sensor known as a strain gauge is likely to be used. Vibration gauges are also utilized in these environments. To conserve energy, such sensors will use sensor-triggered and radio-triggered wake-up events. The majority of the device can go into a deep sleep mode and be triggered to wake when the sensor detects vibration or strain beyond a threshold or when the radio detects a request from another radio.
Traffic flow monitoring and management are also implemented in transportation. Sensors can detect traffic by motion-sensing today and, in the future, they will be able to detect traffic based on beacons or other signals from the vehicles. With this information, automatic traffic light adjustments can be made as well as future planning for road construction to enhance traffic flows.
Autonomous vehicles and assisted driving are in their infancy; however, this is an area where we will continue to see growth. To function effectively, the vehicles must have intelligent sensors in them, which will communicate with each other through wired and wireless networks within the vehicle. Also, they will need to communicate with external roadside networks so that they can interact with the environment more fully. While this concept is young, it will be an area to watch as it experiences continued evolution.
Environmental monitoring solutions may monitor indoor or outdoor environments or both. For outdoor monitoring, solutions include:
For indoor solutions, Indoor Air Quality (IAQ) is an important factor in facilities management. A WSN will allow for the monitoring of IAQ and provide for automatic responses, such as increasing external ventilation, raising or lowering temperatures, increasing or decreasing humidity, and more.
In this section, we have explored just a few of the application areas for sensors and WSNs. We will see continued expansion into new areas in the coming years and the market for both the hardware and individuals who understand how to network them effectively will grow exponentially.
Let's begin our discussion of wireless sensors by first discussing the general concept of a sensor. In industrial monitoring, automation, and control, sensors have been used for decades. They were not equipped with wired or wireless communication capabilities in the beginning. These sensors were and are part of industrial instrumentation and control (IIC) solutions.
A measurement instrument is a component that can detect variations in a process or measurable value. For example, it can measure pressure in a liquid flow process, air pressure in tires, the temperature in a container or the environment, and more. Traditional measurement instruments indeed used sensors, but they were not connected to any remote monitoring solution. Instead, they were connected to gauges and eventually electrical displays (such as LEDs or digital displays). Next, they were connected through wires to remote display consoles and now they can connect through wireless links for remote monitoring and control. Figure 8.1 illustrates the concept of a measurement instrument.
Traditional measurement instruments, whether attached to machinery or other components or portable hand-held devices, required the individual to go to the physical location to view the indication provided. The indication is the momentary reading from the sensor provided through the gauge or LED/LCD display.
For such localized measurement instruments, four primary components provided their functionality:
Figure 8.2 illustrates these components and how they work together. The sensor detects states and passes this to the amplifier as a signal. The amplifier increases the state value as required. The conditioner ensures proper structure or reformats to that structure for display. The display shows the value as it should be understood by the human viewer.
The next step with measurement instruments was to add a recorder to the solution. This allowed the user to view historical values over some time and to possibly even connect a device to the instrument and transfer the values. Figure 8.3 illustrates this added element to the measurement instrument.
Finally, these traditional sensors were integrated with communication relays allowing the detected states to be transmitted to a control center or control room where they could be centrally monitored. These communications were originally transmitted across proprietary wired links and eventually came to use Ethernet links and now have evolved to use wireless connections, which results in the modern WSN. Figure 8.4 illustrates this concept.
With an understanding of traditional sensors, it is a simple leap to wireless sensors. Wireless sensors are traditional sensors, with new types being developed all the time, with wireless communications capabilities. However, because the sensors can communicate wirelessly, new capabilities are added that were more challenging with wired sensors:
Mobile sensors: Special sensors that can be placed on mobile units for tracking and monitoring. Because they are wireless, as long as they move within the range of the overall WSN, they can continue to transmit metrics. If they leave the range of the WSN, they can transmit recorded metrics upon connection.
Remote area sensors: Many areas are in difficult locations from a wired cabling perspective. These can be challenging areas in buildings or outdoors. Wireless sensors with batteries and/or energy harvesting can be placed in these areas.
In-ground sensors: As long as they are not buried too deep, wireless sensors can be placed beneath the Earth's surface for monitoring and reporting. These are sometimes called Wireless Underground Sensor Networks (WUSNs). Of course, depending on the characteristics of the soil (percent sand, silt, and clay) and the moisture levels, propagation becomes a challenge. At 2.4 GHz, burial depths of less than 0.5 meters are usually required for effective in-ground communications. Lower frequencies provide greater range, just as they do in free space; at 400-500 MHz, a range of 1 to 1.5 meters is often acceptable. WUSNs require a significantly saturated network of sensors to cover a large area. Depending on the soil, a sub-1 GHz signal will attenuate by 60-120 dB per meter (see the depth sketch after this list).
In-structure sensors: These are sensors embedded in structures. They may be embedded with cable runs for energy harvesting, or they may be embedded in accessible cavities so that batteries can be replaced every 3 or more years as needed. Those that are battery-only have a significant advantage over wired sensors; however, they have the obvious disadvantage of requiring charged batteries for operation.
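Using the attenuation range just cited for in-ground sensors, a rough burial-depth estimate can be computed from a link budget. The transmit power, sensitivity, and above-ground losses below are assumptions for a generic sub-1 GHz sensor, not figures from any specific product.

# Rough maximum burial depth from the 60-120 dB/m soil attenuation above.
# The link budget values are assumptions for a generic sub-1 GHz sensor.
tx_power_dbm = 14.0          # assumed transmit power
rx_sensitivity_dbm = -120.0  # assumed receiver sensitivity
other_losses_db = 20.0       # assumed antenna and above-ground path losses

budget_db = tx_power_dbm - rx_sensitivity_dbm - other_losses_db
for soil_atten_db_per_m in (60.0, 120.0):
    max_depth_m = budget_db / soil_atten_db_per_m
    print(f'{soil_atten_db_per_m:.0f} dB/m soil: max depth ~{max_depth_m:.1f} m')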
Actuators can aim antennas, reposition cameras, or even refocus or move the sensors themselves. In addition, they can interact with other physical objects. For example, an actuator coupled with a sensor may detect an object has been placed on a conveyor belt. Next, the actuator element is triggered to push a button that starts the conveyor belt. Then the actuator waits a period and pushes the button again to stop the conveyor belt. Of course, the actuator could use electrical signals for the same process, but the ability to use electromechanical elements, such as servo motors, to push a button means that legacy equipment can receive input actions from the modern sensor/actuator.
Figure 8.5 shows the common basic model for an IoT-based sensor/actuator. In some implementations, the sensor will exist as one device and the actuator as another. In other implementations, the sensor and actuator will be integrated into a single device. New, smart machinery is being developed with sensors and actuators integrated, but many legacy machines can be retrofitted with add-on sensors and actuators as well.
Whether implementing a WSN or a WSAN, it is useful to understand the various types of sensors that are available. In the next section, you will explore many sensor types and gain a basic understanding of how they function. This understanding will help you in selecting the appropriate sensors or, if the sensors are selected by a facility engineer or another person, it will help you assist them in selecting appropriate locations and ensuring proper signal coverage for connectivity and communications on the network. If a sensor is implemented without the ability to transmit and receive data effectively, it has become no more than a traditional measurement instrument.
Hundreds of sensor types now exist, with several variations among many of them, resulting in thousands of options. This statement refers to the sensors alone. When you add in the networking functionality of the sensors, the variations can easily pass into the tens of thousands. However, in most cases, selecting the appropriate sensor comes down to the following three questions:
If the answer to these three questions is yes for several sensors, then the decision comes down to support, cost, and enhanced features. For the next few pages, several sensor types will be described to help you understand the capabilities commonly provided.
Temperature Sensors
Temperature sensors are typically implemented with thermistors, thermocouples, resistance temperature detectors (RTDs), or infrared. Thermocouples measure temperature changes over a wide range, when that is required, but are not as accurate as thermistors and RTDs. RTDs have a moderate range in temperature changes and are more stable than thermocouples but are also more expensive. Thermistors are the most accurate but have a narrower temperature range than even RTDs and are subject to self-heating (as are RTDs). Infrared sensors have the advantage of measuring temperature without surface contact but are not as accurate; however, with the use of fiber optic cables, they can measure temperatures outside of the line of sight.
Ultimately, the chosen sensor type will depend on temperature ranges, required accuracy, and the ruggedized design of the solution. Table 8.1 provides a comparison of the characteristics of these sensor types.
Sensor Type | Temperature Range | Accuracy
---|---|---
Thermocouple | -250 to 1250 Celsius | ±1 to ±2.2 Celsius
RTD | -200 to 850 Celsius | ±0.5 to ±1 Celsius
Thermistor | 0 to 200 Celsius | ±0.1 to ±0.5 Celsius
Infrared | -50 to 600 Celsius | ±2 Celsius or more
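Table 8.1 also lends itself to a quick programmatic filter. The sketch below encodes the table and returns the sensor types whose range and worst-case accuracy satisfy a given requirement; the helper function itself is only an illustration, not part of any vendor toolkit.

# Table 8.1 encoded as data, filtered by range and worst-case accuracy.
SENSOR_TYPES = {
    'Thermocouple': {'range_c': (-250, 1250), 'worst_accuracy_c': 2.2},
    'RTD': {'range_c': (-200, 850), 'worst_accuracy_c': 1.0},
    'Thermistor': {'range_c': (0, 200), 'worst_accuracy_c': 0.5},
    'Infrared': {'range_c': (-50, 600), 'worst_accuracy_c': 2.0},  # or more
}

def candidates(low_c, high_c, max_error_c):
    return [name for name, s in SENSOR_TYPES.items()
            if s['range_c'][0] <= low_c
            and s['range_c'][1] >= high_c
            and s['worst_accuracy_c'] <= max_error_c]

# Example: cover -10 to 85 Celsius within 1 degree -> ['RTD']
print(candidates(-10, 85, 1.0))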
Optical Sensors
Optical sensors either detect changes in ambient light in the surrounding area or they detect optical beam crossing. Most residential garage door openers use optical beam crossing sensors. When the beam is interrupted, it sends a signal that stops the lowering of the garage door to prevent injury to the person crossing under it.
Ambient light sensors detect the current level of light in the area and can be used to signal changes in temperature control, operational machinery, and more based on the light levels. Figure 8.6 shows the ncd.io long-range wireless light sensor. This particular sensor senses light in the range from 0 to 65k lux at a resolution of 1 lux. It offers a range of up to 28 miles or 45 kilometers for wireless communications and, depending on configuration, can last up to 10 years on 2 AA batteries. It is based on the proprietary DigiMesh (XBee) protocol, which is loosely based on 802.15.4 and can work with the various components manufactured and sold at Digi.com. DigiMesh is similar to Zigbee, except that it is a proprietary protocol and the only devices that can participate in a DigiMesh network are those with embedded modules from Digi. Some modules are available from Digi that can be flashed to function as either Zigbee or XBee (DigiMesh) modules. Flashing is the process of loading a different firmware (software for the hardware) onto the module.
Lux is a measurement of luminous flux (the perceived power of light by the human eye) per unit area. One lux is equal to one lumen per square meter. Without going too deep, here is the critical thing to know: a moonless night provides ambient light of about 0.0001 lux, and direct sunlight on a clear day provides somewhere between 32k and 100k lux. Therefore, an optical sensor that can detect between 0 and 65k lux (0.0001 lux falls within that range) can easily tell the difference between night and day. Some optical sensors have an upper range closer to 100k lux.
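Those figures translate directly into a simple day/night classifier. The thresholds below come straight from the lux values above; the exact cutoffs are illustrative.

# Classifying ambient light using the lux figures discussed above.
def light_condition(lux):
    if lux >= 32000:    # direct sunlight begins around 32k lux
        return 'direct sunlight'
    if lux <= 0.001:    # a moonless night is roughly 0.0001 lux
        return 'night'
    return 'indoor or overcast light'

for reading in (0.0001, 450, 65000):
    print(reading, '->', light_condition(reading))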
Proximity Sensors
Sensors that can detect range or the loss of contact between two items are called proximity sensors. They include sensors that detect open and closed windows and doors as well as ultrasonic range detectors. Door and window sensors typically use magnets to detect the state of the enclosure. A wireless door/window sensor would trigger an alert for an open or close event depending on the configuration of the system. This alert would be communicated with the WSN. Dry contact sensors also exist, which simply detect whether a contact exists (shorted) or does not exist (opened) between two wires.
The ultrasonic range detectors work by transmitting sound waves above the level of human hearing (high frequency) and measuring the time it takes for them to be reflected back. These devices can detect the presence of new items (something has moved in front of the sensor changing the time of reflection) and are also used to detect levels in containers (such as large silos or storage tankers). They can be used for many scenarios, including:
Figure 8.7 shows the Radio Bridge Armored Sensor designed for outdoor or industrial use cases. It provides a 5- to 10-year battery life and can function on Sigfox, LoRa/LoRaWAN, and NB-IoT networks.
Sigfox is a subscriber-based network existing in many countries around the world. It is a low-bandwidth network used by many (when coverage exists in their area) to implement IoT solutions and is currently most popular in Europe, though it has several coverage areas in South America, Canada, and Australia at the time of writing.
LoRa/LoRaWAN is covered in chapter 7.
Movement Sensors
Movement or motion sensors detect movement in the target area or movement by the monitored machinery or component. These sensors can be acceleration-based, tilt-based, or use several methods of motion detection. Movement by the monitored machinery is typically acceleration-based or tilt-based. Motion detection may use Passive Infrared (PIR), Microwave (MW), ultrasonic, or area reflective methods.
Acceleration-based sensors use an internal accelerometer to detect movement. Tilt-based sensors use accelerometers or gyroscopes to detect movement from vertical to horizontal and vice versa. These methods can be used to determine if an attached device or item is stationary or moving. Alerts can be sent through the WSN for monitoring of movement or notification to the appropriate personnel if the item is not supposed to be moving. Coupled with a tracking solution, such sensors can aid in the location of a machine and track the use of that machine (based on movement) over time.
For motion detection, PIR is the most widely used in home or consumer devices and is very popular in enterprise and business settings as well. It detects body heat or infrared energy. The MW sensors work by transmitting microwaves and measuring reflection back from objects. Ultrasonic sensors, as we've seen, use high-frequency sound waves and detect movement through variations in the reflected round-trip times. Area reflective sensors also use reflection response times, based on infrared emissions from LEDs. As you can see, much of motion sensing is about reflections of various waves, including sound and electromagnetic waves.
Some motion sensors will combine multiple types, such as infrared and MW, in order to limit false positives. When both sensors are tripped, it is far more likely to be a true positive for the type of motion being detected.
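For the reflection-based methods, ultrasonic sensing in particular, the underlying math is a simple time-of-flight calculation. The echo time below is an illustrative value, not from a specific sensor.

# Time-of-flight math behind ultrasonic ranging and level detection.
SPEED_OF_SOUND_M_PER_S = 343.0  # in air at about 20 degrees Celsius

def distance_m(round_trip_s):
    # The sound travels out and back, so halve the round trip.
    return SPEED_OF_SOUND_M_PER_S * round_trip_s / 2

# An assumed ~11.7 ms echo puts the reflecting surface about 2 m away.
print(f'{distance_m(0.0117):.2f} m')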
Liquid Sensors
When you need to detect water, fuel, and other liquids, various liquid sensors may be used. One such sensor is the water rope sensor. Such sensors can be placed with the rope extending into a container or area. If any part of the rope senses a liquid, an alert is triggered. They work well for many scenarios, including:
Water rope sensors work by using two sensing wires wound around the "rope" material. When the material gets wet, it closes the circuit, indicating that water or another fluid has been detected.
An additional liquid sensor type is a spot leak detector. These devices have probes that extend down from them and if water rises to the level of the probes it closes the circuit and results in fluid detection.
In the category of liquid sensors, you might include flow sensors. These sensors are often used to indicate leaks in pipes or hoses due to excess flow. They may also have freeze sensors in them to detect temperatures below the freezing point for a given fluid.
Air Sensors
Air sensors monitor the air for air quality purposes, such as CO detection, humidity levels, and the detection of other contaminants. The Edimax AI-2002W sensor shown in Figure 8.8 is a 7-in-1 air quality sensor. It detects the following:
The PM2.5 and PM10 sensors detect airborne particles. The CO2 sensor detects carbon dioxide levels. Total Volatile Organic Compound (TVOC) sensors detect organic chemicals from paints, cleaning supplies, and other possibly harmful sources. HCHO is a formaldehyde sensor; formaldehyde can diffuse into the air from residues left by the manufacturing of furniture and other items. Clearly, this sensor can detect many possible detriments to air quality, while also tracking temperature and humidity levels.
The AI-2002W connects to your Wi-Fi network and can be monitored and controlled through the Edimax cloud service.
Strain Gauge Sensors
Often used in structural health monitoring, these sensors detect strain at very low levels. Figure 8.9 shows the RESENSYS SenSpot strain gauge sensor, which is simply fastened to the structure with adhesive and runs for up to ten years on the batteries.
Strain is the amount of deformation a material experiences under an applied force. It is measured as a ratio comparing the current state to the original state; specifically, it is the change in length of a material compared to its original length. Axial strain measures how a material lengthens or shortens along its axis under tension or compression (like a column bearing a load). Bending strain measures the stretching of a material's surface as it flexes under a load (like the bending of a stick when you press down on it). These are the two most common measurements.
When metallic structures are placed under mechanical strain, they exhibit subtle changes in their electrical resistance. Strain gauges convert this change in resistance into strain readings, typically expressed in microstrains (millionths of the length ratio), and this is the basic building block of a strain gauge sensor.
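In code, converting a gauge's resistance change into microstrain follows directly from that relationship. The gauge factor of 2 is typical for metallic foil gauges, and the 350-ohm nominal resistance is an assumed common value; both are illustrative here.

# Converting a resistance change to microstrain, per the relationship above.
GAUGE_FACTOR = 2.0               # typical for metallic foil strain gauges
NOMINAL_RESISTANCE_OHMS = 350.0  # assumed common gauge resistance

def microstrain(delta_r_ohms):
    # strain = (dR/R) / gauge factor; microstrain scales that by 1e6
    return (delta_r_ohms / NOMINAL_RESISTANCE_OHMS) / GAUGE_FACTOR * 1e6

# A 0.07-ohm change corresponds to about 100 microstrain.
print(f'{microstrain(0.07):.0f} microstrain')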
Sensor Connector Nodes
Sensor connector nodes are used to connect various sensors to the WSN. They provide standard analog interfaces for common sensor types. The LORD SG-Link-200 is shown in Figure 8.10. Various sensor types can be connected with a device like the SG-Link-200 and participate in the WSN. This particular connector node also includes an onboard temperature sensor so that you can harvest temperature information as well as the information provided by the connected input sensor. The device uses an AMPSEAL 16 14-pin connector to connect external sensors and operates using 802.15.4 for wireless communications in 2.4 GHz.
BEYOND THE EXAM: I Found a Mouse in My Salad!
In Australia, customers were buying loose leaf spinach for salads and other delicious recipes. The only problem was that mice like these spinach leaves too. Several customers reported finding mice in the sealed plastic bags of spinach leaves. Of course, this was a big problem for the packager, and so they sought a solution.
The solution was to implement a sensor to detect mice as the spinach leaves passed through the conveyor system. The question was, how do you accomplish such a task? The answer was found in the use of a video camera with a 90-degree viewing angle monitoring the conveyor as the spinach leaves passed by. The selected camera was the FLIR A65, which is a thermal imaging camera. Using machine vision, the system was able to detect thermal patterns in the spinach leaves that were above the normal temperature of spinach. When such items were detected, the system triggered an alert so that a handler could locate the apparently living entity (like a mouse) and remove it from the conveyor.
This is just one example of a creative solution using sensors. Of course, this could be taken further such that the monitoring system used a wireless thermal imaging camera to transmit the video feedback to a remote central monitoring system, but whether localized, wired or wireless, you can see that sensors can be implemented in creative ways to solve real-world problems.
-Tom
WSN nodes are often called motes, a name inherited from the first WSN nodes created at the University of California, Berkeley, where early mote platforms carried names such as Rene and Mica. They are more often simply called modules, nodes, sensors, or wireless sensors, but you will encounter the term mote from time to time. Additionally, OpenMote is a device that provides open-source solutions for building IoT end devices.
This section has provided an overview of some of the most common sensor types. With this understanding, you can better comprehend the value introduced by placing these sensors onto a WSN. Aggregating disparate data from multiple sensors to a central dashboard can provide valuable insights into activities, incidents, processes, and events within a facility or outdoor environment.
At the highest layer of abstraction, we can consider WSNs from five basic architectural perspectives:
Small-, medium-, large-, and very large-scale WSNs: The size of the WSN varies depending on several factors such as the sensors' characteristics, the Return on Investment (ROI), and the user's requirements. In practice, the number of sensor nodes in a WSN may be in the order of tens, hundreds, thousands, or even tens of thousands.
Homogeneous versus heterogeneous WSNs: A WSN may be homogeneous or heterogeneous. A WSN is homogeneous if all sensors of the network have the same capabilities (sensing, processing, communication, etc.). A heterogeneous WSN consists of sensors endowed with different capacities, which may serve different applications. Typically, some sensors will have more resources available, such as processing and energy, than the rest of the sensors.
Stationary, mobile, and hybrid WSNs: A WSN may be stationary, mobile, or hybrid. A stationary WSN is a network consisting of stationary sensor nodes that cannot move once deployed. With the advances in mobile devices, some sensors are able to move on their own; this is generally achieved by embedding the sensors on mobile platforms. A mobile WSN comprises only mobile sensors, while a hybrid WSN consists of both stationary and mobile sensors.
Flat versus hierarchical WSNs: In flat WSNs, all the sensor nodes are assumed to be homogeneous and play the same role. However, in hierarchical WSNs, a sensor node can be dedicated to a particular special function. For instance, a sensor could be designated as a cluster head, in charge of communicating with adjacent clusters.
Single-hop versus multi-hop WSNs: In a single-hop WSN, sensor nodes transmit their data directly to the sink. In a multi-hop WSN, multiple relaying sensor nodes exist between sensors and sinks. A multi-hop WSN can be flat or hierarchical.
The remainder of this section will focus on conceptual and actual architectures used in WSNs and WSANs.
The Industrial Internet Consortium has developed the IIoT Architecture Framework as a model for successful industrial IoT implementations. It explains the components of the network and how they interact and important considerations in their implementation.
This framework document is designed "to aid in the development, documentation and communication of the IIAF. The reference architecture uses a common vocabulary and a standard-based framework to describe business, usage, functional and implementation viewpoints that it has defined." Further, it states that, "A reference architecture provides guidance for the development of system, solution and application architectures. It provides common and consistent definitions for the system of interest, its decompositions and design patterns, and a common vocabulary with which to discuss the specification of implementations and compare options." (Industrial Internet of Things Volume G1: Reference Architecture v1.9, 2019)
The point of a reference architecture for IIoT is similar to that of the OSI Reference Model. It provides a conceptual way of thinking about the technologies and a shared language that professionals can use to communicate. This is much needed in the high variety IoT space, and this reference model can be beneficial beyond the scope of industrial IoT into health, retail, and agricultural IoT as well, though some systems and concepts may not universally apply.
The introduction of IoT into the industrial world brings an intersection between Information Technology (IT) and Operational Technology (OT). IT has been the realm of programmers, systems administrators, and network engineers. OT has been the realm of mechanical engineers, chemical engineers, and other control engineers. With IoT, the two must work together to present the analog nature of OT in the digital world of IT and to convert the digital world of IT into the analog world of OT. For this reason, it is essential to have teams, including OT engineers and IT engineers, when deploying IIoT or WSNs.
At the foundation of the IIC IIoT framework (called the Industrial Internet Architecture Framework (IIAF) for short) are the physical systems: the actual sensors and actuators participating in the WSN and the larger network(s) supporting the WSN.
The Functional Domain includes sensing, actuating, and control. However, within the Functional Domain, data is passed upwards to become information that can be acted upon by operations or processed by applications based on business requirements.
Within the Functional Domain is the Control Domain (or sub-domain). It can be divided into several components, and this breakdown is shown in Figure 8.12.
The components of the control domain include:
These components allow for communications, ultimately, with external entities and a level of abstraction so that disparate systems can interact.
Above the Control Domain sit the Operations, Information, and Application Domains within the Functional Domain. You can think of the Control Domain as management of operations within a single plant or facility and the Operations Domain as management of multiple facilities or an entire organization at all locations. It provides top-level policies and controls to be enforced downwards.
The Information Domain is about converting data to information. It is a primary intersection point between OT and IT. According to the IIAF, the Information Domain is "a functional domain for managing and processing data. It represents the collection of functions for gathering data from various domains, most significantly from the control domain, and transforming, persisting, and modeling or analyzing those data to acquire high-level intelligence about the overall system. The data collection and analysis functions in this domain are complementary to those implemented in the control domain. In the control domain, these functions participate directly in the immediate control of the physical systems whereas in the information domain they are for aiding decision-making, optimization of system-wide operations and improving the system models over the long term."
The Application Domain is the functional domain for the implementation of application logic. The logic implemented here should not be fine-grained, such as that at the Control Domain, but focused on realizing business goals or functions.
Finally, within the functional domains is the Business Domain. Here is where you would find Enterprise Resource Planning (ERP), Customer Relationship Management (CRM), Product Lifecycle Management (PLM), and Human Resource Management (HRM). For example, ERP may be required to ensure that hardware is ordered for a new machine that will be implemented on a factory floor. Hence the need for integration of business units with the IT/OT operations.
Given that the functional domains comprise the foundation of the IIAF for the wireless solutions administrator, the remaining components of the architecture framework will not be evaluated in detail here. The document can be freely downloaded from the IIC website at www.iiconsortium.org in the Technical and Whitepapers section of the site.
However, it is useful to summarize the remaining elements:
Trustworthiness: This element is about ensuring that the WSN/IIoT implementation is as robust and capable as traditional OT systems. It includes the assurance of safety, security, resilience, reliability, and privacy. The various architectural models actually provided by vendor solutions (whether Zigbee, 802.15.4, DigiMesh, NB-IoT or any other) should be implemented with trustworthiness in mind. As we go forward into the fourth industrial revolution, we do not want to lose all the gains we have achieved in operational efficiency and effectiveness over the past fifty years.
Scalability: This element is provided by nature with many WSN solutions. As they are based on mesh and peer-to-peer architectures, the result is that scalability is available in the WSN. However, the wireless solutions administrator should be careful to ensure that scalability is also provided by the higher layers, such as in the operations, information, application, and business domains.
Crosscutting Functions: These components are those that may be required across functional domains while providing trustworthiness and scalability. Connectivity is a given requirement, but the CWISA should understand how this connectivity occurs in the selected solution. Distributed Data Management allows for the collected data to be made available where it is needed and when it is needed. Industrial Analytics allows for centralized or distributed reporting and analysis of operations. Finally, Intelligent & Resilient Control ensures that control functions are available to each functional domain as required.
A common architectural model, also specified within IIAF, that is a practical implementation model is the three-tier architecture shown in Figure 8.13. This model includes a proximity network, an access network, and the service network.
In Figure 8.13, the service network spans the platform tier and the enterprise tier. The access network spans the edge tier and the platform tier. The edge tier, and more precisely the proximity network, can be thought of as the actual WSN in this model. The access and service networks (the platform and enterprise tiers) provide the supporting operations, monitoring, analysis, and control. Notice that the sensors and actuators exist in the edge tier, as do the gateways or coordinators that provide access to the supporting services.
It is also worth noting that the enterprise tier, shown in Figure 8.13, is often provided entirely within cloud services today. It may be that an organization runs its own enterprise tier in a cloud service such as Amazon Web Services (AWS) or Microsoft Azure, or the implementation may depend on the WSN vendor's cloud service. Many vendors now provide such services so that the implementation of servers and primary application code is already provided through their cloud. We have seen much movement towards the cloud in the enterprise wireless LAN market, and we now see this as well in the WSN market.
It is important to consider the impact of cloud-based WSN management and this is illustrated in Figure 8.14. In the image, the cloud would be synonymous with the enterprise tier, the on-premises component would be synonymous with the platform tier, and the on-sensor component would be synonymous with the edge tier.
An important revelation from Figure 8.14 is that the cloud can provide much more processing power, but it also results in slower control speed due to latency in communications with the cloud. As a wireless solutions administrator, you may be called upon to evaluate a cloud-based solution and you should determine what is provided in the cloud, on-premises, and on-sensor in the decision-making process.
Processing at the sensors is far less capable but can give near-instant responses and is particularly beneficial when actuators are involved. However, the cloud services can provide much more processing power for machine learning and AI so that long-term decision making is more efficient and accurate. Generally speaking, absolute decisions are made closer to the sensors, and business logic decisions or variable decisions can be made on-premises or in the cloud.
The remaining content in this section will describe two general architectures. These include the general architectures of a hierarchical architecture and a mesh architecture. For specifics on Zigbee and LoRaWAN architectures, see Chapter 7.
In a hierarchical architecture, the wireless sensors connect to a specific node that is either the gateway onto the network at large or is a router providing connectivity to the gateway. A simple hierarchical architecture connects all sensors to a single gateway and is sufficient for WSNs in smaller facilities or small-scale outdoor deployments. A complex hierarchical architecture may use clusters with all sensors in a cluster connecting to a cluster head, which is then connected to the gateway. Other terms for gateway include base station, coordinator, hub, and controller.
The simple hierarchical architecture is shown in Figure 8.14. The complex hierarchical architecture is shown in Figure 8.15. Variations on these architectures exist, and you should check your vendor literature to see the options available for building a WSN with their solution. Many vendors support mesh or hierarchical implementations depending on your requirements.
The terminology used by different vendors will vary, but the concepts remain the same. A hierarchical architecture generally depends on all sensors connecting to a specific kind of device, which is, in turn, either connected to the rest of the business network or connected to a gateway that is connected to the network. Of course, the gateway may connect directly to a WAN solution, such as a low-power WAN (LPWAN) for connectivity back to the enterprise network or a cloud service provider.
Unlike the hierarchical architecture, in a mesh architecture, multiple nodes can use multiple routes to reach the gateway. Mesh architectures support full mesh and partial mesh implementations. Some vendors support both full and partial models, and other vendors only support one model. Figure 8.16 illustrates a full mesh and a partial mesh architecture. The full mesh is on the left, and the partial mesh is on the right.
In a full mesh architecture, every node connects to every other node. In a partial mesh architecture, some nodes have only one connection to the mesh, others have multiple, but every node is not connected with every node.
Planning a WSN is a similar process to any other wireless solution and includes defining requirements and constraints, selecting appropriate solutions, and planning for ongoing support (automation, integration, monitoring, and management). This final section of the chapter will outline the considerations for WSNs in these areas. Chapters 1 through 3 provided a more in-depth overview of these considerations as they apply to all wireless solutions.
To determine the basic requirements for your WSN, consider the following questions:
In specific environments, additional questions may need to be answered, but this list should be addressed in nearly all WSN planning projects.
To determine constraints related to your WSN, consider the following questions:
Supervisory Control and Data Acquisition (SCADA) and Distributed Control System (DCS) are existing OT solutions for monitoring, operating, and controlling many systems in industrial and other organizations. They are used in manufacturing, oil and gas, and even transportation systems. If they exist, they may need to be considered in your WSN deployment.
In specific environments, additional questions may need to be answered, but this list should be addressed in most WSN planning projects.
Once requirements and constraints are defined (a process that may involve use case development), the next step is to select a WSN solution. This may involve assembling wireless sensors from different vendors that support a shared communication protocol and management system, or you may be able to find a single vendor that can provide them all.
As an example, Monnit (www.monnit.com) provides wireless sensors that operate at 433, 868, and 900 MHz, as well as gateways for Ethernet, LTE, 3G, and 2G. On the wireless link, Monnit gateways and sensors use the proprietary Alta protocol (which uses FHSS) for communications with their newest line of devices. They also provide what they call Standard (Gen 1) devices that use the same frequency ranges but implement DSSS instead.
Monnit offers the following sensor types:
Remember, when considering a WSN solution provider, be sure to learn about the protocols they are using and answer the all-important question: Are you willing to be attached to a vendor that does not use standard (open) protocols? In some cases, it poses no problems. In others, it prevents doing business with that provider. Just be sure you know what you're purchasing.
An additional factor in selecting a WSN provider is the management methods. Some are managed only through the cloud. Others offer cloud management and local management. Still others offer only local management. Additionally, consider whether the vendor offers APIs (application programming interfaces) for integration with other systems.
When it comes to automation, integration, and monitoring, it is important to evaluate the APIs that are available. A WSN solution that does not expose its data through APIs must be integrated manually through database access or conversion, which may require periodic exports of data and imports into other databases. With APIs, such tedious work is seldom required.
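As a rough illustration of API-based integration, the sketch below polls a hypothetical REST endpoint for sensor readings using only the Python standard library. The URL, bearer token, and JSON schema are assumptions; consult your vendor's API documentation for the real interface.

import json
from urllib.request import Request, urlopen

API_URL = "https://example.com/api/v1/sensors/temp-01/readings"  # hypothetical
API_KEY = "YOUR_API_KEY"                                         # placeholder

def fetch_readings():
    req = Request(API_URL, headers={"Authorization": "Bearer " + API_KEY})
    with urlopen(req) as resp:
        return json.load(resp)   # e.g., [{"ts": "...", "value": 21.4}, ...]

# readings = fetch_readings()    # would poll the live endpoint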
It is important to discover the frequency bands used by the solution so that you can acquire appropriate tools for monitoring and troubleshooting. For example, a 2.4 GHz spectrum analyzer will not assist in locating sources of interference related to a 900 MHz WSN. Dedicated spectrum analyzers that operate in specific ranges, like the RF Explorer WSUB1G, can be acquired for under $200 US, though the resolution will not be as high as that of professional-grade analyzers.
Another part of integration is ensuring proper frequency coordination. You must be sure that the WSN will not interfere with other wireless solutions and that the other wireless solutions in your coverage area will not interfere with it.
Finally, you should ensure that the management solution will work well for your environment. If it is cloud-based, it is likely that it will work well for any organization; however, if latency is an important factor in your WSN, you must ensure that proper local controls are available for actuators and alerts.
This chapter may have seemed rather large given the objectives covered. However, the concept of a WSN is a large part of IoT, particularly in industrial networks. It is a significant part of smart cities, smart agriculture, smart homes, smart offices, and Industry 4.0.
CWNP will offer professional-level certifications covering WSNs in even more detail, with in-depth coverage of the wireless IoT protocols used in many WSNs today. The Certified Wireless IoT Connectivity Professional (CWICP) will address the complexities of IoT in its many forms, inclusive of IIoT and WSNs.
Objectives Covered:
Wireless communication technologies are a key part of our lives. From the Wi-Fi networks we use at home or in the office to the more complex machine-to-machine communication in the robotics and manufacturing industries, we live in a world of wireless connectivity. It is almost impossible to spend a single day without using a wireless device. With all the blessings these wireless technologies bring in terms of mobility and flexibility, many security concerns arise as well.
Implementing proper security controls for the selected wireless networks can no longer be an afterthought. Security must be incorporated and addressed from the initial planning and design phases throughout the lifecycle of the network. In this chapter, we will explain the fundamental security concepts that should be addressed to secure wireless networks. We will then explain the importance of and need for authentication. Afterward, we will explain some vital cryptographic technologies that can be leveraged to achieve some of the security goals. Commonly used authentication methods are then briefly described at a high level, followed by a brief explanation of authorization concepts. Finally, monitoring will be briefly discussed as a means to ensure availability and integrity of the deployed network.
Like any project, we first need to understand the requirements and objectives before we start designing the solution. Generally speaking, attack vectors on wireless networks, much like those on wired networks, aim to adversely affect one or more of the key security principles of confidentiality, integrity, and availability. Therefore, it is essential to explain these principles, which our security controls should address. Please note, however, that it is not mandatory to meet all these requirements all the time.
Confidentiality: You send a private message over a wireless link, and third-party systems are able to read and understand the content of the message without your consent or knowledge. Your documents and images on your favorite cloud storage website are made public without your consent or knowledge. All of these are examples of violations of confidentiality. Confidentiality ensures that only authorized people/systems have access to information and that this information isn't shared with a third party without your consent. The information must be protected from unauthorized disclosure.
Integrity: You check your bank statement and realize that some transactions were added and you are charged for items you didn't order. You order an item online, but you get charged a different price on your card compared to the invoice. You send a message over a wireless link, and it gets altered before reaching the intended recipient. You send a command to your smart lock to lock the door, but it arrives as an unlock command instead. All of these are examples of integrity violations. Integrity is the guarantee of data non-alteration. Data and systems should be protected from intentional, unauthorized, or accidental changes. If any alteration happens, the intended recipient should be able to identify that the data was altered.
Availability: You try to access your bank's online account, but it shows as down. You try to send an email, but it just doesn't get sent. You try to access the Wi-Fi network, but it is not available. You lost control of the IoT sensors controlling your smart home or your smart car. All of these are examples of violation of availability. Availability is the guarantee that data and systems are operating and accessible when required in a timely manner.
The confidentiality, integrity, and availability principles can also be complemented by other closely related security concepts. Privacy, non-repudiation, authenticity, and safety are four key concepts that strengthen the CIA concepts and that might need to be considered as well in securing networks.
Privacy: The terms "privacy" and "confidentiality" are often used interchangeably as they have a lot in common. However, "privacy" is more concerned with the right of the individual to keep personal information to himself or herself. It is the right of the individual not to be recorded or monitored. For example, do you want to share your browsing history? Your shopping history? Your location? Your social media information? Do you want your camera at home to start recording your conversations and uploading them to an unknown party? Do you want everyone to have access to your bank account? Your health records? Do you want your car to share your location? Most likely, you consider this information private and don't want to share it with everyone. The enormous use of sensors and connected devices in our lives, whether at home, in the car, or as wearable devices, poses a serious privacy concern that should be addressed.
On the other hand, confidentiality is the guarantee of data privacy. The information must be protected from unauthorized disclosure. Confidentiality limits access to information to authorized entities so that it helps in achieving "privacy" for the consumers. This is a very important concept nowadays where everything is connected, and the information collected from these systems can be used to identify or track a person.
Non-repudiation: You order an item online and then deny that you ordered it, refusing to settle the amount with the bank. While angry, you send a harsh email to your colleague and then deny sending it. You launch a wireless attack from your device and then deny having done it. All of these are examples of violations of the non-repudiation concept. Non-repudiation prevents a person or entity from denying having performed an action. Proper measures should be in place to prevent the subject from repudiating a claim made against them. If an entity performs an action, proper unequivocal evidence should be available to confirm that the action was done by that entity.
Authenticity: You try to access your bank account, but you get redirected to an attacker's website that is crafted to look exactly like your bank's website, so you erroneously log in there. You try to access the Wi-Fi network, but you connect to an "evil twin" network created by an attacker. These are examples of violations of the authenticity principle. The authenticity principle tries to confirm the identity of the parties involved in a transaction to make sure that each entity is who it claims to be.
Safety is another very closely related concept that should be considered as well, especially in our world now where everything is connected. What if some attacker takes control of your connected car and disables the brakes? What if an attacker takes control over the connected door locks and unlocks them or keeps them locked? What if an attacker takes control of the heating systems? What if an attacker takes control of a power system and turns off the lights in a city? Safety can be considered part of availability and integrity, but we prefer to mention it alone here to highlight the importance of this aspect in our design. We are relying more and more on wireless networks, and the unavailability of a system can impact one's safety nowadays.
Now that we have explained the critical security principles that should be considered, we will focus on the first step, authentication, which ensures that users and devices are allowed to use the wireless network.
Authentication is the first foundational step in the AAA model. Both authorization and accounting steps rely on having reliable and secure authentication. If the authentication is broken, both authorization and accounting steps are of no practical use. Authentication aims at verifying the identity of the person or object that is connecting to the resource. It is very critical to determine who or what will be authenticated. Will only one side, for example server-side, be authenticated or will both sides, for example client and server, be authenticated? When both sides (client and server) are authenticated, this is called mutual authentication. This helps prevent a third-party device from intercepting the authentication and pretending to be the other party.
Unlike wired networks, which are directly linked to a particular location, wireless networks span a larger area. For example, in a wired network, you might be able to trace a user/device to a particular wall outlet or switch port, while in wireless networks you might only be able to trace a user/device to a particular Access Point (AP) or Base Transceiver Station (BTS), so your location accuracy will be very different compared to wired. It is true that triangulation systems exist and can help in locating a user/device, but due to mobility, it might be harder to locate a device as it keeps moving.
Another important point to consider is the risk of eavesdropping and spoofing. An attacker might use a simple receiver operating at the same frequencies and listen to the communication between two users/devices. The attacker just needs to be in range of the wireless communication, and he can do this without even being detected. If the communication is not encrypted, it will be trivial for the attacker to steal the authentication credentials and use them to gain access. That's why it is very critical to have proper authentication systems to protect the wireless network and ensure only authenticated devices/users have access to the network.
Many methods can be used for authentication; some can be considered basic, while others are more advanced. The exact method to be used depends on the technology and device capability, intended use case, ease of use, and the cost and value of the asset being protected.
In terms of technology and device support, if you are deploying a solution to authenticate temperature or humidity sensors, you must use authentication methods that are supported by those sensors, not biometric solutions, for example. Device capabilities play an important role in selecting which technology can be used. If you are using cellular networks, you can leverage SIM cards for authentication, but you can't do the same for BLE. The technology you are using dictates the supported authentication methods. Also, one critical point to consider is support for the chosen authentication method by all parties involved. You don't want to end up deploying a solution that is supported by your server but by no clients. You also need to think about how the solution integrates with the other components and systems in your network.
In terms of the intended use case, you need to understand what you are trying to authenticate. Are you trying to authenticate both ends of the communication or only one party? How will the device/user be authenticated? Based on something they know? Something they have? Something they are? A combination of the above? All of these will impact your logical choices of authentication methods.
In terms of ease of use, you can deploy a very secure authentication method that is nonetheless tough for users to use or for devices to be configured with. For example, you might set a requirement for a 30-character password that must be changed daily. Yes, this might offer stronger security, but it is not practically usable. You might design the solution based on certificates, but installing certificates on the devices might be a very tedious task. You need to find the right balance between proper security and usability.
In terms of cost and value of the asset being protected, you need to weigh the value of the asset, in both direct monetary and non-monetary terms, against the cost of the solution being deployed. Let's consider an example to better understand non-monetary value. If you buy a laptop for $1000, its monetary value is $1000, and this value will decrease with time due to depreciation. However, for the past six months, you have been working on a massive project, and all the design documents are saved on this laptop. Do you still value this laptop at $1000, or is it far more valuable? What if the laptop was hacked and your documents were sent to your competitor? Is it more valuable now? What is the price of the data on this laptop? What is the price of the time spent to produce these results? What is the price of the "bad" reputation of having your computer or system hacked? Therefore, adequate authentication solutions should be in place to protect the assets. This is an oversimplified example, just to explain the concept.
Regardless of the authentication method used, the majority of authentication methods use some cryptographic technologies to achieve the required goals. In the next section, we will discuss the key cryptographic concepts of encryption, PKI, hashing, message authentication code, digital signatures, and nonce. As you read throughout this section, try to understand the goal of each technology, and understand how it can help achieve the security goals discussed in later sections. Afterward, we will discuss some common authentication methods that can be used.
Encryption tools can be used for confidentiality, integrity, and the implementation of authentication components. For example, if I encrypt something with a key and you can decrypt it properly, it indicates that I have the same key as you. This kind of authentication is often called pre-shared key authentication in wireless networks. In this section, we will explore important technologies and concepts related to encryption.
Due to the nature of wireless communications, wireless signals are sent over the air. It is critical to protect these communications from an eavesdropper trying to listen or spy on the channel. The best tool to protect from a malicious eavesdropper is to use encryption.
Encryption is the method of converting a plaintext or any other form of data to another encoded format that can be only decoded by another party which knows the decryption key. Therefore, the main goal of encryption is to achieve confidentiality. Any third-party entity will only see the encrypted text or data and will not be able to understand the content of the message even if it knows the encryption algorithm used.
On the sender side, encryption happens, as shown in Figure 10.1. The message or content that the sender wants to send is encrypted and converted to a ciphered text. The ciphered text is sent over the unsecured wireless medium.
On the receiver side, decryption happens, as shown in Figure 10.2. The encrypted or ciphered text is converted back to the plain text.
There are two main types of encryption algorithms: symmetric key algorithms and asymmetric key algorithms, as explained below.
Symmetric key algorithms use the same secret key for both encryption and decryption. Let's say we need to send the message:
I LOVE CWNP!
The sender can encrypt the plaintext message with the secret key he knows using the chosen encryption algorithm. The output will be an encrypted text, as shown in Figure 10.3. The sender will send the encrypted text:
XA5¡Yo3iF5tEohMv5mgmMw=
instead of sending the actual plaintext "I LOVE CWNP!". As such, even if an eavesdropper is listening to the message, he/she will not be able to understand the content of the message. Even if the attacker records the message "XA5¡Yo3iF5tEohMv5mgmMw=" sent over the air, they will not be able to decipher it since the attacker doesn't know the secret key.
On the receiver end, the receiver will get the encrypted message:
XA5¡Yo3iF5tEohMv5mgmMw=
and will use the decryption algorithm with the same secret key to decrypt the message and get the original message:
I LOVE CWNP!
Note that for this example, we have used the website https://aesencryption.net/ with the secret key Lm70kCoPHI and 128-bit encryption. You can try it yourself to encrypt/decrypt some messages.
To use aesencryption.net:
You can, of course, use the site to work with different input values and encryption bit-depths to see more about the process. Additionally, knowing about the site makes for an excellent way to secretly communicate with others using a predetermined key, if you're into that kind of spy game.
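If you prefer to experiment locally rather than on a website, the following minimal sketch performs AES encryption and decryption in Python, assuming the third-party cryptography package is installed (pip install cryptography). The key, nonce, and message are illustrative values generated on the spot.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)  # the shared 128-bit secret key
aesgcm = AESGCM(key)
nonce = os.urandom(12)                     # must be unique per message

ciphertext = aesgcm.encrypt(nonce, b"I LOVE CWNP!", None)
print(ciphertext)                               # unreadable without the key
print(aesgcm.decrypt(nonce, ciphertext, None))  # b'I LOVE CWNP!'

The same key object both encrypts and decrypts, which is the defining property of a symmetric algorithm; AES-GCM additionally authenticates the ciphertext, so tampering causes decryption to fail.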
It is essential to keep this secret key secure as the entity that has this key can use it to encrypt or decrypt any message. This key is commonly referred to as single key, shared key, or session key.
Symmetric key algorithms are, in general, faster than asymmetric algorithms. Symmetric key algorithms use either block ciphers or stream ciphers. Block ciphers take a predetermined number of bits, known as a block, and encrypt it. Stream ciphers encrypt data one bit, or one byte, at a time in a stream.
There are many commonly used symmetric key algorithms, as listed in Table 10.1 on the next page. With advancements in quantum computing and computer processing power, some of these algorithms, such as DES, Skipjack, Blowfish, and even 3DES, are considered weak nowadays.
It is very important to note that the strength of the algorithm doesn't depend on the algorithm being secret but rather on the mathematical strength of the encryption/decryption algorithms and the key length.
For example, AES is the symmetric algorithm recommended for protecting sensitive but unclassified information by the National Institute of Standards and Technology (NIST) in its latest report, "Transitioning the Use of Cryptographic Algorithms and Key Lengths." 3DES/TDES is now being deprecated.
Asymmetric algorithms, also known as Public Key Algorithms, use different keys for encryption and decryption, as shown in Figure 10.4. Every entity has a pair of keys: a public key and a private key. From its name, the public key is public and can be shared with any other entity. Similarly, from its name, the private key should be kept private.
The public and private keys are mathematically linked by one-way functions such that if a message is encrypted with the public key, it can only be decrypted with the associated private key. Also, even if someone knows the public key, it is not possible for him/her to calculate the private key.
For the sender to securely send the message, the sender encrypts it with the receiver's public key. As such, the sender is sure that no one can decrypt it except the receiver, who has the associated private key. The receiver will use its private key to decrypt the message.
Symmetric Key Algorithm | Block Size (Bits) | Key Size (Bits) |
---|---|---|
Data Encryption Standard (DES) | 64 | 56 |
SkipJack | 64 | 80 |
International Data Encryption Algorithm (IDEA) | 64 | 128 |
Blowfish | 64 | 32-448 |
Twofish | 128 | 128, 192, 256 |
Triple Data Encryption Standard (3DES or TDES) | 64 | 112, 168 |
Advanced Encryption Standard (AES) | 128 | 128, 192, 256 |
Rivest Cipher 2 (RC2) | 64 | 8-1024 |
Rivest Cipher 4 (RC4) | Stream Cipher | 40-256 (Commonly 128) |
Rivest Cipher 5 (RC5) | 32, 64, 128 (Recommended 64) | 0-2040 |
Rivest Cipher 6 (RC6) | 128 | 0-2040 |
The same happens when the receiver needs to send a message back to the sender: the receiver (now acting as sender) uses the original sender's public key to encrypt the message, and the original sender decrypts it with its private key. Note that for the above example, we have used the website https://8gwifi.org/rsafunctions.jsp with the default public and private keys. You can try it yourself to encrypt/decrypt some messages using asymmetric algorithms.
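The same round trip can be sketched locally in Python with the cryptography package. The generated key pair and message below are illustrative; this is a minimal sketch of the asymmetric flow, not a production implementation.

from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()      # safe to share with anyone

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Sender encrypts with the receiver's public key...
ciphertext = public_key.encrypt(b"I LOVE CWNP!", oaep)

# ...and only the matching private key can decrypt it
print(private_key.decrypt(ciphertext, oaep))  # b'I LOVE CWNP!'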
A few of the commonly used asymmetric algorithms are listed in Table 10.2. It is important to note that not all asymmetric algorithms are used for encryption/decryption. Some of them are used for secure key exchange and digital signatures, as explained in later sections.
Symmetric key algorithms are in general faster than asymmetric algorithms. However, the main challenge with the symmetric key algorithms is the distribution of the common secret key between the sender and the receiver. Without having the common secret key, symmetric key algorithms can't work.
On the other hand, asymmetric algorithms are slower; however, they don't require the initial distribution of a common secret key. To optimize performance, it is very common to use hybrid systems that utilize both symmetric and asymmetric key algorithms.
Asymmetric Algorithms | Encryption & Decryption | Digital Signature | Key-Exchange | Description |
---|---|---|---|---|
Rivest-Shamir-Adleman (RSA) | Yes | Yes | Yes | Widely Implemented (SSL/TLS) |
Elliptic Curve Cryptosystem (ECC) | Yes | Yes | Yes | Current US Government Standard, Requires Less Computing Resources (Fast) |
El Gamal | Yes | Yes | Yes | Slower Compared to Others |
Digital Signature Algorithm (DSA) | No | Yes | No | Used for Digital Signatures |
Diffie-Hellman | No | No | Yes | Widely used with IPsec, SSH, PGP, etc. |
Figures 10.5 and 10.6 explain this concept in a simplified manner. Please note, however, that session key generation is not implemented exactly as shown; more secure key-exchange protocols, such as Diffie-Hellman key exchange, would be used. For simplicity only, we assume that the sender generates the session key to be used in Phase 2 and shares it with the receiver using asymmetric encryption.
The sender needs to send a series of messages to the receiver. Using an asymmetric algorithm to encrypt every message would be slow and inefficient. The first message that the sender wants to send is "I LOVE CWNP!" However, the sender and receiver don't have a shared key to use symmetric encryption. The sender and receiver can use asymmetric encryption to establish a common session key.
At the end of the first phase, both the sender and receiver have a common shared key, ElUPdRDwb, that was securely exchanged using asymmetric encryption as shown in Figure 10.5. Therefore, to encrypt the message "I LOVE CWNP!" and subsequent messages, the sender can use this key, ElUPdRDwb, with the symmetric-based algorithms as shown in Figure 10.6.
Asymmetric encryption was just used to securely send the session key. Symmetric encryption was used to actually encrypt the data.
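A minimal Python sketch of this two-phase hybrid flow follows, again assuming the cryptography package. Wrapping the session key with RSA stands in for Phase 1, and AES handles the bulk messages in Phase 2; as noted above, real systems typically use a key-exchange protocol such as Diffie-Hellman instead. All values are illustrative.

import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Phase 1: sender wraps a fresh session key with the receiver's public key
receiver_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
session_key = AESGCM.generate_key(bit_length=128)
wrapped_key = receiver_key.public_key().encrypt(session_key, oaep)

# Receiver unwraps the session key with its private key
unwrapped_key = receiver_key.decrypt(wrapped_key, oaep)

# Phase 2: fast symmetric encryption for the actual messages
nonce = os.urandom(12)
ciphertext = AESGCM(session_key).encrypt(nonce, b"I LOVE CWNP!", None)
print(AESGCM(unwrapped_key).decrypt(nonce, ciphertext, None))  # b'I LOVE CWNP!'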
It is important to note that it is challenging to apply the conventional cryptographic encryption algorithms discussed above to small IoT devices like RFID tags, sensors, smart cards, etc. These cryptographic standards were optimized for desktops, servers, tablets, and mobile phones, which have better processing capabilities.
Lightweight cryptographic techniques are proposed for these systems to cater to the constraints related to "physical size, processing requirements, memory limitation, and energy drain."
Below is a list of some of the lightweight cryptographic encryption algorithms:
The primary motivation of lightweight cryptography is to utilize fewer computing resources, less memory, and less power to provide security solutions on these constrained devices. These methods are therefore usually simpler and faster compared to conventional cryptography. However, the main disadvantage of lightweight cryptography is a reduced security margin.
The strength of asymmetric algorithms relies on their ability to enable secure communication between parties that don't have a shared secret key. This is made possible by having a Public Key Infrastructure (PKI). PKI is an infrastructure for the secure distribution of public keys that will be used in public-key cryptography, whether for secure key-exchange, asymmetric encryption, or digital signatures.
It consists of the collection of hardware, software, policies, processes, and procedures required to create, manage, distribute, use, store, and revoke digital certificates and public keys.
A simplified explanation would be that a PKI is a collection of servers and services that are configured and managed through policies to enable the processes of certificate generation, distribution, management, verification, and revocation. This is accomplished using several elements.
Certificate policy is the security specification that outlines the structure and hierarchy of the PKI ecosystem, along with the policies surrounding the management of keys, secure storage, handling of keys, revocation, and certificate profiles/formats.
It is a key component as it outlines the role of every component in the PKI infrastructure and helps ensure the proper security posture of the whole PKI infrastructure.
Example:
Digicert Certificate Policy
Certificate Authority (CA) The CA issues digital certificates representing the ownership of a public key. Usually, a hierarchy of trust is created where there is one Root CA and several intermediate or subordinate CAs. The Root CA is usually kept offline after it signs the certificates for its intermediate CAs. This helps protect the PKI infrastructure from any attack against the Root CA.
The intermediate CAs issue certificates to other CAs typically known as Issuing CAs, or they can act themselves as Issuing CAs. Issuing CAs are the ones that maintain, issue, and distribute digital certificates. These certificates are stored in a certificate database.
In case an Issuing CA is compromised, its certificate and all the certificates that it issued will be revoked. CAs can be internal to an organization or provided by a trusted third party. In case the CA is internal, the organization will have complete control over the lifecycle of the certificate, including requesting certificates, verification, issuing, renewing, revoking certificates, etc. This helps the organization build its own internal PKI and define the certificate policies as per its needs and requirements.
However, it will be hard to use this infrastructure to communicate with third-party systems since this Root CA will not be trusted outside the organization. The Root CA's public key needs to be shared with entities outside the organization to be able to trust this PKI infrastructure.
On the other hand, trusted third-party Root CAs are well known, and they are already included as trusted Root CAs in all modern operating systems. Below is a list of a few well-known, trusted Root CAs:
As such, an entity can request a certificate to be signed by one of these CAs. Once the entity receives the signed certificate from one of these Root CAs, it will be able to use it in communication with other entities. The other entities will be able to communicate with this entity since they both have a common authority which they both trust.
Registration Authority (RA) The Registration Authority (RA) validates the registration of a digital certificate with a public key. It is responsible for validating the identity of the requestor and approving or rejecting the certificate request. The RA handles issuance, revocation, and even renewal of certificates. Every time a request for verification of a digital certificate is made, it goes to the RA. If it is acting as an issuing CA, the RA can also issue certificates for specific use cases, depending on the permissions granted to it by the CA. For example, a certificate may be issued for digital signing, for file encryption, for authentication, and so forth.
RAs usually support various certificate management protocols to enable faster certificate enrollment and device deployment like:
X.509 Digital Certificate A digital certificate offers the communicating party the assurance about the identity of the other party that they are communicating with. The certificate is used to associate a public key to a uniquely identified subject. The standard for digital certificates is X.509, which identifies the fields and values to be used in the certificate. These fields include:
Optional extensions: These extensions include, for example, key usage, enhanced key usage, CRL distribution points, Certificate Policy, Subject Alternative Name, and more.
Figures 10.8-10.11 show a sample digital certificate issued to the National Institute of Standards and Technology for the common name nvd.nist.gov. The chain of trust for this certificate is shown in Figure 10.9; this certificate is issued by DigiCert SHA2 Secure Server CA, whose certificate is issued by the DigiCert Root CA. The other details about this certificate are shown in Figures 10.10 and 10.11. It is important to note that the public key is part of this certificate.
Revocation Services Revocation services offer the mechanism to terminate the trust relationship with an entity by revoking its certificate. This is a common task that the PKI should offer. There could be many reasons why a certificate should be revoked, such as the private key being exposed or a device being compromised or retired. This is usually achieved through cryptographically signed certificate revocation lists (CRLs), which are periodically generated by the CAs.
For example, in Figure 10.12, we can see the URL for the CRL Distribution point for Digicert. Accessing this URL will allow us to download the certificate revocation list file (crl). This file contains a list of all revoked certificates, as shown in Figure 10.13.
CRLs have many disadvantages, including large overhead and delayed updates. The client device needs to search the revocation list, which can grow quite large, to confirm that the certificate's serial number is not present. Also, CRLs are typically updated only every 5 to 14 days, which can leave a potential attack surface until the next update.
The Online Certificate Status Protocol (OCSP) helps to resolve the disadvantages of CRLs. It allows the client devices to crosscheck with the CA to find out if a public key credential is still valid. The client device will send an OCSP request to the CA OCSP responder, which will reply with the status of the certificate: Good, Revoked or Unknown.
IEEE 1609.2 Certificate PKI-based systems have historically relied on ITU-T X.509 certificates. However, there is now an emerging IEEE 1609.2 standard that is specifically designed for Industrial Internet of Things (IIoT) use cases. Many current IIoT implementations still rely on X.509 certificates, as they can offer robust identity and access control. Device manufacturers use the X.509 PKI to protect their devices.
However, the X.509 certificates require endpoints with sufficient storage and computational power which might not be readily available in small IoT devices. That's why a new IEEE 1609.2 certificate format has been proposed, and it has approximately half the size of a typical X.509 certificate. It still uses strong elliptic curve cryptographic algorithms like ECDSA and ECDH.
The IEEE 1609.2 standard defines secure message formats and trust infrastructure for wireless access in vehicular environments (WAVE) devices. This standard, however, can possibly be extended well beyond the connected transportation industry to include other use cases across other industries. The 1609.2 certificates store "device permissions," not "device identities," in the certificates.
Exact details about the 1609.2 certificate format are outside of the scope of CWISA; however, it was briefly mentioned here to explain that X.509 certificates are not the only certificates used in PKI.
Hashing for Integrity Sometimes our requirement is not concerned with the secrecy of the data being sent; rather, we are only concerned with the intact delivery of the data from the sender to the receiver without any modification. To ensure data integrity, hashing algorithms can be used. A hashing algorithm takes a variable-length input and generates a one-way, fixed-length output that acts as the "digital fingerprint" of the input. The output is generally called a "hash" or "message digest." No key is involved in the process. You cannot take the hashed output and recover the original input, or even determine the length of the original message.
Key properties of hashing functions are that they are one-way (irreversible) and collision-free.
A change of a single bit results in a completely different output. For example, changing one character in the message (like removing "!") results in a completely different hash output.
Note that we have used this website https://www.browserling.com/tools/sha2-hash to generate the hash for the examples above. You can try it yourself too.
The most commonly used hashing algorithms are listed below. NIST now recommends using SHA-256 or higher from the SHA2 family or any algorithm from the SHA3 family.
It is key to understand that hashing, unlike encryption, doesn't guarantee confidentiality. The message will be sent in cleartext. The goal of hashing is to ensure integrity. The sender sends the message in cleartext along with the hash of the message. At the receiver end, the receiver calculates the hash of the received message and compares it with the received hash to confirm that the message wasn't changed. If any bit in the data is changed, the hashed value will be different, and the receiver can determine that the message was altered.
It is very critical to understand the importance of data integrity in an IoT environment. An intentional or unintentional modification to the data generated by an IoT sensor or a modification to a command sent to a PLC can have direct consequences on system reliability, and possibly human safety.
Unkeyed hashing functions, discussed above, don't provide any way for the receiver to authenticate the message, that is, to verify that it came from the original sender. An attacker can completely replace the message and its hash value and send them to the receiver. The receiver will accept it without knowing that the message was altered, since the hash value will be correct for the replaced message. This is where the Message Authentication Code (MAC) and digital signatures, explained in the following sections, can help.
Hashing Algorithm | Message Digest Size (bits) | Description |
---|---|---|
Message Digest 5 (MD5) | 128 | Considered Weak - Still widely used |
Secure Hashing Algorithm 1 (SHA-1) | 160 | Considered Weak |
SHA-224 | 224 | Part of SHA2 Family |
SHA-256 | 256 | Part of SHA2 Family - NIST Approved - Most Popular |
SHA-384 | 384 | Part of SHA2 Family - NIST Approved |
SHA-512 | 512 | Part of SHA2 Family - NIST Approved |
SHA3-224 | 224 | Part of SHA3 Family - NIST Approved |
SHA3-256 | 256 | Part of SHA3 Family - NIST Approved |
SHA3-384 | 384 | Part of SHA3 Family - NIST Approved |
SHA3-512 | 512 | Part of SHA3 Family - NIST Approved |
RIPE Message Digest (RIPEMD-160) | 160 | No known weakness - Less commonly used though |
Whirlpool | 512 | No known weakness - Less commonly used though |
Finally, hashing alone can be very useful if the hash is transmitted over a different, protected channel. This is commonly used with file downloads. Would you ever want to load firmware onto your devices that was manipulated while you were downloading it? For sure, your answer is no. Many websites include a checksum or signature along with the file to be downloaded, as shown in Figure 10.17. As such, you can use tools like QuickHash, Hashtools, HashCompare, Hash Calculator 2, etc., on your PC to compare the hash value of the downloaded file against the hash value listed on the site. You can thus confirm that the downloaded file is neither corrupted nor altered and that it matches the file that was hosted on the website.
Python provides a built-in hashlib library that can be used to perform MD5 (weak, don't use it), SHA-1, SHA-224, SHA-256, SHA-512, and some others without writing all the complex code for the algorithms. In fact, it can be implemented with two lines of code:
import hashlib
hashlib.sha256(b"I LOVE CWNP!").hexdigest()
The first line tells Python to use the hashlib library for the script. The second line generates and displays a digest using the SHA-256 algorithm against the text I LOVE CWNP!. It's really that simple in Python.
The output of the code presented is:
9636e10b54b1496b9ff94181f99b7e2d8a736b86ee894178ac86bb3abe062eeb
If you change the second line of code to read:
hashlib.sha256(b"I LOVE CWISA!").hexdigest()
You will get the following output:
87c719de55ffec2bd2a6822e64b6ca7a92ed3d7c4331410ee8be59cd43c0b2a3
As you can see, similar inputs generate very different outputs. This variance is the point of a hashing algorithm: to provide a way to uniquely identify an object. Hashes can be sent along with data for verification, used to detect changes in files (as anti-malware software often does), or employed in various ways within authentication systems. Understanding hashing will be important as you begin to explore various security concepts and technologies in your career.
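Tying this back to the firmware-download scenario mentioned earlier, the sketch below verifies a downloaded file against a published SHA-256 checksum. The file name and expected digest are hypothetical placeholders you would replace with real values from the download page.

import hashlib

EXPECTED_SHA256 = "replace-with-the-digest-from-the-download-page"

def file_sha256(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large firmware images don't exhaust memory
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# if file_sha256("firmware.bin") == EXPECTED_SHA256:
#     print("Download verified: file is intact and unaltered")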
Python is explained more in the final chapter of this book, and it is explored in depth in the CWIIP learning materials and certification.
Similar to the challenge of applying conventional encryption techniques to constrained devices, it is also challenging to apply conventional hashing standards to small IoT devices. In fact, none of the NIST-approved hash functions, mainly the SHA2 and SHA3 family hash functions, are suitable for use in very constrained environments, mainly due to their large internal state size requirements. This has led to the development of hashing functions that are optimized for these environments.
Below is a list of some of the lightweight cryptographic hashing algorithms:
Message Authentication Code (MAC), also known as a keyed hash, ensures message integrity and protects against message forgery by anyone who doesn't know the secret key. This key is only shared between the sender and the receiver. MAC algorithms can be constructed in different ways like cryptographic hash functions, block cipher algorithms, universal hashing, and more.
Following is a list of the most commonly used algorithms where the ones highlighted in bold are the currently NIST approved algorithms:
To explain how MAC works, consider this example:
The sender calculates the hash value using both the message and a secret key, not the message alone. The sender then sends the message along with the calculated hash of the message and the secret key.
At the receiver side, the receiver calculates the hash using the received message and the shared secret key. If the calculated hash value matches the one sent by the sender, then the receiver is certain that:
An attacker will not be able to change the content of the message and generate a valid hash, since the attacker doesn't know the secret key. This is how a MAC provides authentication, which isn't available when a hashing function is used alone without a secret key.
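A keyed hash is easy to reproduce with Python's standard hmac module, as the minimal sketch below shows. The key matches the example value used in the note that follows; in practice, it would be a securely distributed secret.

import hashlib
import hmac

key = b"Qj8A2443"
message = b"I LOVE CWNP!"

# Sender computes the MAC over the message with the shared secret key
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

# Receiver recomputes the MAC over the received message and compares;
# compare_digest avoids timing side channels
received_ok = hmac.compare_digest(
    tag, hmac.new(key, message, hashlib.sha256).hexdigest())
print(received_ok)  # True only if both message and key are unchanged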
Note: For the above example, the website https://www.freeformatter.com/hmac-generator.html was used with the key Qj8A2443 and the hashing algorithm SHA-256. You can try it yourself.

However, because the secret key is shared between the sender and the receiver, a MAC cannot prove which of the two parties created a given message, so it cannot provide non-repudiation. To overcome this limitation, digital signatures are used. Digital signatures provide both authentication and non-repudiation.
Digital signatures can be used to provide message integrity, authentication, and non-repudiation. A digital signature is created with a private key and verified with the corresponding public key. Only the holder of the private key can create the signature, since it is the only entity that knows the private key, and anyone who knows the public key can verify it. The main idea of a digital signature is that one entity can sign a message, whereas any other entity can verify the correctness of the signature. A digital signature does not provide confidentiality.

To understand how digital signatures work, we will use Figures 10.20 and 10.21. The sender calculates the hash of the message to be sent and then encrypts the hash with the sender's private key. The sender can then send the message along with the signed hash. At the receiver side, the receiver calculates the hash of the message. In parallel, the receiver decrypts the signed hash using the sender's public key. The receiver compares the two hashes. If they are equal, the receiver can be sure that the message was sent by the sender, since the sender is the only one possessing the private key. The receiver is also sure that no one altered the message, since the hash values match. As such, a digital signature can provide authentication, integrity, and non-repudiation.
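The sign-then-verify flow can be sketched in Python with the cryptography package, as shown below. The RSA key pair and message are illustrative; in practice, the receiver would obtain the sender's public key from a certificate rather than generating anything locally.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

sender_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
message = b"I LOVE CWNP!"

# Sender: hash the message and sign the digest with the private key
signature = sender_key.sign(message, pss, hashes.SHA256())

# Receiver: verify with the sender's public key; raises if altered/forged
try:
    sender_key.public_key().verify(signature, message, pss, hashes.SHA256())
    print("Signature valid: authentic and unaltered")
except InvalidSignature:
    print("Signature invalid: message was altered or not signed by sender")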
As mentioned earlier in Table 10.2, many asymmetric algorithms can be used to generate digital signatures. Some of these algorithms include:
The above-mentioned methods are not exclusive. There might be a requirement to provide both authentication of the source and confidentiality for the message. Therefore, a combination of the above methods can be used, such as:
Encrypting the message then calculating the MAC based on the encrypted result. This is commonly known as Encrypt-then-MAC.
Creating a MAC for the message then encrypting the message and the MAC. This is commonly known as MAC-then-Encrypt.
Encrypting the message and calculating the MAC of the message and sending both. This is commonly known as Encrypt-and-MAC.
To complete our discussion of the key cryptographic technologies, it is important to mention the concept of a nonce. The nonce is essential for preventing the replay of old messages and ensuring message freshness. A nonce, "a number used once," is an arbitrary value that can be used only once in a cryptographic communication. It is often a random or pseudo-random number issued in an authentication protocol to ensure that old communications cannot be reused in a replay attack. Nonces can also be used as initialization vectors and in cryptographic hash functions. For example, if the same message is encrypted with the same key, the same ciphered text is obtained. However, if a nonce is used as an input to the encryption algorithm along with the message, the ciphered text will be different each time, even when the same message is encrypted, since the nonce is different. As such, an attacker will not be able to tell that the same message is being sent. In addition, timestamps may also be used to help prevent replay attacks.
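The effect of a nonce is easy to demonstrate: in the minimal sketch below, encrypting the same message twice with the same key but fresh nonces yields different ciphertexts. AES-GCM is from the cryptography package, and all values are illustrative.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)
message = b"UNLOCK DOOR"

c1 = AESGCM(key).encrypt(os.urandom(12), message, None)
c2 = AESGCM(key).encrypt(os.urandom(12), message, None)
print(c1 != c2)  # True: an eavesdropper cannot tell the messages match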
This section covered in brief the key cryptographic technologies that can be used to secure a network, whether it is wired or wireless. Table 10.4 shows a summary of these techniques and how they can help achieve the associated security goal. It is important to note that cryptographic techniques alone are not able to cover all security goals. We should design the network with multiple security layers that address the different requirements.
Security Goal | Encryption (Ciphering) | MAC | HASH | Digital Signature |
---|---|---|---|---|
Confidentiality | Yes | No | No | No |
Integrity | No | Yes | Yes | Yes |
Authentication | No | Yes | No | Yes |
Non-repudiation | No | No | No | Yes |
Kind of Keys | Symmetric or Asymmetric Keys | Symmetric Keys | N/A (No Key) | Asymmetric Keys |
As explained before, the authentication step acts as the gatekeeper for the other security tasks. With authentication, we can ensure:
Authentication can happen in different methods based on:
The sections below cover the most commonly used authentication methods. Some of these methods are better suited for human-to-machine authentication, while others work for both machine-to-machine and human-to-machine use cases.
Password-based authentication is the predominantly used method for a user to authenticate to a system. It falls under the category of "something a user knows." Every entity has an account/username and associated password. The user needs to login by providing the username and password. If the provided username/password combination is correct, the user is given access to the system. This is very similar to what we commonly use to login to any website that requires a username/password for authentication like Facebook, Gmail, etc.
This type of authentication is better suited for human-to-machine setups and is not ideal for IoT M2M setups since:
Key-based authentication falls under the category of "Something a user has." This can be in the form of:
Shared Symmetric Key: This is the easiest form to deploy at scale since the same shared key is installed on all the devices. However, the risk of such a solution outweighs the benefits it brings, so it should never be used in practice.
Symmetric Key: A symmetric key will be installed on the device and its associated backend platform. This will allow device-to-backend communication using this symmetric key. The challenge of this approach is how to ensure that the symmetric key is adequately secured on the device and on the backend platform.
Trusted Platform Module (TPM): TPM can be used to securely store keys or even X.509 certificates on the devices. This can offer a more secure authentication method as compared to Symmetric Keys.
Certificate-based authentication falls under the category of "Something a user has." It uses the PKI infrastructure where the public key is signed by a trusted CA, as explained in section 5.2. The entities can thus securely authenticate each other since they have certificates that are signed by a trusted CA. Certificate-based authentication greatly simplifies device identification and offers a very scalable solution.
Public-key-based digital certificates seem to be the best solution for the majority of IoT authentication use cases that don't involve resource-constrained devices. This setup acts as the foundation to secure the communication between IoT devices, and IoT devices and their cloud platforms. As explained previously, work is done to improve the PKI infrastructure to handle constrained devices like the enhancements being done in lightweight cryptography and in IEEE 1609.2.
Biometric-based authentication methods are being used more often nowadays, especially in areas where security is a top priority like border controls. This authentication method falls under the category of "Something a user is." These methods are as well used as a second-factor authentication in a multifactor authentication setup. Biometric-based setups include fingerprint, retina or iris scan, voice analysis, facial geometry, hand geometry, etc. These setups work well in human-to-machine cases and are not intended for machine-to-machine authentication.
For example, a biometric-based authentication setup can be used to authenticate a technician trying to access the control room. Similarly, we are now seeing more biometric-authentication used in consumer IoT devices like smart biometric locks. For example, Figure 10.22 shows some smart locks with biometric capabilities like voice activation and integration capabilities with Amazon Alexa, Apple Homekit, Google Assistant and Nest.
A Smart Card is a credit card-sized ID that has an integrated circuit chip embedded in it. It falls under the category of "something a user has." Most smart cards include a microprocessor and one or more certificates. The certificates are used for asymmetric cryptography, including encryption and digital signatures. Smart cards are better suited for human-to-machine authentication, not machine-to-machine.
A One-time Password (OTP) is a dynamic password that is only valid for a single session. It falls under the category of "something a user has." For example, a user can have an OTP sent to his phone. Another option is a software token, like Google Authenticator, or a hardware token, like RSA SecurID, to get the OTP used to log in to the system. Usually, OTPs are used as a second factor in a multi-factor authentication method. OTPs are better suited for human-to-machine authentication, not machine-to-machine.
Multi-factor authentication (MFA) is an authentication method that combines two or more factors: something a user knows, something a user has, or something a user is. For example, a user needs to provide his password and an OTP that is sent out of band. Or a user needs to swipe a card and enter a PIN. A user might need to enter a PIN first and then complete a fingerprint scan. Multi-factor authentication helps make it more difficult for an attacker to access the target. MFA is better suited for human-to-machine authentication, not machine-to-machine.
Authorization is the second step in the AAA process, and it relies on authentication. So, if authentication can be spoofed or impersonated, then the authorization step is almost useless. Authorization uses access control mechanisms to authorize access to resources. Therefore, it is important to discuss various access control concepts that are relevant to authorization.
Implicit Deny
Most access control systems use the "implicit deny" principle, where access to resources is blocked by default unless it is explicitly allowed for a particular entity. For example, by default, no one is allowed to access your Dropbox files. You can, however, decide to share some files with specific people. Only those specific people will then have access to these files; everyone else remains blocked. Similarly, a firewall blocks all traffic by default, while traffic that you need can be explicitly permitted. All other traffic is blocked. Therefore, when we deploy a wireless system, it is key to understand the components involved and their interactions in terms of traffic flows, and to make sure that only needed services are exposed and everything else is protected.
Access Control Matrix
An access control matrix is a table that includes a list of:
For example, in Table 10.5, the subjects are User 1, User 2, and User 3. They are trying to access the resources, or objects: Camera, Door Bell, Door Lock, and Thermostat. Each user has different privileges, as indicated in the access control matrix. User 1, for example, has full control over all the devices, while User 2 has view-only privileges. User 3 has different privileges depending on the object he/she is trying to access. Please note that this example is simplified to explain the concept. Exact user privileges can be more or less granularly controlled based on the application. Also, instead of basing the access matrix on a particular user, it can be based on a particular role, making it more scalable.
Therefore, when we deploy a wireless network, it is critical to identify the different resources that are available and make sure we assign the right privileges for the entities trying to access these resources.
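A minimal sketch of such a matrix as a simple lookup table, combined with implicit deny, follows. The subjects, objects, and privilege names are illustrative placeholders, not a real product's model.

# Access control matrix as a dictionary keyed by (subject, object)
ACL = {
    ("user1", "camera"):     "full_control",
    ("user1", "door_lock"):  "full_control",
    ("user2", "camera"):     "view_only",
    ("user3", "thermostat"): "view_only",
}

def check_access(subject, obj, requested):
    granted = ACL.get((subject, obj))
    if granted is None:
        return False              # implicit deny: no entry, no access
    if granted == "full_control":
        return True               # full control satisfies any request
    return granted == requested

print(check_access("user2", "camera", "view_only"))   # True
print(check_access("user3", "door_lock", "unlock"))   # False (implicit deny)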
Constrained Interface
Building upon the example above, User 3 shouldn't have access to the Door Lock. Therefore, the application should have a "constrained interface" where all the settings related to the door lock are removed from User 3's account, or at least disabled or dimmed. This is the concept of the constrained interface: depending on the privilege of the user, certain features will be available or unavailable. This relates to issue number 9, "Insecure Default Settings," in the OWASP IoT Top 10 list of highest-priority issues in IoT deployments, where many IoT applications do not offer options to restrict access in a granular manner.
Content-Dependent Control
Content-dependent control grants or denies access based on the content of the resource itself. For example, an email filter might normally allow emails to be sent; however, if an email contains a virus, the filter will block it even though the user has permission to send email. Another common use case is subscription-based services. If the subscription expires, the user might still be able to log in to the portal, but the services offered might stop working until the subscription is renewed. The latter example doesn't only impact authorization; it also affects the overall availability of the system.
Context-Dependent Control
Context-dependent controls check the context of the request before granting access. For example, if we need to add time-of-day restrictions, we can use context-dependent controls: the user might only have access to the system during working hours, and after hours, even the right credentials will not grant access. Another common example is restricting management access to specific source IP addresses. Even if a user reaches the management interface and tries to log in with the correct credentials, the request will be denied if the connection is not coming from a whitelisted IP. This concept can be used, for instance, to protect the management network that will be used to manage and monitor the wireless network.
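A minimal sketch of both checks in Python might look like the following; the whitelisted addresses and working hours are assumptions for illustration, not recommended values.

```python
from datetime import datetime
from typing import Optional

ALLOWED_MGMT_IPS = {"10.0.50.10", "10.0.50.11"}  # hypothetical management whitelist
WORK_HOURS = range(8, 18)                        # assumed policy: 08:00-17:59

def context_allows(source_ip: str, now: Optional[datetime] = None) -> bool:
    """Deny unless the request comes from a whitelisted IP during working hours."""
    now = now or datetime.now()
    return source_ip in ALLOWED_MGMT_IPS and now.hour in WORK_HOURS

print(context_allows("10.0.50.10"))   # True only during working hours
print(context_allows("203.0.113.9"))  # always False: IP not whitelisted
```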
Need to Know
This concept states that subjects should be given access only to what they need to know to complete their job. For example, a technician is installing a new wireless network or a new camera. If he can check that the system is operational after install without the need to login to the backend management interface, then there is no need for the technician to have access to the backend management interface. By limiting access to the system to users who really need access to the system, chances of data leakage or unintentional human errors will be minimized.
Least Privilege
The least privilege concept states that once an entity is authorized, it should be given the lowest privilege needed to complete its task. For example, the security guards checking the cameras should have access to view the camera feeds; however, they shouldn't necessarily be given full access to the DVR, where they could delete recordings. If the system needs internet access to function, then internet access can be provided; if it can work without it, internet access can be blocked. As such, the potential attack surface will be reduced.
Principle of Segregation of Duties
The principle of segregation of duties ensures that sensitive functions are divided across multiple employees, each performing a subset of the tasks. For example, in deploying a new PKI environment, securing the private key of the root CA is a very critical function. If this key is compromised, the whole PKI infrastructure will be useless, so we can't rely on a single employee to complete this function. Usually, n-of-m controls are implemented, where n employees out of m need to collaborate to access the key.
Authorization Maps Permissions to Entities
Regardless of how authorization is performed, the end goal of authorization is to map the right permissions to entities. The permissions can be based on individual users/devices or on groups of users/devices. Commonly, role-based access control (RBAC) mechanisms are used, where each role has a defined set of permissions. A user is thus assigned a role based on its group memberships and accordingly gets the permissions linked to those roles.
In general, performing authorization at the group/role level helps minimize errors and ensure consistency, as there are fewer roles to manage than there would be policies applied at the individual user/device level. Authentication servers such as RADIUS, TACACS+, or DIAMETER are commonly used to map the devices and users connecting to the network to their roles.
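As a rough sketch of the RBAC idea (the role names and permission strings below are invented for illustration), a user holds a permission only if one of its assigned roles grants it:

```python
# Hypothetical role definitions: each role maps to a set of permissions
ROLE_PERMISSIONS = {
    "viewer":   {"camera:view"},
    "operator": {"camera:view", "door_lock:view", "door_lock:control"},
    "admin":    {"camera:view", "camera:control", "door_lock:view", "door_lock:control"},
}

# Users (or devices) are assigned roles, never raw permissions
USER_ROLES = {"guard01": {"viewer"}, "tech02": {"operator"}}

def permitted(user: str, permission: str) -> bool:
    # A permission is granted if any role held by the user includes it
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))

print(permitted("guard01", "door_lock:control"))  # False
print(permitted("tech02", "door_lock:control"))   # True
```

Changing what an "operator" may do then becomes a single edit to the role definition rather than a change to every user.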
Nowadays, almost all devices and systems being built have some sort of HTTP API to facilitate integrations with third-party systems. Therefore, it is key to not only authorize physical connections but also authorize API connections. This is where the OAuth 2.0 framework helps.
OAuth 2.0 is an authorization framework that enables third-party applications to obtain limited access to user accounts on an HTTP service. It works by delegating user authentication to the service that hosts the user account and authorizing third-party applications to access the user account. OAuth 2.0 provides authorization flows for web and desktop applications, and mobile devices.
OAuth defines four roles:
Resource Owner: The resource owner is the user who authorizes an application to access his/her account.
Client: The client is the application or website that wants to access the user's account. However, the application must be authorized by the user before being given access, and the authorization must be validated by the API.
Resource Server: The resource server hosts the protected user accounts.
Authorization Server: The authorization server verifies the identity of the user then issues access tokens to the application.
Let's take an example from https://developers.nest.com/guides/api/how-to-auth. The resource owner is the user on the left in this case. The client is the application "Your Product." The resource server and authorization servers are shown as "Nest Cloud." At the end of these exchanges, the application "Your Product" will have a token that can be used to call Nest APIs. The application will have permissions as granted by the user.
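In code, the final step of a typical OAuth 2.0 authorization-code flow is exchanging the code for an access token and then presenting that token on API calls. The sketch below uses the third-party requests library; the URLs, client credentials, and endpoints are placeholders, since real values come from the provider's documentation.

```python
import requests  # third-party HTTP client (pip install requests)

# Hypothetical authorization server endpoint and client credentials
TOKEN_URL = "https://auth.example.com/oauth2/token"

resp = requests.post(TOKEN_URL, data={
    "grant_type": "authorization_code",
    "code": "CODE_RETURNED_AFTER_USER_CONSENT",
    "client_id": "your-client-id",
    "client_secret": "your-client-secret",
    "redirect_uri": "https://yourproduct.example.com/callback",
}, timeout=10)
resp.raise_for_status()
access_token = resp.json()["access_token"]

# The client then calls the resource server on the user's behalf
devices = requests.get("https://api.example.com/devices",
                       headers={"Authorization": f"Bearer {access_token}"},
                       timeout=10)
```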
Many IoT protocol extensions, like CoAP (Tschofenig-ACE) and SASL (SASL-OAUTH), are also being modified to fit into the OAuth authentication and authorization framework. This can be very useful for resource-constrained devices.
Like any system or network deployment, it is critical to deploy proper monitoring tools to monitor the system or network post-deployment and raise alerts in case of an anomaly or issue. The monitoring tools should monitor all the components of the solution to ensure availability, security, and proper functionality. The exact criteria to be monitored depend on the deployed system, but in general, at a minimum, the following information should be monitored where applicable.
Wherever certificates are used, proper monitoring tools that check and monitor certificate validity should be deployed to proactively detect certificates that are about to expire.
The time invested in properly deploying a monitoring solution and tuning it will be greatly valued when an issue occurs. A properly configured monitoring solution will help pinpoint issues faster and thus minimize the time needed to resolve them. As such, this can help achieve higher system availability. It is, therefore, crucial to spend a considerable amount of time in this process trying to understand the dependencies and make sure all the dependencies are monitored.
Moreover, integration with third-party monitoring tools, logging systems, helpdesk ticketing, and notification systems, like SMS, should be completed. It is also critical to make sure that time is synchronized on the deployed system, preferably via NTP, so that logs and alerts from different components can be accurately correlated.
In this chapter, we covered the key security concepts that should be addressed to secure wireless networks. We then discussed the core security goals that any security design should address. Afterward, we explained some key cryptographic technologies like encryption, hashing, MAC, digital signatures, PKI, etc. that can be leveraged to achieve some of the security goals. Finally, we explained various authentication techniques, authorization concepts, and the importance of deploying a proper monitoring solution.
Objectives Covered:
Technology has evolved as a tool to bridge humans into a faster and more interconnected web of networks. Wireless technologies and solutions have played a big role in untethering end-user devices and bridging connectivity where it was previously hard or costly to do so with wired communication technologies. Some wireless technologies allow for automated setup, as well as automatic configuration of wireless links, whether they are simple point-to-point links or more complicated mesh links.

No matter what wireless technology is employed, any solution can be susceptible to errors, faults, and malfunctions. Issues can arise in the wireless setup itself or originate from any related system or link in the whole communication channel. Our objective in this chapter is to avoid turning issues into problems. A problematic wireless setup is counterproductive: we adopt wireless to decrease costs, save time, and save effort, while issues and problems eventually increase costs and defeat the purpose of implementing wireless solutions. To achieve that, it is important to cover a proper troubleshooting process with the right diagnosis of the possible issues that might arise. Moreover, before delving into the troubleshooting process itself, we want to see how implementing best practices in setting up and configuring wireless implementations can prevent issues and problems in the first place. In the end, a system with all its components can only be as good as its design.

In this chapter, we will cover best practices ranging from the technology itself to the logistical and prerequisite processes related to such setups. We will also discuss how to troubleshoot different issues that might arise in wireless communication technologies while following a proper troubleshooting approach.
A proper solution design requires technical experience, as well as the right equipment and solution components to build the solution. A well-designed solution should also take into consideration resource and solution limitations. Since we can rarely spend the maximum budget on any given solution, a well-designed solution is usually an optimal one rather than a maximal one. There are many prerequisites for good design, and as mentioned before, a wireless solution is only as good as its design. An optimal solution should always follow the proper process. This section builds on and adds more considerations to those presented in Chapter 3. To help break down a design, we can refer to the simple Deming cycle for quality assurance: plan, do, check, act. Both a solution designer and a solution troubleshooter need to plan properly, execute, check the results, and take the necessary corrective action, returning to the cycle to keep optimizing a good wireless design.
The first part of the Deming cycle is planning. Planning can involve the actual design of any solution. Along with planning comes the consideration of solution requirements, solution limitations, logistics and overhead, timelines, tools, and other factors. Executing the installation and configuration of the planned solution comes in the next phase. Checking, or studying the results of the deployed solution, takes place in the third phase of the cycle. Any testing, post-installation surveys, and analysis occur in this phase.
Finally, any required troubleshooting and tweaking should take place in the fourth phase as a response to any findings from the third phase. This cycle can be continued to shift from a one-time project endeavor into a continuous operational optimization of any deployed solution. Implementers should also adopt this cycle to enhance their approach to different projects, creating their own template from best practices that suit various projects and designs based on their experience.
Some of the common wireless solution planning and implementation tasks will be discussed next.
The choice of a specific wireless technology must be based on customer requirements. Gathering and analyzing the requirements of any solution is a major part of the planning phase. Matching the solution with the requirements means being prepared to design for the appropriate use cases, devices, and applications.
We discussed requirements in previous chapters and will not revisit them here. We also covered the basics of designing a wireless IoT solution. Here, we want to focus on implementation, which should be carried out in compliance with the design and requirements.
It is extremely important that the installation of the wireless IoT solution follows the design plan, which should be based on and justified by requirements that stem from user and organizational needs. If anything breaks in this chain, the installed wireless IoT solution is unlikely to meet those needs.
If, during an installation procedure, you believe that a particular configuration, mounting location, or other factor is incorrect, it is best to contact the designer and discuss the issue rather than making a change on your own. There is often a very good reason behind each decision—one that you may simply not have considered yet.
At the same time, a clear understanding of each wireless solution and its installation procedures is essential, or we will fail to deliver according to the design planning and customer requirements.
Even when resource limitations have been considered, no shortcuts should be taken to deliver a solution faster if that means skipping important steps in the installation procedure.
Basic installation procedures can include:
This should be based on business and technical requirements. For example, a long-range communication protocol may be selected for a customer with multiple branch deployment needs, whereas a shorter-range protocol can be used within the same premises of a home or enterprise campus. A proprietary standard could be implemented, but due to cost limitations, the wireless solution designer might be compelled to choose an alternative technology or connectivity method.
Customer or specific use-case requirements might dictate the choice of underlying technology and equipment based on preferences for cost and the overall match with operating conditions such as performance and throughput, power consumption, environmental constraints, size, and licensing—whether vendor-specific, industry-specific, or regulatory.
Once the technology is selected, the appropriate equipment vendor must be chosen and prepared for implementation.
First, understanding the selected technology is mandatory for implementing the wireless solution. All other project factors and constraints will depend on this understanding. Therefore, it is crucial to have certified professionals working on the project within their area of expertise.
Second, every vendor has their own guidelines for handling their equipment. Only individuals trained on the selected vendor's equipment should handle it. Otherwise, mishandling and misconfiguration may disrupt the implementation—ranging from simple setup errors to severe outcomes like equipment damage or violations of industry or regulatory standards.
While attempting to reduce costs or speed up implementation, some may resort to non-compliant installation methods. For example, using uncertified mounting kits or, worse, improvising with basic tools can render an implementation faulty or unsafe. Whether it's a small IoT Bluetooth or Zigbee sensor, a larger outdoor Wi-Fi access point, or even GSM antennas, the correct mounting equipment specific to the vendor must be used. No shortcuts should be taken. Proper installation procedures must be carried out by certified individuals.
Different technologies require different equipment and configurations, but one constant in a proper design approach is the use of design and implementation surveys. From an implementation perspective, a survey may be performed to validate the installation rather than to design it. However, the validation survey may reveal necessary changes to the design. If this occurs, changes must be authorized and documented.
Different technologies come with their own requirements and tools for planning. Similarly, surveys are expected for every solution. Link budgets are a common element found in all solutions—they are key indicators of solution performance and communication throughput.
Depending on the technology, various tools can be used to carry out off-site predictive, as well as passive and active, site surveys. Some vendors may offer their own surveying and planning tools, while others may require the purchase of standalone applications or hardware to perform the surveys. Standalone survey tools can be used to design for multiple wireless technologies, such as Wi-Fi and Bluetooth, simultaneously. Other tools can provide spectrum analysis across different frequency bands to test frequency allocation and calculate link budgets.
At the same time, specific wireless technologies like 4G and LTE may require an entirely different approach. Engineers or technicians from the service provider, telecommunications company, or the vendor itself may need to be contracted to conduct these surveys using proprietary tools.
Site access and permissions are also critical considerations that must be addressed.
Specific solution considerations must be addressed to ensure a deployment is functional.
Licensing for installation should be handled where required. For example, installing antennas or digging a pathway for a wired backhaul across public property requires permission from various stakeholders. This planning must be factored into the implementation process flow, including cost, time, and effort.
Different wireless technologies utilize different frequency bands. While many rely on ISM bands, others require a license to operate in specific bands. Even technologies that typically use ISM bands—such as Wi-Fi—may require licensing when deployed in regions where regulatory domains restrict open use. In such cases, a frequency band that is free to use in one country might be regulated in another.
Obtaining the proper licenses from telecommunications authorities and regulatory bodies is essential to remain compliant with local laws and avoid costly disruptions.
Design variations must be considered for different vertical markets. Wireless implementations must comply with applicable standards and constraints specific to those industries. Many of these standards are tied to health and safety regulations, particularly in workplaces and environments where wireless solutions are deployed.
Occupational health and safety organizations establish standards to ensure the well-being of individuals, including employees and others affected by workplace activities. These regulations may be mandated by national or regional health and safety codes, depending on the country, region, or industry involved.
For example, in the United States, the Occupational Safety and Health Administration (OSHA) regulates private employers in all 50 states. Its core mandate is to provide employees with "employment and a place of employment which are free from recognized hazards that are causing or are likely to cause death or serious physical harm." Additionally, state-specific health and safety regulations may apply concurrently.
Key considerations for a wireless solution implementer include:
These core requirements should be integrated into the planning, execution, and evaluation phases of each project to ensure an optimal solution with controlled, minimal risks. Proper tools should be used by certified professionals to maintain full compliance.
Different countries and regions have their own equivalents of OSHA. For a comprehensive list, refer to:
en.wikipedia.org/wiki/Occupational_safety_and_health#National_legislation_and_public_organizations
OSHA and its counterparts are not the only regulatory bodies to consider. Industry-specific compliance codes and standards from organizations such as BICSI, ISO, PMI, and JCI must also be considered. These address broader goals like quality control, assurance, business continuity, and risk management.
Deploying a wireless solution is not just about implementing the wireless technology itself. While the solution is typically centered around the technology, it also requires the supporting hardware. Additionally, all systems necessary to make the solution function must be in place—ranging from the physical infrastructure, power supply, and data distribution systems that the wireless solution depends on, all the way up to the application layer where business use cases are realized for the end user.
If operational requirements—such as having a reliable power source and complying with regulatory standards—are met, then the implementation must also address the upper layers to ensure the solution performs optimally.
A wireless system without a wired backhaul—whether for connecting client devices, users, or terminating a point-to-point wireless setup—is ineffective unless the backhaul itself is a wireless link. Ultimately, every wireless connection must interface with a wired network. This wired network may be private (e.g., Ethernet, locally switched) or public (e.g., MPLS, municipal fiber optics, ISP-provided).
Depending on the solution and technology requirements, various backhaul types can be used. Some integrate seamlessly, terminating the wired network directly into the wireless equipment. Others require one or more intermediary devices to translate protocols and handle data routing and switching. The selection and configuration of the backhaul are often shaped by both budget constraints and technical performance requirements.
Wireless connectivity must be established using the correct equipment and configuration, in compliance with solution requirements and regulatory standards. Some systems offer intuitive, out-of-the-box configuration, while others require advanced staging and testing before deployment. While ease of setup may sound appealing, it can lead to future issues if critical configuration parameters are missed. Experience with the specific technology is invaluable—both during setup and in any necessary troubleshooting.
Understanding the capabilities of connected endpoints and clients is essential. Most wireless technologies support automatic configuration for clients by default. In certain cases, intermediary devices like hubs or routers must be configured on the distribution network before client devices can communicate properly. These hubs manage local, short-range wireless communication while bridging to a main wired distribution network that uses different communication standards. A common example is a Wi-Fi router that connects wireless clients and provides backhaul access via a wired network, converting frames between wireless and wired formats.
Custom configuration is often required in more advanced scenarios, such as mid-range wireless links. Assuming proper mounting and alignment—especially with line-of-sight (LoS) technologies—custom configuration must be performed to align with regulatory domain requirements for wireless operating frequency and gain. This ensures proper operation and compliance.
Advanced configuration options may include:
Longer-range wireless communication, such as that using GSM networks, depends heavily on infrastructure set up by telecommunications providers, mobile network operators (MNOs), or ISPs. In these cases, endpoint and client device configuration typically requires only a minimal feature set.
For instance, in NB-IoT systems operating over cellular radio, the GSMA provides design and configuration guidelines to help MNOs ensure global interoperability and standardized configurations. This supports reliable deployments and prevents common issues such as:
When it comes to network infrastructure, the wireless solutions administrator (CWISA) should adopt a service-minded approach—focusing on what is needed, rather than how it's accomplished. Several key infrastructure services are typically required to ensure wireless solutions function properly:
Authentication: The wireless solution may need authentication services such as 802.1X/EAP, Kerberos, LDAP directory access, certificate provisioning, or Internet-based authentication methods. In some cases, this may be as simple as opening firewall ports; in others, it may involve deploying a full multi-server architecture to support authentication needs.
Authorization: Wireless devices must be granted access to required network resources. Sometimes, the wireless nodes themselves are recognized as network identities and need direct access to servers, databases, files, and services. In other cases, an intermediary device acts as a proxy and is the only component needing authorization.
Accounting/Logging: Organizational policy may require that all network activity is logged. In such cases, wireless nodes must be identifiable so their actions can be logged and monitored.
Name Resolution: Wireless nodes using IPv4 or IPv6 require name resolution. DNS (Domain Name System) is the most common solution. Nodes use DNS to find the IP addresses of servers and controllers for firmware updates and other services. Proper hostnames must be added to the DNS zone. IPv4 host records are known as A records, while IPv6 host records are AAAA records (see the resolution sketch after this list).
IP Addressing: Wireless devices need IP addresses, whether IPv4 or IPv6. DHCP is typically used to assign these automatically.
Time Synchronization: Wireless devices sensitive to time variances may malfunction or produce inaccurate data without proper synchronization. For example, sensor fusion in analytics depends on precise time alignment. A service like Network Time Protocol (NTP) is often required.
File Access: Wireless nodes might need access to file storage systems to save logs or retrieve firmware and software updates.
Custom Service Access: Some wireless solutions rely on vendor-specific services that may be hosted on local servers, virtual machines, network appliances, or in the cloud. If local servers are chosen, they must be configured correctly within the network.
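Picking up the name-resolution item above, a quick way to confirm that both A and AAAA records resolve for a controller or service hostname is a short standard-library check like the one below; the hostname is a placeholder.

```python
import socket

def resolve(hostname: str):
    """Return the A (IPv4) and AAAA (IPv6) addresses published for a host."""
    a_records, aaaa_records = set(), set()
    try:
        a_records = {ai[4][0] for ai in socket.getaddrinfo(hostname, None, socket.AF_INET)}
    except socket.gaierror:
        pass  # no A record published (or DNS failure)
    try:
        aaaa_records = {ai[4][0] for ai in socket.getaddrinfo(hostname, None, socket.AF_INET6)}
    except socket.gaierror:
        pass  # no AAAA record published
    return a_records, aaaa_records

print(resolve("controller.example.com"))  # hypothetical management hostname
```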
In large organizations, the CWISA may not configure these infrastructure services directly but should clearly document the wireless solution’s requirements for the responsible teams. In smaller organizations, the CWISA may directly configure services such as DHCP, DNS, authentication, and others necessary for the deployment.
As previously mentioned, many wireless networks today rely on cloud management. This includes technologies such as Wi-Fi, Zigbee, Z-Wave, LoRa, Bluetooth, 802.15.4, and various proprietary protocols. In some cases, vendors provide local management options, while in others, cloud-based management is the only available method. When working with cloud-managed systems, at least three key considerations must be addressed:
Licensing of the Cloud Service: The first step is to create an account with the cloud service provider. This allows for registration, authorization, and provisioning of wireless devices. In most cases, this involves an annual licensing fee. Carefully evaluate your contract to ensure it supports the number of devices intended for deployment. You do not want to be halfway through a rollout only to discover that you’ve hit a device limit and can’t register more equipment.
Connecting to the Cloud Service: Wireless devices—or their gateways—must have Internet access to communicate with the cloud. In some deployments, only the gateway requires external access, while in others, each individual device does. Ensure that all necessary firewall ports are open for this communication. Consult vendor documentation to verify specific configuration requirements.
Providing Sufficient Bandwidth: Adequate Internet bandwidth must be allocated to support the cloud-managed services. While many cloud systems are bandwidth-efficient, whatever the requirement is, it must be planned for. For instance, Monnit’s Cellular Data Calculator shows that 1,273 sensors communicating every 10 minutes with a gateway, which in turn contacts the iMonnit cloud every 5 minutes, consumes only 91.547 megabytes per month. Though small, this example highlights the importance of calculating bandwidth needs based on real data.
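The arithmetic behind such estimates is straightforward: multiply device count, report frequency, and bytes per report. The sketch below is illustrative only; the ~17-byte payload is an assumption chosen to land in the same ballpark as the Monnit figure, and real per-message sizes must come from vendor documentation.

```python
# Back-of-the-envelope monthly data estimate for a cloud-managed sensor fleet
sensors = 1273
reports_per_hour = 6        # one report every 10 minutes
bytes_per_report = 17       # assumed payload + overhead; vendor figures vary
hours_per_month = 24 * 30

monthly_bytes = sensors * reports_per_hour * bytes_per_report * hours_per_month
print(f"~{monthly_bytes / 1_000_000:.1f} MB/month")  # ~93.5 MB under these assumptions
```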
Cloud management offers centralized control and scalability—but only when configured thoughtfully with licensing, connectivity, and bandwidth in mind.
Some wireless solutions require additional configuration for advanced features, whether optional or mandatory. These features enhance the capabilities of the network but also introduce specific requirements and challenges that must be addressed during implementation:
Video: Wireless solutions that transmit video—such as surveillance systems or end-user video streaming—place heavy demands on network bandwidth. Proper Quality of Service (QoS) must be implemented to prioritize video traffic, and careful throughput planning is essential to avoid congestion and ensure smooth playback.
Voice: Voice communication is highly sensitive to latency and jitter. Low delay and consistent delivery are critical. In high-demand environments, deploying a separate network for voice may be ideal. In simpler deployments, well-configured QoS settings may suffice to support clear voice communication.
Captive Portals: Commonly used in Wi-Fi guest networks, captive portals require users to interact with a web page before gaining full access. This can be used for access control, advertising, or policy acceptance. Some portals simply redirect the user to a splash page without requiring action, while others enforce authentication or agreement to terms before allowing Internet access.
Location Services: Solutions using location tracking—via BLE beacons, Wi-Fi positioning, RFID, or sensor-based tracking—require extensive planning. Since these services assume mobility, full wireless coverage is necessary throughout all areas where tracked devices will be located.
Telemetry: Often used in industrial, transportation, and infrastructure deployments, telemetry solutions require that specific communication protocols be allowed through the network. Consult vendor documentation to ensure support for required protocols and proper network configuration.
Mobile Device Management (MDM): MDM systems provide centralized management and control over mobile devices. These systems must maintain communication with all managed devices. Network configurations must allow this bidirectional connectivity for effective operation.
Network Function Virtualization (NFV): NFV separates network functions—like routing and forwarding—from the physical hardware. Administrators configure the desired outcome via abstracted interfaces, without needing to understand how the function is executed at the hardware level. NFV is especially prominent in 5G networks, enabling flexible and scalable infrastructure.
Software Defined Networking (SDN): SDN provides abstraction and centralized control over the network’s data plane and control plane. Like NFV, SDN focuses on what should happen in the network rather than how it happens, but SDN addresses the overall architecture rather than granular operations. It’s commonly used to manage complex networks with flexibility and precision.
Container-Based Applications: Containers bundle all dependencies of an application into a self-contained unit, or sandbox, that runs on the target system. Popular container types include Docker, rkt, LXD, Windows Containers, and Hyper-V containers. You must ensure that your cloud provider supports your container type or that you have the necessary on-premise infrastructure. Proper network access to cloud platforms or internal container services is essential for successful deployment.
Different wireless technologies have been covered throughout this book—spanning over ten chapters. While it would be difficult to address every wireless technology and all potential implementation issues, it is essential for every wireless solution administrator to understand the OSI model. This foundational knowledge enables more efficient problem diagnosis by guiding the administrator through a structured troubleshooting process, as introduced back in Chapter 1.
Troubleshooting begins with problem identification, followed by fact gathering to uncover potential root causes. There are many troubleshooting methodologies—some vendor-specific, others tailored to your company or even your personal style.
Referencing the Deming Cycle, troubleshooting closely aligns with the Check phase—examining the performance and configuration of the wireless system when issues arise. The next step, the Act phase, involves applying a fix to address the root cause. Once the fix is implemented, return to the Check phase to monitor and verify stability before transitioning the system to production.
Adopting a troubleshooting methodology in harmony with the Deming Cycle ensures operations and resolutions are handled efficiently, thoroughly, and with proper documentation. This approach saves time, reduces miscommunication, and enables seamless continuity when multiple teams or individuals manage the system.
The CWNP troubleshooting methodology is especially useful here, helping wireless administrators identify, plan, resolve, and document the troubleshooting process in a systematic and repeatable way.
Identify the Problem: Clearly determine what is not working as expected.
Discover the Scale of the Problem: Assess how widespread the issue is—single device, multiple users, specific area, or system-wide.
Define Possible Causes of the Problem: List all plausible reasons for the issue, considering both hardware and software, environmental factors, and configuration errors.
Narrow Down to the Most Likely Cause: Use logic, past experience, and available data to isolate the most probable root cause.
Create Plan of Action or Escalate the Problem: Develop a strategy to resolve the issue, or escalate it if it’s beyond your scope or authority.
Perform Corrective Actions: Apply the necessary fix or adjustment to resolve the identified problem.
Verify the Solution: Test the system to ensure the corrective action has resolved the issue and no new issues have emerged.
Document the Results: Record the problem, root cause, actions taken, and the outcome for future reference and knowledge sharing.
Being able to identify the problem and then narrow down to the root cause of that problem are the two most important steps of the troubleshooting methodology. Otherwise, a lot of time and effort will be wasted, incurring operational and financial losses. A bad experience for a home user of a wireless solution will likely cause frustration, a bad review, and abandonment of the product altogether, which negatively impacts the vendor. Disruption of services for commercial users of a long-range wireless communications solution or product would push them to the infrastructure or products of a different ISP or MNO, incurring business losses for their original provider.

Timely identification of the problem and its root cause is tightly tied to a troubleshooter's understanding of the underlying wireless technology, as well as the entire system that provides the overall solution. While someone might identify the cause of a problem from previous experience, others might need to test and troubleshoot different components of the solution to reach the same identification. A basic understanding of the OSI layers comes into play here, helping identify the layer where the problem is occurring and, as a result, pinpoint the root cause. Starting from the lowest layer, we can work our way up as we try to identify the cause, building our own expertise for similar troubleshooting in the future. At the Physical and Data Link Layers, common issues related to hardware and the physical medium of the deployed technologies might be causing problems.
Damaged devices can sometimes be the most straightforward cause of a wireless connectivity issue. Failure to abide by a vendor's recommended practices for installation and operation is one cause of faulty hardware. Hardware can also fail simply because a product has reached end-of-life, having exceeded its expected lifespan. With outdoor deployments being a large part of wireless solutions, external factors could also affect the function of the hardware. These could include:
As you can see, any of these different reasons might lead to hardware malfunction. Once identified, the root cause can be resolved by creating a plan, applying it, and documenting the process to fully address the issue. Following vendors' troubleshooting recommendations can help fast-track the resolution, while adhering to the recommended installation and operational guidelines will give any installation the expected longevity. Scheduled maintenance for hardware equipment can help identify and eliminate many causes of malfunction so that troubleshooting can be done proactively.
The medium that any wireless technology utilizes lies on the lower layers of the OSI stack and should be addressed early when troubleshooting problems. As mentioned earlier, different technologies use different frequency bands, and each technology might have its own mechanisms for frequency selection, hopping, spread spectrum, and interference detection and/or mitigation. However, when a solution faces issues, wireless interference should always be evaluated by the troubleshooter. Interference can be caused by the combination of different signal sources operating in the same space and at specific gain levels, leading to changes in noise levels and increasing communication errors to the extent of total communication failure—or what can be considered a denial of service (DoS). Different types of interference can be considered:

Narrow-band: Affects a single channel or a few channels of communication frequencies, causing errors and disrupting communications on those frequencies. A high-gain signal operating on the same wireless frequency as another co-existent technology is an example.

Wide-band: Affects an entire frequency band, leading to total failure of the wireless solution. A frequency generator or jammer is a typical example.

All-band: Affects the entire frequency band due to the nature of a technology utilizing all channels, leading to increased errors and disruption. For example, wireless technologies using a spread spectrum mechanism across the full band can disrupt other co-existing wireless solutions that hop between a few channels of the same band.

The leading cause of interference is the operation of multiple wireless solutions using the same or different technologies within the same frequency space. Other interference issues can arise from improper frequency planning, incidental radiation from non-wireless devices such as motors or lighting, or misconfiguration—including full reliance on automated vendor solutions for frequency selection or mitigation mechanisms, the absence of such mechanisms, wrong operation modes, or incorrect regulatory domains. It may also stem from the selection of hardware built for a different region with incompatible frequency ranges, or failure to obtain proper licenses, which could otherwise ensure exclusivity of certain bands or channels.

Wireless spectrum analyzers are critical tools that must be used during wireless communication planning and troubleshooting to detect any interference sources and identify their nature and origin. If the concern is tied to a specific technology, the appropriate spectrum analyzers with capabilities tailored to the relevant frequencies must be used. These might be standalone units provided by the same vendor or from third parties. Some vendors even offer integrated spectrum analyzers built into their products, complete with dedicated software or applications for analysis. If a tool is needed to cover all wireless technologies, one must look for advanced spectrum analyzers with the capacity to scan a wide range of frequencies—from 100 MHz to 60 GHz, for example—which usually come with a high price tag in the thousands of US dollars.

Assuming all issues related to misconfiguration, hardware selection, and licensing are resolved, interference problems are typically solved by removing the main cause or source of the interference or by changing the operating channels of the affected systems. Once the interference source is identified, a proper procedure must be followed to remove it.
If removal isn't an option, a suitable mitigation mechanism must be configured. Sometimes administrators attempt to resolve interference by using directional antennas on the end device and aiming them at the gateway or the next node in the mesh. This can solve certain interference problems, but not all. For instance, if the interferer is positioned between the end device and the next node, a directional antenna might amplify both the interferer and the intended signal, having no effect on the signal-to-noise ratio (SNR) and likely failing to solve the problem. It's always crucial to locate the source of interference before deciding on the appropriate plan of action.
Every wireless technology employs a recommended basic set of link budget, fade margin, and error capacity figures to match transmission capabilities and throughput speed based on the modulation used. What is common across all technologies is the requirement for sufficient signal strength and a low enough noise level to allow demodulation at the receiver's end—turning symbols into bits and passing them to the upper layers so the proper information can be processed. If a technology fails to meet the minimum requirements for total link budget, errors will occur when trying to demodulate the received signal, leading to communication disruptions. One major factor potentially violating this requirement is insufficient signal strength. To troubleshoot signal strength issues, we can break down the communication model into three main components:

Transmitter: If the transmitted signal has lower power (or gain) than expected, it may not meet the link budget—especially when combined with other factors weakening the signal during transmission—making it unreadable or prone to errors.

Receiver: If the receiver lacks adequate sensitivity or power to properly detect the incoming signal, communication errors may result. Receiver sensitivity plays a critical role here.

Wireless Medium: Variations in the medium, such as increased distances or altered free-space path loss (FSPL), can affect both link budget and fade margins.
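As a quick numerical illustration of these pieces (the power, gain, frequency, and sensitivity values below are arbitrary examples, not recommendations), a simple link-budget check using the standard free-space path loss formula might look like this:

```python
import math

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss in dB: 20*log10(d_km) + 20*log10(f_MHz) + 32.44."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

def rx_power_dbm(tx_dbm, tx_gain_dbi, rx_gain_dbi, distance_km, freq_mhz, misc_loss_db=0.0):
    # Link budget: received power = TX power + antenna gains - FSPL - other losses
    return tx_dbm + tx_gain_dbi + rx_gain_dbi - fspl_db(distance_km, freq_mhz) - misc_loss_db

rx = rx_power_dbm(14, 6, 6, 2.0, 915)  # e.g., a 915 MHz link over 2 km
sensitivity = -110                     # assumed receiver sensitivity in dBm
print(f"RX = {rx:.1f} dBm, fade margin = {rx - sensitivity:.1f} dB")
```

With these example values, the received signal lands around -71.7 dBm, leaving roughly 38 dB of fade margin above the assumed sensitivity; real designs must add cable, connector, and environmental losses.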
Transmitter/Receiver Issues:
Wireless Medium Issues:
At the Network and Transport layers, connectivity issues related to routing and interconnecting different systems might be causing communication disruptions or complete drops. One common cause of such behavior is faulty drivers. If a device requires driver installation for operation, always check the vendor's website to see if updated drivers have been released that may resolve the issue. Tools that help diagnose and resolve driver-related errors include:
Upper layer issues can be identified all the way up to the application layer, where software issues or application misconfiguration can also lead to service disruptions. Logs from the application server, as well as from the end devices, can be helpful in troubleshooting such scenarios. If IP communications are used, a protocol analyzer may prove useful in evaluating transmissions to ensure that what should be communicated is actually being communicated. Tools that help resolve network errors include:
Software and firmware issues are also common in wireless networks today. The complexity of wireless networks "behind the radio" has increased significantly in recent years. Today, cloud solutions drive the networks, on-premises solutions drive the networks, and multi-tiered applications drive the networks. With this added complexity, we must consider both the software running on the devices and the software supporting those devices.
The software on the devices is often firmware. Depending on the device, a quick look at the vendor’s website may reveal just one or two firmware updates since the device's release—or it may reveal dozens.
BEYOND THE EXAM: It's the Firmware, Stupid
My name is Tom Carpenter, and I am a firmware failure. While I did not write this chapter for the CWISA Study and Reference Guide (though I did write several other chapters), during my role as general editor I thought it useful to share my non-wireless experience with firmware problems.
Sometime in 2017 or 2018, I built a new tower computer with a super-powerful motherboard, a super-powerful processor, 64 gigabytes of RAM, tens of terabytes of drive space, and a beastly powerful video card (I needed it for work, honey... just in case my wife reads this). Since the build, I have reloaded the operating system at least six or seven times. On the second or third rebuild, I had to stop using the M.2 socket drive on the motherboard—it just quit working. So, I switched to 2.5-inch SSDs instead. But stability continued to be a major problem.
Several months ago, it finally dawned on me that I should check to see if the motherboard had any firmware updates. Oh my! There were dozens of updates beyond my version. More importantly, eight of them—yes, eight—were focused specifically on resolving problems with the M.2 socket.
Needless to say, the rest is history. I updated the motherboard firmware—specifically with the last update that addressed the M.2 socket issue—and guess what? The M.2 socket is working fine. I'm typing on that computer right now (see, it really is for work), and I’ve had no stability issues, even with more than 60 windows open at the same time while running several virtual machines.
The moral of the story is simple for wireless solutions: check the vendor website for firmware updates anytime you're having problems across multiple instances of the same devices running the same firmware. Don’t be a Tom Carpenter—be a firmware updater.
— Tom
Resolving software problems does not end with the device itself. The supporting software on the network must also be appropriately configured and free of bugs. While any software bugs must be reported to the vendor for resolution, the configuration is under your control. We will address this further in the later section titled Improper Configuration.
Many wireless solutions support APIs for software customization or access to gathered information. Custom software code is often more prone to bugs due to limited testing before being released to production. If such problems are detected, report them to the software developers promptly for resolution.
Faulty installation typically results from the improper placement of wireless devices or incorrect configuration, which we will discuss next.
When it comes to configuration, both the devices and the supporting services and network must be considered. Proper configuration begins with proper planning. Take the following steps to prevent problems:
Sometimes configuration problems arise after system upgrades. The configuration set that worked before the upgrade may become invalid due to changes in system functionality. Always review vendor documentation before performing upgrades to identify any configuration parameters that may be affected.
Tools that can be useful in troubleshooting configuration problems include:
A specific area of improper configuration is security. If end devices are not properly configured to match the network's security requirements, they will be unable to connect. Additionally, if Network Access Control (NAC) solutions are in use, they may prevent end devices from accessing parts of the network if they do not meet policy-based health parameters.
Always verify that security parameters are configured correctly when devices are failing to connect. This step is especially important when signal strength in the area is strong and no sources of interference have been detected. Before reviewing other configuration parameters, check the security settings.
Tools that can be useful in troubleshooting security configuration problems include:
In this chapter, you explored implementation and troubleshooting of wireless networks. The good news is that experience with any wireless technology helps in mastering any other. Whether you are troubleshooting Wi-Fi networks or wireless sensor networks, many of the same skills are required. Troubleshooting begins with understanding: if you do not understand how a system works, you cannot begin to investigate a problem and resolve it. Therefore, if you want to be a master troubleshooter, you should master the technologies that you support. Understand the operations throughout the networking layers and the functions of the various hardware components. In the next and final chapter, you will explore integration and automation options through the power of APIs and scripting/programming languages.
Objectives Covered:
Historically, wireless vendors (and some third-party developers) have built standalone systems that included all the pieces and parts to accomplish a fixed set of features and functionality. Over time, this led to these systems becoming increasingly large and complex as they were modified to bring additional functionality to the product. This feature sprawl caused them to grow into massive, resource-hungry, and often closed systems that provided almost no flexibility to meet the needs of evolving organizations.
Changes to modern applications have brought about a departure from these large, single-tier systems. Points of integration between applications are easily accessible through REST APIs, and standard methods of configuring the same features across multiple manufacturers enable integrators and developers to build integrations or enhancements quickly and easily for existing applications. Due to this drastic shift in knowledge requirements, the traditional silos are dying, and programming no longer belongs solely to application developers. Modern network integrators should have a basic understanding of not only the traffic flowing across the network but also every system attached to it—including the knowledge to retrieve information from all corners of the network and use it to increase operational efficiencies while reducing management overhead.
In this chapter, we will start by defining what an API is, common communication methods, and language selection when interacting with APIs. Then we'll cover the types of data that may be important and why you may want to build an API integration. Finally, we'll give a brief overview of the various application and integration architectures used to connect disparate systems. This chapter will not go in-depth on any specific method, protocol, or language details, as that is beyond the scope of the CWISA exam. If you want to learn more than what is presented here, consider the CWIIP learning materials and certification.
While you've likely seen or heard the initialism API before, you may not know exactly what it means or even what the letters stand for. An Application Programming Interface, API for short, when used in the context of computing, is defined on Wikipedia as "a set of subroutine definitions, protocols, and tools for building application software. In general terms, it is a set of clearly defined methods of communication between various software components." In the world of networking, a more refined definition could be "a well-defined methodology for communication between multiple systems for the purposes of easily sharing information useful for the management, monitoring, and overall health of a network." Or even more simply, an API is an interface for sending or receiving network configuration or health information between systems or applications.
Before we jump straight into the deep end of building integrations, let's expand the definition of an API and how it works, starting with the categories of API.
In the book Continuous API Management, NGINX wrote a foreword that opens with the following:
"The API is now the connective tissue of the world's technology fabric. There are tens of thousands of public and open APIs that deliver a huge range of functionality to web and mobile applications, from weather data to betting odds to flight arrival times to voice connectivity. The total API universe is many times that size when you factor in closed APIs used for gated services. Microservices and Kubernetes are contributing to the explosion of the API economy, as well; APIs are the default communication modality for cloud native applications."
Clearly, in the author's view, APIs play one of the most important roles in the modern connected world, and we agree.
According to Jacob Beningo, an API defines a set of routines, protocols, and tools for creating an application. An API defines the high-level interface of the behavior and capabilities of the component and its inputs and outputs. An API should be created so that it is generic and implementation independent. One very important thing about APIs is that their inputs and outputs should not change unless absolutely required. When they change, calls to the API and the processing of responses must be reprogrammed to accommodate the change, which can be costly and time consuming.
When interacting with (or even building your own) APIs, one of the very first things that must be determined is what category or type of API you'll be working with. There are three general types of APIs, with a fourth combined type. While this high-level categorization won't necessarily affect the overall build of the integration, it will directly impact the type of authentication used, security requirements, and the locations your integration services need access to (e.g., internal network, VPN, Internet).
An open API is exactly what it sounds like—completely open to the public with no access restrictions. While less common, there are some data sources, such as weather information, that fall into this category. Some additional open APIs that might be useful include:
A partner API is one that is only available to business partners or customers. These APIs have access restrictions that require some form of authentication or verification before use. Partner APIs are typically accessible over the Internet, but the owner will provide credentials, a token, or another authentication mechanism before granting access. Examples of this type of API include services such as Twitter, Facebook, AWS, Cisco Meraki, or Mist Systems.
Internal APIs are systems that only expose data to systems housed within the same organization's network. With internal APIs, you'll most likely be dealing with a software appliance or application that resides in your data center. Systems such as Network Access Control (NAC) or location-based systems (LBS) will often have internal APIs.
A composite API is simply a combination of any of the three methods above. Aggregation of data from public, partner, and internal APIs into a single usable data source is considered a composite API.
When building an integration of systems, a standard communication method will be defined in the documentation for the remote system. In the vast majority of cases, this will be a web-based technology utilizing encrypted Hypertext Transfer Protocol (HTTP). The encryption method used will be Transport Layer Security (TLS), although it may still be referred to as Secure Sockets Layer (SSL), the now-deprecated predecessor to TLS. Both non-encrypted and TLS-encrypted HTTP are the common connection methods of the Internet. Utilizing an established HTTP connection, clients and servers will interact using REST, Webhooks, RESTCONF, OpenConfig, or many others.
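For example, a typical REST interaction over TLS-encrypted HTTP is a simple authenticated GET. The sketch below uses Python's third-party requests library; the base URL, token, and JSON fields are placeholders, as the real ones come from the vendor's API documentation.

```python
import requests  # widely used third-party HTTP client (pip install requests)

# Hypothetical cloud controller endpoint and API token
BASE = "https://cloud.example.com/api/v1"
HEADERS = {"Authorization": "Bearer YOUR-API-TOKEN"}

# A typical REST read: GET a collection over HTTPS
resp = requests.get(f"{BASE}/devices", headers=HEADERS, timeout=10)
resp.raise_for_status()          # fail loudly on HTTP errors
for dev in resp.json():          # assuming the API returns a JSON list
    print(dev.get("name"), dev.get("status"))
```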
With some APIs, HTTP is not utilized, and the remote service may be using a WebSocket. While similar to HTTP, WebSockets provide a full-duplex, two-way communication channel, enabling low-overhead communication between the server and the client over the same IP ports and, in some cases, the exact same web servers. In addition to basic two-way communication, WebSockets provide a mechanism for streaming messages, which can be necessary for data flows providing system telemetry.
Finally, non-HTTP-based methods exist, such as Message Queuing Telemetry Transport (MQTT) and Network Configuration Protocol (NETCONF). MQTT is a publish-subscribe messaging protocol designed for Machine-to-Machine (M2M) connectivity. Primarily used for Internet of Things (IoT) sensor-type devices, MQTT requires minimal resources and provides efficient distribution of information in a one-to-one or one-to-many model. NETCONF is an XML-encoded standard developed to provide mechanisms for installation, updates, and removal of configuration items specifically in network devices using the Remote Procedure Call (RPC) layer.
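As a minimal illustration of MQTT's publish-subscribe model, the sketch below uses the Eclipse Paho Python client (pip install paho-mqtt); the broker hostname and topic hierarchy are invented for the example.

```python
import paho.mqtt.client as mqtt  # Eclipse Paho MQTT client

def on_message(client, userdata, msg):
    # Called for every message arriving on a subscribed topic
    print(f"{msg.topic}: {msg.payload.decode()}")

client = mqtt.Client()  # paho-mqtt 2.x also expects mqtt.CallbackAPIVersion.VERSION1 here
client.on_message = on_message
client.connect("broker.example.com", 1883)       # hypothetical broker, default MQTT port
client.subscribe("site1/sensors/+/temperature")  # '+' is a single-level wildcard
client.publish("site1/sensors/room4/temperature", "21.7")
client.loop_forever()  # blocks, processing network traffic and callbacks
```

The one-to-many distribution happens at the broker: every client subscribed to a matching topic receives the published reading, with no direct connection between publisher and subscribers.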
To start building any automation or integration, you must first choose the language you will be using. Often this is a trivial decision based solely on your familiarity and comfort level, but depending on the platforms, data conversion needs, or tasks that need to be completed, you may be forced outside of your comfort zone. In order to quickly adapt, you should be aware of a few high-level differentiators between types of languages, styles of programming, and the strengths and weaknesses each possess so you can effectively build the solution.
All scripting languages are programming languages, but the reverse is not necessarily true. There will be syntactical and idiomatic differences between all languages, but the most significant difference for our purposes is that scripting is done with interpreted languages rather than compiled languages. A compiled language is one that is written and, before being run, is translated—compiled—into another target language and run directly by the host operating system. Interpreted languages, on the other hand, are read by a service or runtime engine installed on the host operating system and translated into an intermediate form as they run—a process called runtime compilation. There is a bit more intricacy to these processes, but that information is out of scope for this book. Compiled languages are also referred to as unmanaged, and interpreted languages as managed.
On the surface, managed languages may be more appealing since they can be easily run, don't require direct interaction with system resources, and have quite a bit more flexibility—but that comes at a cost. Runtime compilation allows for quick modification of code and much easier testing, but managed languages run much more slowly. Unmanaged languages, typically being compiled directly down to machine code, not only run much faster but also allow (or require, depending on your point of view) more in-depth system functions like resource management and hardware access. This is due to the unmanaged approach to underlying functions—hence the name.
Common managed languages include:
Python: Python is one of the most popular languages for network automation, utility creation, robotic process automation (RPA), and IoT development. It is a general-purpose programming language that prioritizes readability and simplicity and has a wide community of users ranging from senior software engineers to data analysts. It has a tremendous ecosystem of libraries and frameworks, making it rare that you have to start from scratch to implement code solutions. For example, there are libraries that handle all the intricacies of encryption, hashing, database access, MQTT server access, and much more. It does require that Python and all required libraries be installed on the machine that runs the code.
Java: Java is a general-purpose, object-oriented programming language that can run on any platform for which a Java runtime is available. Java code is compiled, but not to a native binary. It requires the Java runtime to function. Programming in Java feels more like programming in C++ than an interpreted language, but it provides the portability of using a runtime. It is very popular as an enterprise development language, even though many have been frustrated by runtime version compatibility problems over the years.
PHP: PHP (Hypertext Preprocessor) is a server-side scripting language that is commonly used for web development and can also be used for Internet of Things (IoT) applications. In most IoT cases, it is used to build web server applications that interact with IoT devices and the data they generate. In some rare cases, it can be installed natively on IoT devices to perform scripting operations. Like Python, it can be easily integrated with protocols and services like MQTT and CoAP.
R: R is a programming language and software environment for statistical computing and graphics. It is widely used among statisticians and data scientists for developing statistical software and performing data analysis. In the context of IoT, R is used on the data analysis front to perform statistical calculations against IoT device-generated data. It provides modeling techniques, statistical tests, time-series analysis, and classification for data analytics.
JavaScript: JavaScript is a programming language most frequently used to create dynamic web pages where the code runs in the browser, unlike PHP, which runs on the server. However, it is also used in tools like Node-RED to create functions for IoT prototyping and production solutions. Node-RED is built on Node.js, a runtime environment for JavaScript. JavaScript is supported in all major web browsers and is used as an embedded scripting language in many other solutions as well.
TypeScript: TypeScript is a superset of JavaScript that adds optional static types, class-based object-oriented programming, and other features. Developed by Microsoft to make large-scale JavaScript development more manageable, it is now an open-source project with a growing community of contributors. TypeScript code is transpiled (converted) to JavaScript so it can run in any browser or JavaScript environment.
Common unmanaged languages include:
C: A mature procedural programming language that compiles to native executables. Frequently used in firmware development, device driver development, and many other low-level hardware operations. It can be used to build entire operating systems and even other programming languages, like Python.
C++: The object-oriented big sister to C. The code looks very similar to C and it compiles to native executables but has support for classes, objects, and other object-oriented concepts.
Go (Golang): Go is an open-source language developed at Google beginning in 2007. It is simple, efficient, and designed to be scalable. Its syntax is similar to C and C++, and it can compile for Windows when coding on a Linux platform or for Linux when coding on a Windows platform. This cross-compilation capability is excellent for portability. Supported compilation targets include Windows, Solaris, OpenBSD, Linux, Darwin, and Android.
Pascal/Delphi: Pascal is the older of the two languages. Delphi uses enhanced Pascal syntax and was introduced in the 1990s as a visual development environment (similar to Visual Basic). Both Pascal and Delphi can compile to native executables and have a syntax somewhere between Python or BASIC and C++ in terms of complexity.
Besides being managed languages, scripting languages offer a familiar way of working for most network admins. If you're at all familiar with network devices or Linux operating systems, you are already scripting in a manner of speaking. One common scripting language is Bash, the default shell and command language found on most Linux variants. Using Bash, you can write a set of commands to automate operating system interactions into a simple script that runs as if you were entering each command manually. Similarly, in network operating systems such as Cisco's IOS or Aruba's ArubaOS, the startup and running configurations can be considered scripts: the same commands you would manually enter in the shell are stored and executed in order. While rudimentary, these examples represent a style of programming that is often called functional or procedural.
Functional programming is a style that separates data and behaviors. Any data created is immutable, or unchangeable, and the data is run through functions which return new data.
The alternative to functional programming is Object-Oriented Programming (OOP), which utilizes objects rather than basic data containers. These objects are structures based on and inherit properties from classes—frameworks for objects—that contain information (known as attributes) and code functions (known as methods) for data manipulation. An object is changeable and able to be stored and shared throughout the program.
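The difference is easier to see in code. This short Python sketch solves the same small task in both styles:
# Functional style: immutable data passed through a function that returns new data
readings = (21.5, 22.0, 19.8)  # a tuple cannot be changed after creation

def to_fahrenheit(celsius_values):
    return tuple(c * 9 / 5 + 32 for c in celsius_values)

print(to_fahrenheit(readings))  # the original tuple is untouched

# Object-oriented style: a class bundles attributes (data) and methods (behavior)
class Sensor:
    def __init__(self, name):
        self.name = name      # attribute
        self.readings = []    # attribute

    def record(self, value):  # method that mutates the object's state
        self.readings.append(value)

sensor = Sensor("room1")
sensor.record(21.5)
print(sensor.name, sensor.readings)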
Some languages handle each style slightly differently, so the style you intend to use should be taken into account when choosing a language. While there is much debate about which is better or more powerful—OOP or functional—this choice depends almost entirely on the situation.
Haskell, Lisp, Scala, and Clojure are examples of functional languages. C, C++, Pascal, Fortran, BASIC, and COBOL are examples of procedural languages, though some may support OOP concepts as well. Java, C#, C++, Python, R, and Ruby are examples of OOP languages, though they can also be used to implement procedural and functional programming, or at least get very close to pure functional programming.
Ultimately, the question comes down to: what language do you know that can do what you require?
When reviewing the data that will be passed between the remote system and your integration, the makeup of the data is important. There are two universal types of data when programming—structured and unstructured. Structured data is made up of information with a clearly defined layout and format. Conversely, unstructured data is information that has no defined model or schema.
Structured data makes searching, manipulation, and storage easier to manage, while unstructured data generally has to be interpreted and parsed for meaning before anything can be done with it. An example of structured data for those unfamiliar with programming would be an address printed on an envelope. The information follows a standard format, and each field has meaning—from the street and number to the city, state, and zip code—each piece can be easily extracted and has a defined purpose.
Unstructured data could be represented by a simple text document or a post on a social media site. There is meaning in the information, but it isn’t clearly defined and requires some level of interpretation to extract.
One additional note: some information can be considered semi-structured. For instance, the message body of an email is plain text and therefore unstructured, but when you evaluate the entire message—headers and all—the data becomes semi-structured. Some languages handle unstructured data better than others, allowing the programmer to build models for the data and quickly parse it into structured data. This should be taken into account when deciding on a programming language.
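As a simple illustration of turning unstructured text into structured data, this Python sketch uses a regular expression as a lightweight model for the address-style example given earlier:
import re

# A free-text line with an implicit structure (city, state, zip)
text = "Atlanta, GA 30301"

# A simple model of the expected format, expressed as a regular expression
pattern = re.compile(r"(?P<city>[A-Za-z ]+), (?P<state>[A-Z]{2}) (?P<zip>\d{5})")

match = pattern.match(text)
if match:
    # The unstructured string has become structured data with named fields
    print(match.group("city"), match.group("state"), match.group("zip"))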
Additionally, you must consider the data modeling languages often supported by the development environment. For example, XAML is most commonly used with .NET. If you already know .NET and XAML is an available return set from the system you plan to automate, it will likely be a good choice for you.
Finally, when deciding on a language (and this is probably the most important factor), experience, comfort, and access to helpful resources are going to be paramount. When developing an integration with a remote system, a specific language may be more capable and better suited, but if you do not have any background with it or the ability to learn it before writing the integration, you'll end up spending a lot more time than necessary getting it right.
Later in this chapter, we will explore details for a few of the most commonly used languages today, the potential need for data conversion between models or structures, and examples of how to use them.
Examples of structured data include database tables, CSV files, and JSON or XML documents.
Examples of unstructured data include free-form text documents, email message bodies, images, audio files, and social media posts.
There are many types of integrations, but in the world of networking and wireless, they will mostly fall into one of two categories: management or automation. While many off-the-shelf solutions you may be familiar with combine both into a single interface or platform, for the purposes of this book, we will address them individually.
"Measurement is the first step that leads to control and eventually to improvement. If you can't measure something, you can't understand it. If you can't understand it, you can't control it. If you can't control it, you can't improve it."
~ H. James Harrington
There comes a point where measuring how things work is paramount to increasing our understanding. Automation and programmability should first enhance our ability to measure system performance, providing data that aids our understanding and ultimately gives us the ability to decide what action is necessary.
Managing a large or distributed network requires the administrator to be in constant contact with their devices. By gathering a wide range of information and calculating metrics, they can maintain a clear view of the network's health and operation. Additionally, storing this information allows for the creation of a baseline—a point-in-time description of a device or devices used to define state and track subsequent changes—and for maintaining historical data for issue resolution and configuration tracking.
Historically, basic management and monitoring were done using antiquated polling methods, but in the age of programmatic networking and the DevOps engineer, better methods exist. Modern management and monitoring typically rely on a push/pull model of data gathering, where the integration either periodically polls or receives data sent from the managed system. The process performing this function is usually a daemonized (background) application or script running on a central server, storing information in relational databases, configuration files, or even plain text files.
The gathered information can include simple status (online/offline), device configuration state, health information, or access and authorization control logs. This data can be used to monitor and verify details such as configuration compliance against a baseline or to maintain records of management access.
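A very small polling daemon can be sketched in Python using only the standard library. The device list here is a hypothetical placeholder, and a successful TCP connection is treated as "online":
import socket
import time
from datetime import datetime, timezone

# Hypothetical device list -- in practice this would come from a database or config file
DEVICES = [("ap-01.example.com", 443), ("switch-01.example.com", 22)]

def is_reachable(host, port, timeout=3):
    # A successful TCP connection counts as "online" for this simple status check
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

while True:
    for host, port in DEVICES:
        status = "online" if is_reachable(host, port) else "offline"
        # A real integration would write this record to a database instead of stdout
        print(f"{datetime.now(timezone.utc).isoformat()} {host} {status}")
    time.sleep(60)  # poll once per minute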
More in-depth monitoring is often referred to as network telemetry or analytics. Telemetry is one of the easiest—and often overlooked—integrations that can be built. Telemetry (derived from the Greek tele, meaning "at a distance," and -metry, meaning "related to measuring") is defined on Wikipedia as "an automated communications process by which measurements and other data are collected at remote or inaccessible points and transmitted to receiving equipment for monitoring."
In the context of a wireless network, telemetry includes data such as client count, channel utilization, CPU utilization, traffic throughput, location-based information, or any number of key performance indicators (KPIs). Telemetry systems often serve as the storage layer for reporting on internal policy adherence or regulatory compliance. These systems commonly use a publisher/subscriber model (if available) for data collection and store information in a time-series database.
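A toy version of baselining telemetry can be written with the Python standard library; the KPI values and thresholds here are illustrative only:
import statistics
import time
from collections import deque

# Keep the most recent 100 samples of a KPI (e.g., channel utilization)
samples = deque(maxlen=100)

def record(value):
    samples.append((time.time(), value))  # timestamped, as in a time-series database

def deviates_from_baseline(value, threshold=2.0):
    # Flag a sample more than `threshold` standard deviations from the rolling mean
    values = [v for _, v in samples]
    if len(values) < 10:
        return False  # not enough history to establish a baseline yet
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    return stdev > 0 and abs(value - mean) > threshold * stdev

record(42.0)
print(deviates_from_baseline(95.0))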
"The first rule of any technology used in a business is that automation applied to an efficient operation will magnify the efficiency. The second is that automation applied to an inefficient operation will magnify the inefficiency."
~ Bill Gates
Personally, I call this "engineering amplification" because of the way a properly implemented system can give a single administrator the power of many. Automation is the other type of common integration used by network administrators to perform repetitive tasks or procedures with minimal or no human interaction. The tasks can range from simple management routines to configuration updates to proactive operational optimizations.
The process can run as a background service (daemon), a manually initiated script by an administrator, or an automatically triggered script—either proactively or reactively—based on configured inputs.
Automation enables numerous advantages in modern networks, such as consistent and repeatable configuration changes, fewer manual errors, faster response to network events, and more engineering time freed for higher-value work.
Automation is often mentioned alongside telemetry and machine learning as a way to detect anomalies faster and more accurately than any human could, allowing for precise configuration adjustments as well as security enforcement.
Automation systems come in many forms: custom in-house solutions, open-source packages, and enterprise software suites. Each offers unique benefits, and the decision on which to use depends entirely on goals, budget, and the size and scope of systems being integrated into the platform.
In the next chapter, we will explore a few freely available packages that can be deployed in various ways, including use within your own custom scripts.
Before beginning a project, an integration developer will need to identify the application architecture used by each system involved, as well as the integration architecture in use by the organization, and work to understand both in detail. The application architecture will determine what, if any, hardware or software may need to be purchased, deployed, or altered, as well as the resource and security implications.
The integration architecture is a reference model for how multiple systems within an enterprise interact and share information (if at all). These individual systems can be directly related, indirectly reliant on one another, or completely independent from one another.
Care must be taken when building a greenfield integration where no application or integration architecture exists, to avoid introducing issues by choosing an architecture that doesn't scale with the organization or its applications. This decision could have long-lasting repercussions and lead to significant technical debt for the organization.
The application architecture can either be dictated by the applications and systems that will be sharing information or determined through design choices made by the integration designer. Based on the project and integration needs, it should be fairly straightforward to determine which type to choose.
A monolithic application is the most traditional, familiar, tried-and-true architecture and the way most enterprise applications have historically been deployed. By definition, it consists of a single system utilizing a narrow set of technologies and dependencies to provide a service or services for an enterprise using a shared codebase and libraries.
Monolithic applications are typically maintained by a focused team with deep institutional knowledge and familiarity with the entire system. While still common and relevant, monolithic applications are very difficult to scale and hard to extend with new features due to the tightly coupled codebase and the dependencies on system libraries and languages intertwined throughout the system. This architecture is slowly (very slowly) being phased out where possible in favor of more modular approaches.
Microservices are a group of loosely coupled services (each one similar to a monolithic application) separated into smaller, function-driven components. These systems are typically deployed in a containerized model (e.g., Docker or Kubernetes, often on cloud platforms such as AWS). Each component in this architecture is lightweight, modular, self-contained, and can be independently scaled based on needs—often dynamically. By making each part of the system independently deployable, a microservices architecture is highly scalable and able to react to sudden spikes in resource demands.
The disadvantages of this architecture can surface quickly and may be difficult to overcome. There is a measurable increase in the effort needed during the initial design, and without a proper plan, the levels of granularity can become overwhelming. Additionally, the added complexity of a containerization service to manage the underlying systems and the testing required to validate the application as a whole are significantly increased.
Serverless applications are event-driven systems that separate the application from the underlying resources, removing the need for "always-on" server components and providing a highly available, on-demand application environment. These applications are run on Function-as-a-Service (FaaS) and Backend-as-a-Service (BaaS) offerings such as AWS Lambda, Google Cloud Functions, and Azure Functions. They are easy to deploy, cost less than traditional cloud offerings, and scale automatically as load increases. However, due to the cloud provider nature of this architecture, vendor lock-in is almost guaranteed.
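The programming model is correspondingly simple: a serverless function is usually just a handler that the platform invokes for each event. A minimal AWS Lambda-style handler in Python might look like the following (the event field shown is hypothetical):
import json

def lambda_handler(event, context):
    # The platform invokes this function on demand with the triggering event;
    # no always-on server process is required.
    sensor_id = event.get("sensor_id", "unknown")  # hypothetical event field
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"processed reading from {sensor_id}"}),
    }

# Local test invocation; in production the cloud platform supplies event and context
print(lambda_handler({"sensor_id": "s1"}, None))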
As mentioned above, the integration architecture is an important decision to make—especially if it has not been established already.
In a Point-to-Point integration architecture, every application is tightly coupled to its partners, meaning it has hard-coded connections to every other application it needs to communicate and share data with. In this architecture, all messaging and data transformation are specifically designed and handled to work between the individual member systems. Managing each integration individually, including tracking member application revisions, will eventually cause complexity to become overwhelming.
The benefit of a point-to-point architecture is that a reference design will likely exist for gathering data from one system and sharing it directly with another. If a reference design does not exist, it is fairly simple to build a proof-of-concept integration that can be translated directly into a production environment. The disadvantage of this model, however, is that while it can and does work quite well at a small scale, scalability becomes highly problematic as systems grow and more applications are added to the enterprise.
Much more flexible than a point-to-point integration, a Service Bus or Enterprise Service Bus (ESB) architecture provides scalability through a central system that brokers communication between individual applications. In this architecture, all messaging and data transformation are handled by the broker. Each application has a connector to the ESB that acts as a client and/or server depending on its role in the enterprise.
If a single application undergoes revisions that change the way data is presented or consumed, the only required update is at the broker—remaining transparent to all other applications. This architecture is highly scalable and can often improve application performance and reduce development cycles, as launching a new integration does not impact any other systems.
While more efficient than a point-to-point architecture, the service bus has its own disadvantages. It requires custom data translation models to be built for each application sharing or requesting information, and it is not as easily deployed in a proof-of-concept environment that mirrors production systems.
It may seem like the service bus architecture is the only logical choice based on the benefits listed here, but that is not the case. There are valid reasons to choose one architecture over another, including regulatory data controls, API category, or project budget.
Integrations and applications based on any of the above architectures can be designed in various ways that separate individual functions into multiple tiers—or combine them into a single-tier structure.
An application can be divided into logical layers based on distinct functions within the architecture. The commonly accepted layers are Presentation, Business Logic, and Data Access.
Presentation Layer
The presentation layer is where the user or remote service interacts with the application. Typically running at this layer is a web server that relays requests from the client to the business logic layer and back. It provides any transport-dependent translation to the data being received or returned and packages it accordingly. Examples of common web servers deployed at this layer are Apache and Nginx.
Business Logic Layer
The business logic layer is where rules are applied that determine how data is created, stored, and changed. It enforces the methods and data routing used to provide information to the presentation layer. This is where software development occurs and interacts with the upper and lower application layers.
Data Access Layer
The data access layer is where logging, traffic routing, and any other services required to support the business logic layer occur. This is where database and file system access is handled. The specific file system will be operating system and hardware dependent but will operate the same way for all: a request from the business logic layer to retrieve a file will be handled, and the file data returned.
As for the databases that may reside here, quite a few different types exist; relational, non-relational, and time-series are the primary ones we're concerned with in the context of network systems integrations.
SQL (Relational) Database
Relational databases are collections represented by a schema that defines the structure of the data types and attributes contained in tables consisting of rows and columns of information, logically similar to a spreadsheet. These databases can be accessed and updated using the Structured Query Language (SQL). The name "relational" comes from the use of keys to reference data in other tables, rows, or columns. This allows the data to stay relatively compact and quickly accessible while maintaining some level of structured information. Examples of SQL databases include MySQL, PostgreSQL, Microsoft SQL Server, and Oracle Database.
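To see the row-and-column model in action, this self-contained Python sketch uses the standard library's sqlite3 module and an in-memory database:
import sqlite3

# An in-memory database keeps the example self-contained
conn = sqlite3.connect(":memory:")

# The schema defines the table's structure: column names and data types
conn.execute("CREATE TABLE access_points (name TEXT, channel INTEGER, power INTEGER)")
conn.execute("INSERT INTO access_points VALUES (?, ?, ?)", ("AP-01", 3, 20))
conn.commit()

# SQL queries retrieve only the rows and columns matching the given criteria
for row in conn.execute("SELECT name, channel FROM access_points WHERE power > 10"):
    print(row)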
NoSQL (Non-relational) Database
Opposite to the strict schemas of relational databases, non-relational databases are schema-agnostic. There are several subtypes of NoSQL databases that store and access data in slightly different ways, but the common feature is that they all allow unstructured and semi-structured data to be stored and manipulated. NoSQL architectures allow for greater horizontal scaling and performance but often sacrifice immediate consistency between nodes for eventual consistency of data. Examples include MongoDB, Apache Cassandra, Redis, and CouchDB.
Time-series Database (TSDB)
A time-series database is a system specifically designed to efficiently store and index data that has a timestamp associated with each entry. This is especially useful for telemetry/analytics data, where a given KPI changes over time and you may want to create baseline information that allows automation systems to react to drastic changes, or to create visualizations for management dashboards. Examples of TSDBs include InfluxDB, Prometheus, Graphite, and TimescaleDB.
Single-Tier
A single-tier application is one where all application layers and data reside locally (or on shared storage), and everything is self-contained. This application can have multiple services running, performing as different application layers (e.g., web server, database, application logic), all sharing resources.
Multi-Tier
A multi-tier application is one where the layers of an application are divided into separate systems. This allows for systems that are more easily deployed and scaled. While the term multi-tier doesn't describe precisely how the architecture is laid out, it can be subdivided into more specific types, such as two-tier, three-tier, and n-tier designs.
In addition to the application layers and their associated services, the operating system runs additional services to provide basic functionality. While not part of whatever application is configured to run on the system, these services provide functionality that is essential for an application to run. Below are some of the common services that are important to know, but this list is by no means comprehensive.
The operating system manages network communications, ensuring that resources are correctly shared between all applications requesting access and properly routing packets up and down the stack. Underneath this service are several other services, including the Dynamic Host Configuration Protocol (DHCP), Domain Name System (DNS), Network Time Protocol (NTP), and the system firewall.
DHCP Service
The DHCP service, if configured, manages requests for dynamically assigned network addresses. It requests an address from a remote DHCP server and uses the response to configure the local network interface with the appropriate network, host, DNS, and time server addresses, as well as any other optional settings that may be provided.
DNS Service
The operating system runs a DNS lookup service that relays domain-name-to-IP-address queries from applications to the configured remote (or local) DNS servers.
NTP Service
A service that maintains operating system time synchronization with internal or Internet-based time servers. Time-sensitive applications will require time to be synchronized across all participating systems.
Firewall
A network security service designed to monitor and police incoming and outgoing traffic based on a set of predetermined rules. A firewall may provide both network- and application-layer filtering. A network-based rule permits or denies traffic based on its source or destination IP address and can be configured as broadly as a large network segment or as granularly as an individual host. An application-based rule permits or denies traffic based on its source or destination TCP/IP port.
A container is a solution that allows you to combine an application and its dependencies into a single, self-contained package. This package, known as a container, can be placed on any machine with a compatible container engine and executed without requiring modifications to the host system. For example, Docker is the most popular container technology; if you create a Docker container, anyone with the Docker engine installed can run the application(s) it contains without installing them or their dependencies on their system.
Containers are often compared to virtual machines because they provide a way to run multiple isolated systems on a single host. However, unlike virtual machines, containers do not include a separate operating system for each instance. Containers use the host system's kernel and share the same operating system. Therefore, containers are much more lightweight and efficient than virtual machines, as they don't require the overhead of a full operating system for each instance.
As stated, Docker is the most popular containerization technology and platform, but it's not the only one. Other container solutions include containerd, CRI-O, runC, and rkt, and this list is not exhaustive.
Kubernetes is sometimes confused with Docker and vice versa, but Kubernetes is a container orchestration solution, and it supports container runtimes other than Docker. Docker builds OCI-compliant images rather than a proprietary format, so images created with Docker work in Kubernetes, which supports any OCI image.
The major advantage of using containers is their portability. A containerized application can be easily moved from development to test, and finally to production. It also allows easy scaling of the application by running multiple instances of the same container. Additionally, it provides a consistent environment for the application, ensuring that the same version of the application runs the same way, regardless of where it's deployed.
When you create a container using Docker, you add your application and its dependencies to the base image and use Docker to create a new container image. This new image contains everything needed to run your application, including the application code and any dependencies. However, depending on the scenario, the image may only work on the same operating system in which it was built. The details of this are beyond the scope of this summary, but you should study any container solution you choose to use and ensure that it will support images on the OS you're targeting.
Once the container is running, it behaves just like a normal OS process, except that it's isolated from the host system and other containers. The container has its own file system, network stack, and process namespace, which ensures that it cannot interfere with other processes running on the host.
Docker also provides easy-to-use management features, such as the ability to start, stop, and remove containers, inspect container logs, and view running processes. Docker Compose is another useful tool that allows you to define a multi-container application using a single compose file, making it much easier to manage complex applications.
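These management operations can also be driven programmatically. A minimal sketch using the Docker SDK for Python (assuming Docker is installed and running locally) might look like this:
import docker  # Docker SDK for Python: pip install docker

# Connect using the local Docker daemon's default socket
client = docker.from_env()

# Run a container in the background, then inspect and manage it
container = client.containers.run("alpine", "echo hello from a container", detach=True)
print(container.logs())

for c in client.containers.list(all=True):
    print(c.name, c.status)

container.remove(force=True)  # stop (if necessary) and remove the container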
Containers are widely used in production environments for running microservices, as each service can run in its own container, providing an easy way to scale and manage each service independently. This is a primary application for IoT solutions on both the application and integration sides. Containers are also used for testing and development, as they allow developers to easily replicate production environments. This is particularly useful when creating a proof-of-concept for your IoT solution.
A proof-of-concept is a small-scale version of a system, often built from off-the-shelf components, that proves the solution can work. Technically, the proof-of-concept is just a prototype or demonstration to show that the real thing can work. The real thing may end up being custom-built IoT devices with a run of 10,000 units, while the proof-of-concept may be built from a controller board and a computer board to show how the solution would work. In such cases, firing up some containers to provide required infrastructure services can be quite helpful.
When accessing an API endpoint or passing data between applications, a known data type and structure must be used in order for the information to be usable by either side of the exchange. The defined data types and the structures that contain them dictate the programmatic understanding and available operations that can be performed on each piece of information. In simpler terms, a data type is a classification of a piece of information, while a data structure is a well-defined way of organizing a grouping of information.
While an entire book could be written (and likely has been) about the various data types and structures and how they translate to a particular programming language, for the purposes of this book, we will only take a high-level look at a few data types common to popular programming and scripting languages.
For an application or script to be able to understand and perform operations on a particular piece of data, that data must conform to a specific set of rules. These rules are defined by the type of data. At a high-level view of the most common scripting languages, data will fall into one of the following types: strings (text), numbers (integers and floating-point values), booleans (true/false), and null (the absence of a value).
A data structure is simply a way for data to be organized and stored for easy access to the contained information. By having a clearly defined structure, this information can be shared across processes or applications without the need for custom decoding, parsing, or mapping functions, as that can all be handled internally by the language and is transparent to the developer.
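In Python, for example, the common data types and the two workhorse data structures look like this:
# Common data types
name = "AP-01"        # string
channel = 3           # integer (number)
utilization = 42.7    # float (number)
enabled = True        # boolean
uplink = None         # Python's equivalent of null

# Common data structures: an ordered list and a key/value dictionary
channels = [1, 6, 11]
access_point = {"name": name, "channel": channel, "enabled": enabled}
print(access_point["name"], channels[0])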
One of the most common data exchange formats in use today is JavaScript Object Notation, or JSON. While the name may suggest language dependence, JSON is completely language-independent. It is a lightweight format that is easy for both humans and computers to read and write. Due to its small size and ease of use, JSON is ideal for exchanging data between disparate systems or processes and has quickly become the de facto standard for web-based communications.
As a common internet data exchange format, JSON's internet media type is defined as application/json, and the most common file extension used is .json. Starting in the early 2000s, JSON has gone through multiple iterations and, as of 2017, is defined by IETF RFC 8259. Using data types and structures common to nearly all modern languages used by developers today—including (but not limited to) Go, JavaScript, Java, Perl, and Python—JSON provides flexibility to both API producers and consumers by removing the need for either side to know or care what the other is using to write their applications.
The complete specification for JSON, as well as a listing of language-specific libraries and implementations, can be found at https://json.org.
JSON's data structure is composed of two types of containers: objects and arrays. The values in both structural elements can be one of the supported data types or either of the core data structures (object or array).
object
Example: {"key": "value"}
array
Example: ["value_1", "value_2", "value_3", "value_4"]
JSON supports the following base data types: number, string, boolean, and null in addition to allowing nesting of the data structure.
number: An integer or floating-point value
string: A sequence of characters wrapped in double quotes
boolean: true or false
null: A representation of no value. Language-specific equivalents: None, nil
Example:
{
"String": "This is a string",
"Number": 100,
"Boolean": true,
"Null value": null,
"Nesting": {,
"Array": ["This", "is", "an", "array", "of", "strings."],
"Object": {
"String": "This is a string inside a nested object."
...
}
}
}
[
"This",
"is",
"an",
"array",
"of",
8,
"objects",
{"key": "with mixed data types"}
]
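Working with JSON in code is straightforward. Using Python's standard library json module as one example:
import json

# Parse (deserialize) JSON text into native Python structures
document = '{"String": "This is a string", "Number": 100, "Boolean": true, "Null value": null}'
data = json.loads(document)  # objects become dicts, arrays become lists

print(data["Number"] + 1)    # the value is already a native integer

# Serialize Python structures back into JSON text
print(json.dumps({"Array": ["This", "is", "an", "array"]}, indent=2))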
A document-based, open standard, markup language developed by the World Wide Web Consortium that is both human and machine-readable, XML (Extensible Markup Language) is designed for both data storage and transport. While numerous variations exist today, such as RSS, SOAP, SVG, and XHTML, we will simply be focusing on the core data structure and types within XML.
As a common internet data exchange and data storage format, XML's internet media type is defined as application/xml or text/xml, and the most common file extension used is .xml. First released in 1998, XML has gone through several changes and updates over the years; its media types were most recently defined by IETF RFC 7303 in 2014.
The complete specification for XML can be found on the W3C site, and at the time of this writing, it is version 1.1 (second edition).
At its core, XML is a hierarchical structure consisting of a string of characters and is often referred to as a tree. For our purposes, an XML tree consists of elements, tags, attributes, and content. If you're familiar with HTML, XML will have a very similar feel due to the use of opening and closing tags around each element.
An element is composed of a start tag (<tag>) and an end tag (</tag>), each enclosed within angle brackets, and any information contained between those tags is the element's content. All documents start with an XML declaration stating the version and encoding type used in the document:
<?xml version="1.1" encoding="UTF-8" ?>
An example of a logical tree structure and its XML representation is shown here:
Logical Structure
[tree diagram of the AccessPoint element and its child elements, matching the XML below]
XML Equivalent
<AccessPoint>
<Name>AP-01</Name>
<Radio_0>
<Band>2.4</Band>
<Channel>3</Channel>
<Power>20</Power>
<SSID>CWNP-Open</SSID>
<VLAN>10</VLAN>
<SSID>CWNP-Secure</SSID>
<VLAN>20</VLAN>
<Security>WPA2-PSK</Security>
</Radio_0>
</AccessPoint>
In addition to the elements and content, XML supports element attributes as a way to add name/value pairs of information to an element. This can reduce the length of the document as well as add more descriptive capabilities to your XML, as shown here using the same information as above but resulting in a much more compact piece of data:
XML Equivalent with Attributes
<AccessPoint name="AP-01">
<Radio_0 band="2.4" channel="3" power="20">
<SSID vlan="10">CWNP-Open</SSID>
<SSID vlan="20" security="WPA2-PSK">CWNP-Secure</SSID>
</Radio_0>
</AccessPoint>
Attributes must be unique per element; if multiple values are required in your data, you can use a comma-, semicolon-, or space-delimited list (depending on the data provided). Below you can see a space-delimited list for the SSID attribute "security," which tells us this SSID utilizes two types of security: WPA-PSK and WPA2-PSK.
XML Equivalent with Multiple Attributes
<AccessPoint name="AP-01">
<Radio_0 band="2.4" channel="3" power="20">
<SSID vlan="10">CWNP-Open</SSID>
<SSID vlan="20" security="WPA-PSK WPA2-PSK">CWNP-Secure</SSID>
</Radio_0>
</AccessPoint>
Elements can also be made empty by adding a forward slash before the final angle bracket of the tag. This can be used to simplify and compact your data even further. Below we can see the empty SSID element with an added attribute of "name" that replaces the element content.
XML Equivalent with Multiple Attributes and Empty Tags
<AccessPoint name="AP-01">
<Radio_0 band="2.4" channel="3" power="20">
<SSID name="CWNP-Open" vlan="10" />
<SSID name="CWNP-Secure" vlan="20" security="WPA-PSK WPA2-PSK" />
</Radio_0>
</AccessPoint>
A powerful data serialization language, YAML (YAML Ain't Markup Language) is very similar in function to JSON but primarily serves a different purpose. YAML is a document-based data storage structure that uses Python-like indentation and supports strings, integers, floats, lists, and associative arrays. It is a common configuration file type and is used for numerous open-source automation systems such as SaltStack, Ansible, and Nornir. YAML is quite useful for managing configurations and state when building your own automation or telemetry gathering systems.
Similar to JSON, YAML is built using common data types and structures to maintain language agnosticism and usability in your language of choice. The file extensions most commonly associated with YAML are .yml and .yaml, although the latter is considered best practice.
The complete specification for YAML can be found at https://yaml.org and at the time of this writing, it is version 1.2.
YAML's data structure is composed of two types of containers: associative arrays and lists. The values in both structural elements can be one of the supported data types or either of the core data structures (associative array or list).
Associative Array
# Associative Array
key_1: value_1
key_2: "value_2"
key_3: 'value_3'
# Inline associative array
{key_1: value_1, key_2: 'value_2', key_3: "value_3"}
List
# List of items
- value_1
- "value_2"
- 'value_3'
- value_4
# YAML recognizes a list from consistent indentation and a starting hyphen on each item
  - "value_1"
  - value_2
  - 'value_3'
  - value_4
# Inline list of items
["value_1", value_2, value_3, 'value_4']
YAML supports the following base data types: number, string, boolean, and null in addition to allowing nesting of the data structure.
# String
string_value: "This is a string"
# Integer
integer_value: 100
# Float
float_value: 10.5