Micro-segmentation Archives - ZPE Systems

AI Data Center Infrastructure

Artificial intelligence is transforming business operations across nearly every industry, with the recent McKinsey global survey finding that 72% of organizations had adopted AI, and 65% regularly use generative AI (GenAI) tools specifically. GenAI and other artificial intelligence technologies are extremely resource-intensive, requiring more computational power, data storage, and energy than traditional workloads. AI data center infrastructure also requires high-speed, low-latency networking connections and unified, scalable management hardware to ensure maximum performance and availability. This post describes the key components of AI data center infrastructure before providing advice for overcoming common pitfalls to improve the efficiency of AI deployments.

AI data center infrastructure components

A diagram of AI data center infrastructure.

Computing

Generative AI and other artificial intelligence technologies require significant processing power. AI workloads typically run on graphics processing units (GPUs), which are made up of many smaller cores that perform simple, repetitive computing tasks in parallel. GPUs can be clustered together to process data for AI much faster than CPUs.

Storage

AI requires vast amounts of data for training and inference. On-premises AI data centers typically use object storage systems with solid-state disks (SSDs) composed of multiple sections of flash memory (a.k.a., flash storage). Storage solutions for AI workloads must be modular so additional capacity can be added as data needs grow, through either physical or logical (networking) connections between devices.

Networking

AI workloads are often distributed across multiple computing and storage nodes within the same data center. To prevent packet loss or delays from affecting the accuracy or performance of AI models, nodes must be connected with high-speed, low-latency networking. Additionally, high-throughput WAN connections are needed to accommodate all the data flowing in from end-users, business sites, cloud apps, IoT devices, and other sources across the enterprise.

Power

AI infrastructure uses significantly more power than traditional data center infrastructure, with a rack of three or four AI servers consuming as much energy as 30 to 40 standard servers. To prevent issues, these power demands must be accounted for in the layout design for new AI data center deployments and, if necessary, discussed with the colocation provider to ensure enough power is available.

Management

Data center infrastructure, especially at the scale required for AI, is typically managed with a jump box, terminal server, or serial console that allows admins to control multiple devices at once. The best practice is to use an out-of-band (OOB) management device that separates the control plane from the data plane using alternative network interfaces. An OOB console server provides several important functions:

  1. It provides an alternative path to data center infrastructure that isn’t reliant on the production ISP, WAN, or LAN, ensuring remote administrators have continuous access to troubleshoot and recover systems faster, without an on-site visit.
  2. It isolates management interfaces from the production network, preventing malware or compromised accounts from jumping over from an infected system and hijacking critical data center infrastructure.
  3. It helps create an isolated recovery environment where teams can clean and rebuild systems during a ransomware attack or other breach without risking reinfection.

An OOB serial console helps minimize disruptions to AI infrastructure. For example, teams can use OOB to remotely control PDU outlets to power cycle a hung server. Or, if a networking device failure brings down the LAN, teams can use a 5G cellular OOB connection to troubleshoot and fix the problem. Out-of-band management reduces the need for costly, time-consuming site visits, which significantly improves the resilience of AI infrastructure.
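
To make the power-cycling example concrete, here is a minimal Python sketch of what scripted outlet control over an OOB path might look like. It is illustrative only: the host, credentials, and PDU command string are assumptions for a generic switched PDU reachable through a console server, not the syntax of any specific vendor's CLI.

```python
import paramiko

# Assumed values for illustration only; replace with your environment's details.
OOB_HOST = "198.51.100.10"            # console server reachable via the OOB network
PDU_COMMAND = "power cycle outlet 4"  # hypothetical CLI syntax; varies by PDU vendor

def power_cycle_outlet(host: str, user: str, password: str, command: str) -> str:
    """Open an SSH session over the OOB path and issue a PDU power-cycle command."""
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=user, password=password, timeout=30)
    try:
        _, stdout, stderr = client.exec_command(command)
        output = stdout.read().decode() + stderr.read().decode()
    finally:
        client.close()
    return output

if __name__ == "__main__":
    # The hung server comes back up without anyone driving to the site.
    print(power_cycle_outlet(OOB_HOST, "admin", "example-password", PDU_COMMAND))
```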

AI data center challenges

Artificial intelligence workloads, and the data center infrastructure needed to support them, are highly complex. Many IT teams struggle to efficiently provision, maintain, and repair AI data center infrastructure at the scale and speed required, especially when workflows are fragmented across legacy and multi-vendor solutions that may not integrate. The best way to ensure data center teams can keep up with the demands of artificial intelligence is with a unified AI orchestration platform. Such a platform should include:

  • Automation for repetitive provisioning and troubleshooting tasks
  • Unification of all AI-related workflows with a single, vendor-neutral platform
  • Resilience with cellular failover and Gen 3 out-of-band management

To learn more, read AI Orchestration: Solving Challenges to Improve AI Value.

Improving operational efficiency with a vendor-neutral platform

Nodegrid is a Gen 3 out-of-band management solution that provides the perfect unification platform for AI data center orchestration. The vendor-neutral Nodegrid platform can integrate with or directly run third-party software, unifying all your networking, management, automation, security, and recovery workflows. A single, 1RU Nodegrid Serial Console Plus (NSCP) can manage up to 96 data center devices, and even extend automation to legacy and mixed-vendor solutions that wouldn’t otherwise support it. Nodegrid Serial Consoles enable the fast and cost-efficient infrastructure scaling required to support GenAI and other artificial intelligence technologies.

Make Nodegrid your AI data center orchestration platform

Request a demo to learn how Nodegrid can improve the efficiency and resilience of your AI data center infrastructure.
 Contact Us

Why Securing IT Means Replacing End-of-Life Console Servers

Rene Neumann, Director of Solution Engineering

The world as we know it is connected to IT, and IT relies on its underlying infrastructure. Organizations must prioritize maintaining this infrastructure; otherwise, any disruption or breach has a ripple effect that takes services offline for millions of users (take the recent CrowdStrike outage, for example). A big part of this maintenance is ensuring that all hardware components, including console servers, are up-to-date and secure. Most console servers eventually reach end-of-life (EOL) and need to be replaced, but for many reasons, whether budgetary concerns or an “if it isn’t broken” mentality, IT teams often keep their EOL devices in service. Let’s look at the risks of using EOL console servers, and why replacing them goes hand-in-hand with securing IT.

The Risks of Using End-of-Life Console Servers

End-of-life console servers can undermine the security and functionality of IT systems. These risks include:

1. Lack of Security Features and Updates

Aging console servers lack adequate hardware and management security features, meaning they can’t support a zero trust approach. On top of this, once a console server reaches EOL, the manufacturer stops providing security patches and updates. The device then becomes vulnerable to newly discovered CVEs and complex cyberattacks (like the MOVEit and Ragnar Locker breaches). Cybercriminals often target outdated hardware because they know that these devices are no longer receiving updates, making them easy entry points for launching attacks.

2. Compliance Issues

Many industries have stringent regulatory requirements regarding data security and IT infrastructure. DORA, NIS2 (EU), NIST CSF 2.0 (US), PCI DSS 4.0 (finance), and the CER Directive are just a few of the updated regulations that are cracking down on how organizations architect IT, including the management layer. Using EOL hardware can lead to non-compliance, resulting in fines and legal repercussions. Regulatory bodies expect organizations to use up-to-date and secure equipment to protect sensitive information.

3. Prolonged Recovery

EOL console servers are prone to failures and inefficiencies. As these devices age, their performance deteriorates, leading to increased downtime and disruptions. Most console servers are Gen 2, meaning they offer basic remote troubleshooting (to address break/fix scenarios) and limited automation capabilities. When there is a severe disruption, such as a ransomware attack, hackers can easily access and encrypt these devices to lock out admin access. Organizations then must endure prolonged recovery (just look at the still-ongoing CrowdStrike outage, or last year’s MGM attack) because they need to physically decommission and restore their infrastructure.

 

The Importance of Replacing EOL Console Servers

Here’s why replacing EOL console servers is essential to securing IT:

1. Modern Security Approach

Zero trust is an approach that uses segmentation across IT assets. This ensures that only authorized users can access resources necessary for their job function. This approach requires SAML, SSO, MFA/2FA, and role-based access controls, which are only supported by modern console servers. Modern devices additionally feature advanced security through encryption, signed OS, and tampering detection. This ensures a complete cyber and physical approach to security.

2. Protection Against New Threats

New CVEs and evolving threats can easily take advantage of EOL devices that no longer receive updates. Modern console servers benefit from ongoing support in the form of firmware upgrades and security patches. Upgrading with a security-focused device vendor can drastically shrink the attack surface, by addressing supply chain security risks, codebase integrity, and CVE patching.

3. Ease of Compliance

EOL devices lack modern security features, but this isn’t the only reason why they make it difficult or impossible to comply with regulations. They also lack the ability to isolate the control plane from the production network (see Diagram 1 below), meaning attackers can easily move between the two in order to launch ransomware and steal sensitive information. Watchdog agencies and new legislation are stipulating that organizations follow the latest best practice of separating the control plane from production, called Isolated Management Infrastructure (IMI). Modern console servers make this best practice simple to achieve by offering drop-in out-of-band management that is completely isolated from production assets (see Diagram 2 below). This means that the organization is always in control of its IT assets and sensitive data.

Diagram 1: Though an acceptable approach, Gen 2 out-of-band lacks isolation and leaves management interfaces vulnerable to the internet.

Diagram 2: Gen 3 out-of-band fully isolates the control plane to guarantee organizations retain control of their IT assets and sensitive info.

4. Faster Recovery

New console servers are designed to handle more workloads and functions, which eliminates single-purpose devices and shrinks the attack surface. They can also run VMs and Docker containers to host applications. This enables what Gartner calls the Isolated Recovery Environment (IRE) (see Diagram 3 below), which is becoming essential for faster recovery from ransomware. Since the IMI component prohibits attackers from accessing the control plane, admins retain control during an attack. They can use the IMI to deploy their IRE and the necessary applications — remotely — to decommission, cleanse, and restore their infected infrastructure. This means that they don’t have to roll trucks week after week when there’s an attack; they just need to log into their management infrastructure to begin assessing and responding immediately, which significantly reduces recovery times.

A diagram showing the components of an isolated recovery environment.

Diagram 3: The Isolated Recovery Environment allows for a comprehensive and rapid response to ransomware attacks.

Watch How To Secure The Network Backbone

I recently presented at Cisco Live Vegas on how to secure the network’s backbone using Isolated Management Infrastructure. I walk you through the evolution of network management, and it becomes obvious that end-of-life console servers are a major security concern, both from the hardware perspective itself and their lack of isolation capabilities. Watch my 10-minute presentation from the show and download some helpful resources, including the blueprint to building IMI.

Cisco Live 2024 – Securing the Network Backbone

Edge Computing Use Cases in Healthcare

The healthcare industry enthusiastically adopted Internet of Things (IoT) technology to improve diagnostics, health monitoring, and overall patient outcomes. The data generated by healthcare IoT devices is processed and used by sophisticated data analytics and artificial intelligence applications, which traditionally live in the cloud or a centralized data center. Transmitting all this sensitive data back and forth is inefficient and increases the risk of interception or compliance violations.

Edge computing deploys data analytics applications and computing resources around the edges of the network, where much of the most valuable data is created. This significantly reduces latency and mitigates many security and compliance risks. In a healthcare setting, edge computing enables real-time medical insights and interventions while keeping HIPAA-regulated data within the local security perimeter. This blog describes six potential edge computing use cases in healthcare that take advantage of the speed and security of an edge computing architecture.

6 Edge computing use cases in healthcare

Edge computing use cases for EMS

Mobile emergency medical services (EMS) teams need to make split-second decisions regarding patient health without the benefit of a doctorate and, often, with spotty Internet connections preventing access to online drug interaction guides and other tools. Installing edge computing resources on cellular edge routers gives EMS units real-time health analysis capabilities as well as a reliable connection for research and communications. Potential use cases include:
  1. Real-time health analysis en route: Edge computing applications can analyze data from health monitors in real-time and access available medical records to help medics prevent allergic reactions and harmful medication interactions while administering treatment.
  2. Prepping the ER with patient health insights: Some edge computing devices use 5G/4G cellular to livestream patient data to the receiving hospital, so ER staff can make the necessary arrangements and begin the proper treatment as soon as the patient arrives.

Edge computing use cases in hospitals & clinics

Hospitals and clinics use IoT devices to monitor vitals, dispense medications, perform diagnostic tests, and much more. Sending all this data to the cloud or data center takes time, delaying test results or preventing early intervention in a health crisis, especially in rural locations with slow or spotty Internet access. Deploying applications and computing resources on the same local network enables faster analysis and real-time alerts. Potential use cases include:
  3. AI-powered diagnostic analysis: Edge computing allows healthcare teams to use AI-powered tools to analyze imaging scans and other test results without latency or delays, even in remote clinics with limited Internet infrastructure.
  4. Real-time patient monitoring alerts: Edge computing applications can analyze data from in-room monitoring devices like pulse oximeters and body thermometers in real-time, spotting early warning signs of medical stress and alerting staff before serious complications arise.
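
As a concrete illustration of use case 4, here is a minimal Python sketch of an edge alerting loop. Everything in it is an assumption for the example: the threshold, the poll interval, and the read_spo2()/notify_staff() hooks are hypothetical stand-ins for a real monitoring device API and an on-site alerting system.

```python
import random
import time

SPO2_ALERT_THRESHOLD = 92  # percent; illustrative value, not clinical guidance

def read_spo2() -> int:
    # Stand-in for polling a pulse oximeter on the local network;
    # returns simulated values so the sketch runs on its own.
    return random.randint(88, 99)

def notify_staff(message: str) -> None:
    # Stand-in for an on-site alerting hook (nurse station dashboard, pager, etc.).
    print(f"ALERT: {message}")

while True:
    reading = read_spo2()
    # Analysis runs on the same LAN as the device, so the alert fires
    # without a round trip to a cloud or data center.
    if reading < SPO2_ALERT_THRESHOLD:
        notify_staff(f"SpO2 at {reading}% is below threshold; check patient.")
    time.sleep(5)  # poll interval in seconds
```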

Edge computing use cases for wearable medical devices

Wearable medical devices give patients and their caregivers greater control over health outcomes. With edge computing, health data analysis software can run directly on the wearable device, providing real-time results even without an Internet connection. Potential use cases include:
  5. Continuous health monitoring: An edge-native application running on a system-on-chip (SoC) in a wearable insulin pump can analyze levels in real-time and provide recommendations on how to correct imbalances before they become dangerous.
  6. Real-time emergency alerts: Edge computing software running on an implanted heart-rate monitor can give a patient real-time alerts when activity falls outside of an established baseline, and, in case of emergency, use cellular and AT&T FirstNet connections to notify medical staff.

The benefits of edge computing for healthcare

Using edge computing in a healthcare setting as described in the use cases above can help organizations:

  • Improve patient care in remote settings, where a lack of infrastructure limits the ability to use cloud-based technology solutions.
  • Process and analyze patient health data faster and more reliably, leading to earlier interventions.
  • Increase efficiency by assisting understaffed medical teams with diagnostics, patient monitoring, and communications.
  • Mitigate security and compliance risks by keeping health data within the local security perimeter.

Edge computing can also help healthcare organizations lower their operational costs at the edge by reducing bandwidth utilization and cloud data storage expenses. Another way to reduce costs is by using consolidated, vendor-neutral solutions to host, connect, and secure edge applications and workloads.

For example, the Nodegrid Gate SR is an integrated branch services router that delivers an entire stack of edge networking, infrastructure management, and computing technologies in a single, streamlined device. Nodegrid’s open, Linux-based OS supports VMs and Docker containers for third-party edge applications, security solutions, and more. Plus, an onboard Nvidia Jetson Nano card is optimized for AI workloads at the edge, significantly reducing the hardware overhead costs of using artificial intelligence at remote healthcare sites. Nodegrid’s flexible, scalable platform adapts to all edge computing use cases in healthcare, future-proofing your edge architecture.

Streamline your edge deployment with Nodegrid

The vendor-neutral Nodegrid platform consolidates an entire edge technology stack into a unified, streamlined solution. Watch a demo to see Nodegrid’s healthcare network solutions in action.

Watch a demo

The CrowdStrike Outage: How to Recover Fast and Avoid the Next Outage

On July 19, 2024, CrowdStrike, a leading cybersecurity firm renowned for its advanced endpoint protection and threat intelligence solutions, experienced a significant outage that disrupted operations for many of its clients. This outage, triggered by a software upgrade, resulted in crashes for Windows PCs, creating a wave of operational challenges for banks, airports, enterprises, and organizations worldwide. This blog post explores what transpired during this incident, what caused the outage, and the broader implications for the cybersecurity industry.

What happened?

The incident began on the morning of July 19, 2024, when numerous CrowdStrike customers started reporting issues with their Windows PCs. Users experienced the BSOD (blue screen of death), which is when Windows crashes and renders devices unusable. As the day went on, it became evident that the problem was widespread and directly linked to a recent software upgrade deployed by CrowdStrike.

Timeline of Events

  1. Initial Reports: Early in the day, airports, hospitals, and critical infrastructure operators began experiencing unexplained crashes on their Windows PCs. The issue was quickly reported to CrowdStrike’s support team.
  2. Incident Acknowledgement: CrowdStrike acknowledged the issue via their social media channels and direct communications with affected clients, confirming that they were investigating the cause of the crashes.
  3. Root Cause Analysis: CrowdStrike’s engineering team worked diligently to identify the root cause of the problem. They soon determined that a software upgrade released the previous night was responsible for the crashes.
  4. Mitigation Efforts: Upon isolating the faulty software update, CrowdStrike issued guidance on how to roll back the update and provided patches to fix the issue.

What caused the CrowdStrike outage?

The root cause of the outage was a software upgrade intended to enhance the functionality and security of CrowdStrike’s Falcon sensor endpoint protection platform. However, this upgrade contained a bug that conflicted with certain configurations of Windows PCs, leading to system crashes. Several factors contributed to the incident:

  1. Insufficient Testing: The software update did not undergo adequate testing across all possible configurations of Windows PCs. This oversight meant that the bug was not detected before the update was deployed to customers.
  2. Complex Interdependencies: The incident highlights the complex interdependencies between software components and operating systems. Even minor changes can have unforeseen impacts on system stability.
  3. Rapid Deployment: In the cybersecurity industry, quick responses to emerging threats are crucial. However, the pressure to deploy updates rapidly can sometimes lead to insufficient testing and quality assurance processes.

We need to remember one important fact: whether software is written by humans or AI, there will be mistakes in coding and testing. When an issue slips through the cracks, the customer lab is the last resort to catch it. Usually, this is done with a controlled rollout, where the IT team first upgrades their lab equipment, performs further testing, puts a rollback plan in place, and pushes the update to a less critical site. But in a cloud-connected SaaS world, the customer is no longer in control. That’s why customers sign waivers stating that if such an incident occurs, the company that caused the problem is not liable. Experts say the only way to address this challenge is to have an infrastructure that’s designed, deployed, and operated for resilience. We discuss this architecture further down in this article.

How to recover from the CrowdStrike outage

CrowdStrike gives two options for recovering:

  • Option 1: Reboot in Safe Mode – Reboot the affected device in Safe Mode, locate and delete the file “C-00000291*.sys”, and then restart the device (a scripted version of this deletion is sketched after this list).
  • Option 2: Re-image – Download and configure the recovery utility to create a new Windows image, add this image to a USB drive, and then insert this USB drive into the target device. The utility will automatically find and delete the file that’s causing the crash.
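
For illustration, the sketch below performs Option 1’s deletion step in Python instead of manual Safe Mode navigation. It assumes the affected system drive is accessible (for example, from a bootable recovery environment); the directory path follows CrowdStrike’s published remediation guidance at the time, but verify it against current vendor instructions before using anything like this.

```python
import glob
import os

# Path per CrowdStrike's public remediation guidance at the time of the incident;
# confirm against current vendor instructions before running.
CHANNEL_FILE_PATTERN = r"C:\Windows\System32\drivers\CrowdStrike\C-00000291*.sys"

def remove_faulty_channel_files(pattern: str) -> list[str]:
    """Delete channel files matching the faulty update and return what was removed."""
    removed = []
    for path in glob.glob(pattern):
        os.remove(path)
        removed.append(path)
    return removed

if __name__ == "__main__":
    deleted = remove_faulty_channel_files(CHANNEL_FILE_PATTERN)
    print(f"Removed {len(deleted)} file(s)")
    for path in deleted:
        print(" ", path)
```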

The biggest obstacle that is costing organizations a lot of time and money is that with either of these recovery methods, IT staff need to be physically present to work on each affected device. They need to go one by one manually remediating via Safe Mode or physically inserting the USB drive. What makes this more difficult is that many organizations use physical and software/management security controls to limit access. Locked device cabinets slow down physical access to devices, and things like role-based access policies and disk encryption can make Safe Mode unusable. Because this outage is affecting more than 8.5 million computers, this kind of work won’t scale efficiently. That’s why organizations are turning to Isolated Management Infrastructure (IMI) and the Isolated Recovery Environment (IRE).

How IMI and IRE help you recover faster

IMI is a dedicated control plane network that’s meant for administration and recovery of IT systems, including Windows PCs affected by the CrowdStrike outage. It uses the concept of out-of-band management, where you deploy a management device that is connected to dedicated management ports of your IT infrastructure (e.g., serial ports, IPMI ports, and other ethernet management ports). IMI also allows you to deploy recovery services for your digital estate that are immutable and kept near-line for when recovery needs to take place.

IMI does not rely at all on the production assets, as it has its own dedicated remote access via WAN links like 4G/5G, and can contain and encrypt recovery keys and tools with zero trust.

IMI gives teams remote, low-level access to devices so they can recover their systems remotely without the need to visit sites. Organizations that employ IMI are able to revert to a golden image through automation, or deploy bootable tools to all the computers at the site to rescue them without data loss.

The dedicated out-of-band access to serial/IPMI and management ports gives automation software the same abilities as if a physical crash cart was pulled up to the servers. ZPE Systems’ Nodegrid (now a brand of Legrand) enables this architecture as explained next. Using Nodegrid and ZPE Cloud, teams can use either option to recover from the CrowdStrike outage:

  • Option 1: Reboot into Preboot Execution Environment (PXE) Software – Nodegrid gives low-level network access to connected Windows machines as if teams were sitting directly in front of the affected device. This means they can remote-in, reboot to a network image, remote into the booted image, delete the faulty file, and restart the system.
  • Option 2: Re-image – ZPE Cloud serves as a file repository and orchestration engine. Teams can upload their working Windows image, and then automatically push this across their global fleet of affected devices. This option speeds up recovery times exponentially.
  • Option 3: Run Windows Deployment Server – Run a Windows Deployment Server on the IMI device at the location and re-image servers and workstations if a good backup of the data has been located. This backup can be made available through the IMI after the initial image has been deployed. The IMI can provide dedicated secure access to the Intune services in your M365 cloud, and the backups do not have to transit the entire internet for all workstations at the same time, speeding up recovery many times over.

All of these options can be performed at scale or even automated. Server recovery with large backups, although it may take a couple of hours, can be delivered locally and tracked for performance and consistency.

But what about the risk of making mistakes when you have to repeat these tasks? Won’t this cause more damage and data loss?

Any team can make a mistake repeating these recovery tasks over a large footprint, and cause further damage or loss of data, slowing the recovery further. Automated recovery through the IMI addresses this, and can provide reliable recording and reporting to ensure that the restoration is complete and trusted. 

What does IMI look like?

Here’s a simplified view of Isolated Management Infrastructure. The architecture centers on ZPE’s Nodegrid device, which sits beside production infrastructure and provides the platform for hosting all the tools necessary for fast recovery.

A diagram showing how to use Nodegrid Gen 3 OOB to enable IMI.

What you need to deploy IMI for recovery:

  1. Out-of-band appliance with serial, USB, ethernet interfaces (e.g., ZPE’s Nodegrid Net SR)
  2. Switchable PDU: Legrand Server Tech or Raritan PDU
  3. Windows PXE Boot image

Here’s the order of operations for a faster CrowdStrike outage recovery:

  • Option 1 – Recover
    1. IMI deployed with a ZPE Nodegrid device running a Preboot Execution Environment (PXE) service, which serves the Windows boot images that Nodegrid pushes to the computers when they boot up
    2. Send recovery keys from Intune to IMI remote storage over ZPE Cloud’s zero trust platform, available in the cloud or air-gapped through Nodegrid Manager
    3. Enable the PXE service (automated across the entire enterprise) and define the PXE recovery image
    4. Use serial or IP control of power to the computers, or, if possible, Intel vPro or IPMI-capable machines, to reboot all machines (see the power-cycling sketch after these options)
    5. All machines will boot and check in to a control tower for PXE, or be made available for remote access using stored passwords from the PXE environment, Windows AD, or other Privileged Access Management (PAM) tools
    6. Delete Files
    7. Reboot

  • Option 2 – Lean re-image
    1. IMI deployed with a Windows PXE boot image running the PXE service
    2. Enable access from the cloud and Azure Intune to the IMI remote storage holding the local image for the PC
    3. Enable the PXE service (automated across the entire enterprise) and define the PXE recovery image
    4. Use serial or IP control of power to the computers, or, if possible, Intel vPro or IPMI-capable machines, to reboot all machines
    5. Machines will boot and check in to Intune either through the IMI or through normal Internet access and finish imaging
    6. Once the machine completes the Intune tasks, Intune will signal backups to come down to the machines. If these backups are offsite, they can be staged on the IMI through backup software running on a virtual machine located on the IMI appliance, to speed up recovery and not impede the Internet connection at the remote site
    7. Pre-stage backups onto local storage, then push recovery from the virtual machine on the IMI

  • Option 3 – Windows-controlled re-image
    1. Windows Deployment Server (WDS) installed as a virtual machine running on the IMI appliance (offline to prevent issues, or online under a slowed deployment cycle in case an issue arises)
    2. Send recovery keys from Intune to IMI remote storage over a zero trust interface, in the cloud or air-gapped
    3. Use serial or IP control of power to the computers, or, if possible, Intel vPro or IPMI-capable machines, to reboot all machines
    4. Machines will boot and check in to the WDS for re-imaging
    5. Machines will boot and check in to Intune either through the IMI or through normal Internet access and finish imaging
    6. Once the machine completes the Intune tasks, Intune will signal backups to come down to the machines. If these backups are offsite, they can be staged on the IMI through backup software running on a virtual machine located on the IMI appliance, to speed up recovery and not impede the Internet connection at the remote site
    7. Pre-stage backups onto local storage, then push recovery from the virtual machine on the IMI
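
Each option above relies on remotely power-cycling machines over serial or IP control. As a hedged illustration of that step, this Python sketch drives the standard ipmitool CLI to cycle a list of hosts over the IMI network; the inventory and credentials are placeholders, and it assumes ipmitool is installed and each machine’s BMC is reachable from the management network.

```python
import subprocess

# Placeholder inventory: in practice this would come from your asset database.
BMC_HOSTS = ["10.0.10.21", "10.0.10.22", "10.0.10.23"]
IPMI_USER = "admin"
IPMI_PASSWORD = "example-password"

def power_cycle(host: str) -> bool:
    """Issue a chassis power cycle to one machine's BMC over the isolated network."""
    result = subprocess.run(
        ["ipmitool", "-I", "lanplus", "-H", host,
         "-U", IPMI_USER, "-P", IPMI_PASSWORD,
         "chassis", "power", "cycle"],
        capture_output=True, text=True,
    )
    return result.returncode == 0

for host in BMC_HOSTS:
    # Machines reboot into the PXE image served from the IMI appliance.
    status = "ok" if power_cycle(host) else "FAILED"
    print(f"{host}: power cycle {status}")
```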

Deploy IMI now to recover and avoid the next outage

Get in touch for help choosing the right size IMI deployment for your organization. Nodegrid and ZPE Cloud are the drop-in solution to recovering from the CrowdStrike outage, with plenty of device options to fit any budget and environment size. Contact ZPE Sales now or download the blueprint to help you begin implementing IMI.

Benefits of Edge Computing

Edge computing delivers data processing and analysis capabilities to the network’s “edge,” at remote sites like branch offices, warehouses, retail stores, and manufacturing plants. It involves deploying computing resources and lightweight applications very near the devices that generate data, reducing the distance and number of network hops between them. In doing so, edge computing reduces latency and bandwidth costs while mitigating risk, enhancing edge resilience, and enabling real-time insights. This blog discusses the five biggest benefits of edge computing, providing examples and additional resources for companies beginning their edge journey.

5 benefits of edge computing

  • Reduces latency: Leveraging data at the edge reduces network hops and latency to improve speed and performance.
  • Mitigates risk: Keeping data on-site at distributed edge locations reduces the chances of interception and limits the blast radius of breaches.
  • Lowers bandwidth costs: Reducing edge data transmissions over expensive MPLS lines helps keep branch costs low.
  • Enhances edge resilience: Analyzing data on-site ensures that edge operations can continue uninterrupted during ISP outages and natural disasters.
  • Enables real-time insights: Eliminating off-site processing allows companies to use and extract value from data as soon as it’s generated.

1. Reduces latency

Edge computing leverages data on the same local network as the devices that generate it, cutting down on edge data transmissions over the WAN or Internet. Reducing the number of network hops between devices and applications significantly decreases latency, improving the speed and performance of business intelligence apps, AIOps, equipment health analytics, and other solutions that use edge data.

Some edge applications run on the devices themselves, completely eliminating network hops and facilitating real-time, lag-free analysis. For example, an AI-powered surveillance application installed on an IoT security camera at a walk-up ATM can analyze video feeds in real-time and alert security personnel to suspicious activity as it occurs.

 

Read more examples of how edge computing improves performance in our guide to the Applications of Edge Computing.

2. Mitigates risk

Edge computing mitigates security and compliance risks by distributing an organization’s sensitive data and reducing off-site transmission. Large, centralized data stores in the cloud or data center are prime targets for cybercriminals because the sheer volume of data involved increases the chances of finding something valuable. Decentralizing data in much smaller edge storage solutions makes it harder for hackers to find the most sensitive information and also limits how much data they can access at one time.

Keeping data at the edge also reduces the chances of interception in transit to cloud or data center storage. Plus, unlike in the cloud, an organization maintains complete control over who and what has access to sensitive data, aiding in compliance with regulations like the GDPR and PCI DSS 4.0.

To learn how to protect edge data and computing resources, read Comparing Edge Security Solutions.

3. Lowers bandwidth costs

Many organizations use MPLS (multi-protocol label switching) links to securely connect edge sites to the enterprise network. MPLS bandwidth is much more expensive than regular Internet lines, which makes transmitting edge data to centralized data processing applications extremely costly. Plus, it can take months to provision MPLS at a new site, delaying launches and driving up overhead expenses.

Edge computing significantly reduces MPLS bandwidth utilization by running data-hungry applications on the local network, reserving the WAN for other essential traffic. Combining edge computing with SD-WAN (software-defined wide area networking) and SASE (secure access service edge) technologies can markedly decrease the reliance on MPLS links, allowing organizations to accelerate branch openings and see faster edge ROIs.

Learn more about cost-effective edge deployments in our Edge Computing Architecture Guide.

4. Enhances edge resilience

Since edge computing applications run on the same LAN as the devices generating data, they can continue to function even if the site loses Internet access due to an ISP outage, natural disaster, or other adverse event. This also allows uninterrupted edge operations in locations with inconsistent (or no) Internet coverage, like offshore oil rigs, agricultural sites, and health clinics in isolated rural communities. Edge computing ensures that organizations don’t miss any vital health or safety alerts and facilitates technological innovation using AI and other data analytics tools in challenging environments.

For more information on operational resilience, read Network Resilience: What is a Resilience System?

5. Enables real-time insights

Sending data from the edge to a cloud or on-premises data lake for processing, transformation, and ingestion by analytics or AI/ML tools takes time, preventing companies from acting on insights at the moment when they’re most useful. Edge computing applications start using data as soon as it’s generated, so organizations can extract value from it right away. For example, a retail store can use edge computing to gain actionable insights on purchasing activity and customer behavior in real-time, so they can move in-demand products to aisle endcaps or staff extra cashiers as needed.

To learn more about the potential uses of edge computing technology, read Edge Computing Examples.

Simplify your edge computing deployment with Nodegrid

The best way to achieve the benefits of edge computing described above without increasing management complexity or hardware overhead is to use consolidated, vendor-neutral solutions to host, connect, and secure edge workloads. For example, the Nodegrid Gate SR from ZPE Systems delivers an entire stack of edge networking and infrastructure management technologies in a single, streamlined device. The open, Linux-based Nodegrid OS supports VMs and containers for third-party applications, with an Nvidia Jetson Nano card capable of running AI workloads alongside non-AI data analytics for ultimate efficiency.

Improve your edge computing deployment with Nodegrid

Nodegrid consolidates edge computing deployments to improve operational efficiency without sacrificing performance or functionality. Schedule a free demo to see Nodegrid in action.

Schedule a Demo

Improving Your Zero Trust Security Posture

The current cyber threat landscape is daunting, with attacks occurring so frequently that security experts recommend operating under the assumption that your network is already breached. Major cyber attacks – and the disruptions they cause – frequently make news headlines. For example, the recent LendingTree breach exposed consumer data, which could affect the company’s reputation and compliance status. An attack on auto dealership software company CDK Global took down the platform and disrupted business for approximately 15,000 car sellers – an outage that’s still ongoing as of this article’s writing.

The zero trust security methodology outlines the best practices for limiting the blast radius of a successful breach by preventing malicious actors from moving laterally through the network and accessing the most valuable or sensitive resources. Many organizations have already begun their zero trust journey by implementing role-based access controls (RBAC), multi-factor authentication (MFA), and other security solutions, but still struggle with coverage gaps that result in ransomware attacks and other disruptive breaches. This blog provides advice for improving your zero trust security posture with a multi-layered strategy that mitigates weaknesses for complete coverage.

How to improve your zero trust security posture

  • Gain a full understanding of your protect surface: Use automated discovery tools to identify all the data, assets, applications, and services that an attacker could potentially target.
  • Micro-segment your network with micro-perimeters: Implement specific policies, controls, and trust verification mechanisms to mitigate protect surface vulnerabilities.
  • Isolate and defend your management infrastructure: Use OOB management and hardware security to prevent attackers from compromising the control plane.
  • Defend your cloud resources: Understand the shared responsibility model and use cloud-specific tools like a CASB to prevent shadow IT and enforce zero trust.
  • Extend zero trust to the edge: Use edge-centric solutions like SASE to extend zero trust policies and controls to remote network traffic, devices, and users.

Gain a full understanding of your protect surface

Many security strategies focus on defending the network’s “attack surface,” or all the potential vulnerabilities an attacker could exploit to breach the network. However, zero trust is all about defending the “protect surface,” or all the data, assets, applications, and services that an attacker could potentially try to access. The key difference is that zero trust doesn’t ask you to try to cover any possible weakness in a network, which is essentially impossible. Instead, it wants you to look at the resources themselves to determine what has the most value to an attacker, and then implement security controls that are tailored accordingly.

Gaining a full understanding of all the resources on your network can be extraordinarily challenging, especially with the proliferation of SaaS apps, mobile devices, and remote workforces. There are automated tools that can help IT teams discover all the data, apps, and devices on the network. Application discovery and dependency mapping (ADDM) tools help identify all on-premises software and third-party dependencies; cloud application discovery tools do the same for cloud-hosted apps by monitoring network traffic to cloud domains. Sensitive data discovery tools scan all known on-premises or cloud-based resources for personally identifiable information (PII) and other confidential data, and there are various device management solutions to detect network-connected hardware, including IoT devices.

  • Tip: This step can’t be completed one time and then forgotten – teams should execute discovery processes on a regular, scheduled basis to limit gaps in protection. 

Micro-segment your network with micro-perimeters

Micro-segmentation is a cornerstone of zero-trust networks. It involves logically separating all the data, applications, assets, and services according to attack value, access needs, and interdependencies. Then, teams implement granular security policies and controls tailored to the needs of each segment, establishing what are known as micro-perimeters. Rather than trying to account for every potential vulnerability with one large security perimeter, teams can just focus on the tools and policies needed to cover the specific vulnerabilities of a particular micro-segment.

Network micro-perimeters help improve your zero trust security posture with:

  • Granular access policies granting the least amount of privileges needed for any given workflow. Limiting the number of accounts with access to any given resource, and limiting the number of privileges granted to any given account, significantly reduces the amount of damage a compromised account (or malicious actor) is capable of inflicting.
  • Targeted security controls addressing the specific risks and vulnerabilities of the resources in a micro-segment. For example, financial systems need stronger encryption, strict data governance monitoring, and multiple methods of trust verification, whereas an IoT lighting system requires simple monitoring and patch management, so the security controls for these micro-segments should be different.
  • Trust verification using context-aware policies to catch accounts exhibiting suspicious behavior and prevent them from accessing sensitive resources. If a malicious outsider compromises an authorized user account and MFA device – or a disgruntled employee uses their network privileges to harm the company – it can be nearly impossible to prevent data exposure. Context-aware policies can stop a user from accessing confidential resources outside of typical operating hours, or from unfamiliar IP addresses, for example. Additionally, user entity and behavior analytics (UEBA) solutions use machine learning to detect other abnormal and risky behaviors that could indicate malicious intent.
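
For illustration, here is a minimal Python sketch of how a context-aware policy check might be expressed. The rules (office hours, trusted IP ranges) are hypothetical examples for the sketch, not the policy engine of any specific product.

```python
from datetime import datetime
from ipaddress import ip_address, ip_network

# Hypothetical policy inputs for illustration.
TRUSTED_NETWORKS = [ip_network("10.0.0.0/8"), ip_network("203.0.113.0/24")]
WORK_HOURS = range(7, 19)  # 07:00-18:59 local time

def allow_sensitive_access(source_ip: str, when: datetime) -> bool:
    """Context-aware check layered on top of normal authentication and RBAC."""
    ip_ok = any(ip_address(source_ip) in net for net in TRUSTED_NETWORKS)
    time_ok = when.hour in WORK_HOURS
    # Both context signals must pass; otherwise deny and flag for review.
    return ip_ok and time_ok

# Example: a valid account connecting at 3 a.m. from an unknown address is denied.
print(allow_sensitive_access("198.51.100.7", datetime(2024, 7, 1, 3, 0)))  # False
```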

Isolate and defend your management infrastructure

For zero trust to be effective, organizations must apply consistently strict security policies and controls to every component of their network architecture, including the management interfaces used to control infrastructure. Otherwise, a malicious actor could use a compromised sysadmin account to hijack the control plane and bring down the entire network.

According to a recent CISA directive, the best practice is to isolate the network’s control plane so that management interfaces are inaccessible from the production network. Many new cybersecurity regulations, including PCI DSS 4.0, DORA, NIS2, and the CER Directive, also either strongly recommend or require management infrastructure isolation.

Isolated management infrastructure (IMI) prevents compromised accounts, ransomware, and other threats from moving laterally to or from the production LAN. It gives teams a safe environment to recover from ransomware or other cyberattacks without risking reinfection, which is known as an isolated recovery environment (IRE). Management interfaces and the IRE should also be protected by granular, role-based access policies, multi-factor authentication, and strong hardware roots of trust to further mitigate risk.

A diagram showing how to use Nodegrid Gen 3 OOB to enable IMI.

The easiest and most secure way to implement IMI is with Gen 3 out-of-band (OOB) serial console servers, like the Nodegrid solution from ZPE Systems. These devices use alternative network interfaces like 5G/4G LTE cellular to ensure complete isolation and 24/7 management access even during outages. They’re protected by hardware security features like TPM 2.0 and GPS geofencing, and they integrate with zero trust solutions like identity and access management (IAM) and UEBA to enable consistent policy enforcement.

Defend your cloud resources

The vast majority of companies host some or all of their workflows in the cloud, which significantly expands and complicates the attack surface while making it more challenging to identify and defend the protect surface. Some organizations also lack a complete understanding of the shared responsibility model for varying cloud services, increasing the chances of coverage gaps. Additionally, many orgs struggle with “shadow IT,” which occurs when individual business units implement cloud applications without going through onboarding, preventing security teams from applying zero trust controls.

The first step toward improving your zero trust security posture in the cloud is to ensure you understand where your cloud service provider’s responsibilities end and yours begin. For instance, most SaaS providers handle all aspects of security except IAM and data protection, whereas IaaS (Infrastructure-as-a-Service) providers are only responsible for protecting their physical and virtual infrastructure.

It’s also vital that security teams have a complete picture of all the cloud services in use by the organization and a way to deploy and enforce zero trust policies in the cloud. For example, a cloud access security broker (CASB) is a solution that discovers all the cloud services in use by an organization and allows teams to monitor and manage security for the entire cloud architecture. A CASB provides capabilities like data governance, malware detection, and adaptive access controls, so organizations can protect their cloud resources with the same techniques used in the on-premises environment.

Example Cloud Access Security Broker Capabilities:

  • Visibility: Cloud service discovery; monitoring and reporting
  • Compliance: User authentication and authorization; data governance and loss prevention
  • Threat protection: Malware (e.g., virus, ransomware) detection; user and entity behavior analytics (UEBA)
  • Data security: Data encryption and tokenization; data leak prevention

Extend zero trust to the edge

Modern enterprise networks are highly decentralized, with many business operations taking place at remote branches, Internet of Things (IoT) deployment sites, and end-users’ homes. Extending security controls to the edge with on-premises zero trust solutions is very difficult without backhauling all remote traffic through a centralized firewall, which creates bottlenecks that affect performance and reliability. Luckily, the market for edge security solutions is rapidly growing and evolving to help organizations overcome these challenges. 

Secure Access Service Edge (SASE) is a type of security platform that delivers core capabilities as a managed, typically cloud-based service for the edge. SASE uses software-defined wide area networking (SD-WAN) to intelligently and securely route edge traffic through the SASE tech stack, allowing the application and enforcement of zero trust controls. In addition to CASB and next-generation firewall (NGFW) features, SASE usually includes zero trust network access (ZTNA), which offers VPN-like functionality to connect remote users to enterprise resources from outside the network. ZTNA is more secure than a VPN because it only grants access to one app at a time, requiring separate authorization requests and trust verification attempts to move to different resources.

Accelerating the zero trust journey

Zero trust is not a single security solution that you can implement once and forget about – it requires constant analysis of your security posture to identify and defend weaknesses as they arise. The best way to ensure adaptability is by using vendor-agnostic platforms to host and orchestrate zero trust security. This will allow you to add and change security services as needed without worrying about interoperability issues.

For example, the Nodegrid platform from ZPE Systems includes vendor-neutral serial consoles and integrated branch services routers that can host third-party software such as SASE and NGFWs. These devices also provide Gen 3 out-of-band management for infrastructure isolation and network resilience. Nodegrid protects management interfaces with strong hardware roots-of-trust, embedded firewalls, SAML 2.0 integrations, and other zero trust security features. Plus, with Nodegrid’s cloud-based or on-premises management platform, teams can orchestrate networking, infrastructure, and security workflows across the entire enterprise architecture.

 

Improve your zero trust security posture with Nodegrid

Using Nodegrid as the foundation for your zero trust network infrastructure ensures maximum agility while reducing management complexity. Watch a Nodegrid demo to learn more.

Schedule a Demo

Edge Computing vs Cloud Computing

Both edge computing and cloud computing involve moving computational resources – such as CPUs (central processing units), GPUs (graphics processing units), RAM (random access memory), and data storage – out of the centralized, on-premises data center. As such, both represent massive shifts in enterprise network designs and how companies deploy, manage, secure, and use computing resources. Edge and cloud computing also create new opportunities for data processing, which is sorely needed as companies generate more data than ever before, thanks in no small part to an explosion in Internet of Things (IoT) and artificial intelligence (AI) adoption. By 2025, IoT devices alone are predicted to generate 80 zettabytes of data, much of it decentralized around the edges of the network. AI, machine learning, and other data analytics applications, meanwhile, require vast quantities of data (and highly scalable infrastructure) to provide accurate insights. This guide compares edge computing vs cloud computing to help organizations choose the right deployment model for their use case.


Defining edge computing vs cloud computing

Edge computing involves deploying computing capabilities to the network’s edges to enable on-site data processing for Internet of Things (IoT) sensors, operational technology (OT), automated infrastructure, and other edge devices and services. Edge computing deployments are highly distributed across remote sites far from the network core, such as oil & gas rigs, automated manufacturing plants, and shipping warehouses. Ideally, organizations use a centralized (usually cloud-based) orchestrator to oversee and conduct operations across the distributed edge computing architecture.

Diagram showing an example edge computing architecture controlled by a cloud-based edge orchestrator.

Reducing the number of network hops between edge devices and the applications that process and use edge data enables real-time data processing, reduces MPLS bandwidth costs, improves performance, and keeps private data within the security micro-perimeter.

Cloud computing involves using remote computing resources over the Internet to run applications, process and store data, and more. Cloud service providers manage the physical infrastructure and allow companies to easily scale their virtual computing resources with the click of a button, significantly reducing operational costs and complexity over on-premises and edge computing deployments.

Examples of edge computing vs cloud computing

Edge computing works best for workloads requiring real-time data processing using fairly lightweight applications, especially in locations with inconsistent or unreliable Internet access or where privacy/compliance is a major concern. Example edge computing use cases include real-time patient monitoring in clinics, equipment health analytics in automated manufacturing plants, and AI-powered surveillance at remote sites.

Cloud computing is well-suited to workloads requiring extensive computational resources that can scale on-demand, but that aren’t time-sensitive. Example use cases include:

  • Training AI and machine learning models on large, centralized datasets
  • Performing deep or batch analytics on long-lived business data
  • Long-term data storage, archiving, and backup
  • Hosting web applications and SaaS platforms with fluctuating demand

The advantages of edge computing over cloud computing

Using cloud-based applications to process edge device data involves transmitting that data from the network’s edges to the cloud provider’s data center, and vice versa. Transmitting data over the open Internet is too risky, so most organizations route the traffic through a security appliance such as a firewall to encrypt and protect the data. Often these security solutions are off-site in the company’s central data center or, in the best case, a SASE point-of-presence (PoP), adding more network hops between edge devices and the cloud applications that service them. This process increases bandwidth usage and introduces latency, preventing real-time data processing and degrading performance.

Edge computing moves data processing resources closer to the source, eliminating the need to transmit this data over the Internet. This improves performance by reducing (or even removing) network hops and preventing network bottlenecks at the centralized firewall. Edge computing also lets companies use their valuable edge data in real time, enabling faster insights and greater operational efficiencies.

Edge computing mitigates the risk involved in storing and processing sensitive or highly regulated data in a third-party computing environment, giving companies complete control over their data infrastructure. It can also help reduce bandwidth costs by eliminating the need to route edge data through VPNs or MPLS links to apply security controls.

Edge computing advantages:

  • Improves network and application performance
  • Enables real-time data processing and insights
  • Simplifies security and compliance
  • Reduces MPLS bandwidth costs

The disadvantages of edge computing compared to cloud computing

Cloud computing resources are highly scalable, allowing organizations to meet rapidly changing requirements without the hassle of purchasing, installing, and maintaining additional hardware and software licenses. Edge computing still involves physical, on-premises infrastructure, making it far less scalable than the cloud. However, it’s possible to improve edge agility and flexibility by using vendor-neutral platforms to run and manage edge resources. An open platform like Nodegrid allows teams to run multiple edge computing applications from different vendors on the same box, swap out services as business needs evolve, and deploy automation to streamline multi-vendor edge device provisioning from a single orchestrator.

Image: A diagram showing how the Nodegrid Mini SR combines edge computing and networking capabilities on a small, affordable, flexible platform.

Organizations often deploy edge computing in less-than-ideal operating environments, such as closets and other cramped spaces that lack the strict HVAC controls that maintain temperature and humidity in cloud data centers. These environments also typically lack the physical security controls that prevent unauthorized individuals from tampering with equipment, such as guarded entryways, security cameras, and biometric locks. The best way to mitigate this disadvantage is with an environmental monitoring system that uses sensors to detect temperature and humidity changes that could cause equipment failures, as well as proximity alarms to notify administrators when someone gets too close, as sketched below. It’s also advisable to use hermetically sealed edge computing devices that can operate in extreme temperatures and include built-in tamper protection.
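To illustrate the monitoring pattern, here is a minimal Python sketch that polls environmental readings and raises alerts when thresholds are crossed. The thresholds, the webhook URL, and the read_sensors() stand-in are all illustrative assumptions rather than any specific vendor’s API:

```python
import json
import random  # stands in for a real sensor driver in this sketch
import time
import urllib.request

# Illustrative thresholds; tune to your equipment's rated operating ranges.
MAX_TEMP_C = 35.0
MAX_HUMIDITY_PCT = 80.0
ALERT_URL = "https://alerts.example.internal/notify"  # hypothetical webhook


def read_sensors() -> dict:
    """Stand-in for a real sensor driver (e.g., an SNMP or USB probe)."""
    return {"temp_c": random.uniform(20, 45), "humidity_pct": random.uniform(30, 90)}


def send_alert(reading: dict, message: str) -> None:
    """POST the alert to a monitoring webhook as JSON."""
    body = json.dumps({"message": message, "reading": reading}).encode()
    req = urllib.request.Request(
        ALERT_URL, data=body, headers={"Content-Type": "application/json"}
    )
    urllib.request.urlopen(req, timeout=5)


def monitor(poll_seconds: int = 60) -> None:
    """Poll the sensors forever, alerting on any out-of-range reading."""
    while True:
        reading = read_sensors()
        if reading["temp_c"] > MAX_TEMP_C:
            send_alert(reading, "Temperature above safe threshold")
        if reading["humidity_pct"] > MAX_HUMIDITY_PCT:
            send_alert(reading, "Humidity above safe threshold")
        time.sleep(poll_seconds)
```

A production system would add proximity-sensor events and hysteresis to avoid alert storms, but the polling-and-threshold loop above is the core of the technique.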

Cloud computing is often more resilient than edge computing because cloud service providers must maintain a certain level of continuous uptime to meet service level agreements (SLAs). Edge computing operations can be disrupted by network equipment failures, ISP outages, ransomware attacks, and other adverse events, so it’s essential to implement resilience measures that keep services running (even if in a degraded state) and allow remote teams to fix problems without having to be on site. Edge resilience measures include Gen 3 out-of-band management, control plane/data plane separation (also known as isolated management infrastructure, or IMI), and isolated recovery environments (IREs).

Edge computing disadvantages:

  • Less scalable than cloud infrastructure
  • Lack of environmental and security controls
  • Requires additional resilience measures

Edge-native applications vs cloud-native applications

Edge-native applications and cloud-native applications are similar in that they use containers and microservices architectures, as well as CI/CD (continuous integration/continuous delivery) and other DevOps principles.

Cloud-native applications leverage centralized, scalable resources to perform deep analysis of long-lived data in long-term hot storage environments. Edge-native applications are built to leverage limited resources distributed around the network’s edges to perform real-time analysis of ephemeral data that’s constantly moving. Typically, edge-native applications are highly contextualized for a specific use case, whereas cloud-native applications offer broader, standardized capabilities. Another defining characteristic of edge-native applications is the ability to operate independently when needed while still integrating seamlessly with the cloud, upstream resources, remote management, and centralized orchestration.

Choosing edge computing vs cloud computing

Both edge computing and cloud computing have unique advantages and disadvantages that make them well-suited for different workloads and use cases. Factors like increasing data privacy regulations, newsworthy cloud provider outages, greater reliance on human-free IoT and OT deployments, and an overall trend toward decentralizing business operations are pushing organizations to adopt edge computing. However, most companies still rely heavily on cloud resources and will continue to do so, making it crucial to ensure seamless interoperability between the edge and the cloud.

The best way to ensure integration is by using vendor-neutral platforms. For example, Nodegrid integrated services routers like the Gate SR provide multi-vendor out-of-band serial console management for edge infrastructure and devices, using an embedded Jetson Nano card to support edge computing and AI workloads. The ZPE Cloud management platform unifies orchestration for the entire Nodegrid-connected architecture, delivering 360-degree control over complex and highly distributed networks. Plus, Nodegrid easily integrates – or even directly hosts – other vendors’ solutions for edge data processing, IT automation, SASE, and more, making edge operations more cost-effective. Nodegrid also provides the complete control plane/data plane separation needed to ensure edge resilience.

Get edge efficiency and resilience with Nodegrid

The Nodegrid platform from ZPE Systems helps companies across all industries streamline their edge operations with resilient, vendor-neutral, Gen 3 out-of-band management. Request a free Nodegrid demo to learn more.

REQUEST A DEMO

The post Edge Computing vs Cloud Computing appeared first on ZPE Systems.

Edge Computing Architecture Guide https://zpesystems.com/edge-computing-architecture-zs/ Thu, 06 Jun 2024 15:30:09 +0000 https://zpesystems.com/?p=41172 This edge computing architecture guide provides information and resources needed to ensure a streamlined, resilient, and cost-effective deployment.

Image: Edge computing architecture concept icons arranged around the words “edge computing.”
Edge computing is rapidly gaining popularity as more organizations see the benefits of decentralizing data processing for Internet of Things (IoT) deployments, operational technology (OT), AI and machine learning applications, and other edge use cases. This guide defines edge computing and edge-native applications, highlights a few key use cases, describes the typical components of an edge deployment, and provides additional resources for building your own edge computing architecture.


What is edge computing?

The Open Glossary of Edge Computing defines edge computing as deploying computing capabilities to the edges of a network to improve performance, reduce operating costs, and increase resilience. Edge computing reduces the number of network hops between data-generating devices and the applications that process and use that data, mitigating latency, bandwidth, and security concerns compared to cloud or on-premises computing.


Image: A diagram showing the migration path from on-premises computing to edge computing, along with the associated level of security risk.

Edge-native applications

Edge-native applications are built from the ground up to harness edge computing’s unique capabilities while mitigating the limitations. They leverage some cloud-native principles, such as containers, microservices, and CI/CD (continuous integration/continuous delivery), with several key differences.

Edge-Native vs. Cloud-Native Applications

  • Topology: distributed (edge-native) vs. centralized (cloud-native)
  • Compute: real-time processing with limited resources vs. deep processing with scalable resources
  • Data: constantly changing and moving vs. long-lived and at rest in a centralized location
  • Capabilities: contextualized vs. standardized
  • Location: anywhere vs. cloud data center

Source: Gartner

Edge-native applications integrate seamlessly with the cloud, upstream resources, remote management, and centralized orchestration, but can also operate independently as needed. Crucially, they allow organizations to actually leverage their edge data in real time, rather than just collecting it for later processing.
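A common pattern behind this independence is store-and-forward: process data locally in real time, buffer the results, and sync upstream whenever connectivity allows. Here is a minimal Python sketch of the idea, assuming a hypothetical cloud ingest endpoint (UPSTREAM_URL) and an in-memory queue; a production edge application would persist the buffer to disk so nothing is lost across restarts:

```python
import json
import queue
import urllib.request

UPSTREAM_URL = "https://cloud.example.com/ingest"  # hypothetical cloud endpoint
local_buffer: "queue.Queue[dict]" = queue.Queue()


def process_locally(raw: dict) -> dict:
    """Real-time, on-site processing: keep only the derived result."""
    return {"sensor": raw["sensor"], "avg": sum(raw["samples"]) / len(raw["samples"])}


def try_sync() -> None:
    """Drain the buffer to the cloud; re-queue on failure and carry on."""
    while not local_buffer.empty():
        record = local_buffer.get()
        try:
            body = json.dumps(record).encode()
            req = urllib.request.Request(
                UPSTREAM_URL, data=body, headers={"Content-Type": "application/json"}
            )
            urllib.request.urlopen(req, timeout=5)
        except OSError:
            local_buffer.put(record)  # upstream unreachable: keep it, retry later
            break


def handle(raw: dict) -> None:
    """Act on data immediately; sync to the cloud whenever it is reachable."""
    local_buffer.put(process_locally(raw))
    try_sync()
```

The local site keeps producing insights during an outage, and the cloud catches up automatically once the link returns.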

Edge computing use cases

Nearly every industry has potential use cases for edge computing, including:

Industry Edge Computing Use Cases
Healthcare
  • Mitigating security, privacy, and HIPAA compliance concerns with local data processing
  • Improving patient health outcomes with real-time alerts that don’t require Internet access
  • Enabling emergency mobile medical intervention while reducing mistakes
Finance
  • Reducing security and regulatory risks through local computing and edge infrastructure isolation
  • Getting fast, localized business insights to improve revenue and customer service
  • Deploying AI-powered surveillance and security solutions without network bottlenecks
Energy
  • Enabling network access and real-time data processing for airgapped and isolated environments
  • Improving efficiency with predictive maintenance recommendations and other insights
  • Proactively identifying and remediating safety, quality, and compliance issues
Manufacturing
  • Getting real-time, data-driven insights to improve manufacturing efficiency and product quality
  • Reducing the risk of confidential production data falling into the wrong hands in transit
  • Ensuring continuous operations during network outages and other adverse events
  • Using AI with computer vision to ensure worker safety and quality control of fabricated components/products
Utilities/Public Services
  • Using IoT technology to deliver better services, improve public safety, and keep communities connected
  • Reducing the fleet management challenges involved in difficult deployment environments
  • Aiding in disaster recovery and resilience with distributed redundant edge resources

To learn more about the specific benefits and uses of edge computing for each industry, read Distributed Edge Computing Use Cases.

Edge computing architecture design

An edge computing architecture consists of six major components:

  • Devices generating edge data: IoT devices, sensors, controllers, smartphones, and other devices that generate data at the edge. Best practice: use automated patch management to keep devices up-to-date and protect against known vulnerabilities.
  • Edge software applications: analytics, machine learning, and other software deployed at the edge to use edge data. Best practice: look for edge-native applications that easily integrate with other tools to prevent edge sprawl.
  • Edge computing infrastructure: CPUs, GPUs, memory, and storage used to process data and run edge applications. Best practice: use vendor-neutral, multi-purpose hardware to reduce overhead and management complexity.
  • Edge network infrastructure and logic: wired and wireless connectivity, routing, switching, and other network functions. Best practice: deploy virtualized network functions and edge computing on common, vendor-neutral hardware.
  • Edge security perimeter: firewalls, endpoint security, web filtering, and other enterprise security functionality. Best practice: implement edge-centric security solutions like SASE and SSE to prevent network bottlenecks while protecting edge data.
  • Centralized management and orchestration: an EMO (edge management and orchestration) platform used to oversee and conduct all edge operations. Best practice: use a cloud-based, Gen 3 out-of-band (OOB) management platform to ensure edge resilience and enable end-to-end automation.

Click here to learn more about the infrastructure, networking, management, and security components of an edge computing architecture.

How to build an edge computing architecture with Nodegrid

Nodegrid is a Gen 3 out-of-band management platform that streamlines edge computing with vendor-neutral solutions and a centralized, cloud-based orchestrator.


Image: A diagram showing all the edge computing and networking capabilities provided by the Nodegrid Gate SR.

Nodegrid integrated services routers deliver all-in-one edge computing and networking functionality while taking up 1RU or less. A Nodegrid box like the Gate SR provides Ethernet and serial switching, serial console/jumpbox management, WAN routing, wireless networking, and 5G/4G cellular for network failover or out-of-band management. It includes enough CPU, memory, and encrypted SSD storage to run edge computing workflows, and the x86-64, Linux-based Nodegrid OS supports virtualized network functions, VMs, and containers for edge-native applications, even those from other vendors. The new Gate SR also comes with an embedded NVIDIA Jetson Orin Nano™ module featuring dual CPUs for EMO of AI workloads and infrastructure isolation.

Nodegrid SRs can also host SASE, SSE, and other security solutions, as well as third-party automation from top vendors like Red Hat and Salt. Remote teams use the centralized, vendor-neutral ZPE Cloud platform (an on-premises version is available) to deploy, monitor, and orchestrate the entire edge architecture. Management, automation, and orchestration workflows occur over the Gen 3 OOB control plane, which is separated and isolated from the production network. Nodegrid OOB uses fast, reliable network interfaces like 5G cellular to enable end-to-end automation and ensure 24/7 remote access even during major outages, significantly improving edge resilience.

Streamline your edge deployment

The Nodegrid platform from ZPE Systems reduces the cost and complexity of building an edge computing architecture with vendor-neutral, all-in-one devices and centralized EMO. Request a free Nodegrid demo to learn more.

Click here to learn more!

The post Edge Computing Architecture Guide appeared first on ZPE Systems.

Critical Entities Resilience Directive https://zpesystems.com/critical-entities-resilience-directive-zs/ Wed, 05 Jun 2024 20:25:06 +0000 https://zpesystems.com/?p=41152 With limited time to demonstrate compliance with the Critical Entities Resilience Directive, organizations should begin preparing now.

The Critical Entities Resilience (CER) Directive is an EU regulation designed to prevent disruption to the services considered essential to society or the economy. The CER Directive outlines the obligations of critical entities to prepare for any potential hazard, including natural disasters, human errors, terrorist attacks, and cybersecurity breaches. EU Member States have until 17 October 2024 to adopt and publish resilience measures required for their critical entities, and those measures officially take effect from 18 October 2024. Member States must identify and notify critical entities by July 2026; these entities then only have ten months to comply with CER requirements. With such a tight timeframe to demonstrate compliance with the Critical Entities Resilience Directive, organizations that might be deemed critical should begin preparing their resilience strategies now.

Citation: Directive (EU) 2022/2557 of the European Parliament and of the Council of 14 December 2022 on the resilience of critical entities and repealing Council Directive 2008/114/EC

Who does the Critical Entities Resilience Directive apply to, and why does it matter?

The CER Directive covers eleven sectors and subsectors that provide services essential to society, the economy, public health & safety, or preserving the environment. These include:

In-Scope Sectors Covered by the CER Directive

Sector Subsectors
Energy
  • Electricity
  • Heating and cooling
  • Oil & gas
  • Hydrogen
Transport
  • Air
  • Rail
  • Water
  • Road
  • Public transportation
Banking
  • Deposit, lending, and credit institutions
Financial Market Infrastructure
  • Trading venues
  • Clearing systems
Health
Drinking Water
  • Drinking water suppliers
  • Drinking water distributors
Waste Water
  • Collection
  • Treatment
  • Disposal
Digital Infrastructure
Public Administration
Space
  • Operators of ground-based infrastructure for space-based services
Food Production, Processing, and Distribution
  • Large-scale industrial food production and processing
  • Food supply chain services
  • Food wholesale distributors

The Critical Entities Resilience Directive is one of several new EU regulations (such as DORA and NIS2) created to establish consistent guidelines for resilience in sectors where any service disruption has a significant negative impact on society or the economy. Whereas DORA applies primarily to financial institutions and supporting services, and NIS2 focuses on cybersecurity threats, the CER Directive is broader in scope and addresses other, non-digital threats to resilience such as natural disasters and global health crises (e.g., COVID-19).

The penalties for noncompliance will vary by Member State but are likely to include fines, public notification, remediation, and withdrawal of authorization.

CER Directive requirements for critical entities

Most of the CER Directive requirements apply to Member States, outlining how the designated authorities will adopt and enforce resilience measures and support critical entities in achieving compliance. However, there are five key provisions that relevant organizations should be aware of as they prepare for their identification as critical entities.

1. Article 4: Strategy on the resilience of critical entities

EU Member States have until 17 January 2026 to adopt a strategy outlining the guidelines and procedures for critical entities to achieve and maintain a high level of resilience. Essentially, this strategy will describe the requirements for CER Directive compliance in each Member State and provide guidance on how to meet those requirements. Potentially critical entities can prepare by examining existing resilience frameworks and regulations to anticipate the policies, tools, and procedures that will likely be required.

2. Article 5: Risk assessment by Member States

Member States have until 17 January 2026 to perform a risk assessment of all essential services. These assessments must account for natural and human-made risks, including accidents, natural disasters, public health emergencies, terrorist attacks, and antagonistic threats. Member States will then use the risk assessments to identify critical entities within each sector.

3. Article 12: Risk assessment by critical entities

Critical entities must perform risk assessments using similar criteria to Article 5 within nine months of being notified of their designation as critical and at least every four years afterward. If an organization already conducts risk assessments according to other similar resilience guidelines or frameworks, Member States have the authority to decide whether or not those assessments meet CER Directive compliance requirements.

4. Article 13: Resilience measures of critical entities

Critical entities must take the appropriate technical, security, and policy measures to ensure resilience, including a comprehensive strategy for service continuity and disaster recovery. Examples of resilience measures outlined by the CER Directive include:

CER Directive Resilience Measures

  • Adopt disaster risk reduction and climate adaptation measures. Example: using an environmental monitoring system to detect and respond to rising temperatures, humidity, and other relevant conditions.
  • Ensure adequate physical protection of the premises and critical infrastructure, including fencing, barriers, perimeter monitoring tools, detection equipment, and access controls. Example: installing proximity sensors in data center racks to automatically notify security teams if an unauthorized user physically tampers with remote infrastructure.
  • Respond to, resist, and mitigate service disruptions. Example: deploying out-of-band (OOB) serial consoles with cellular capabilities to ensure continuous remote management access to critical infrastructure.
  • Recover from incidents using business continuity measures to resume provisioning essential services. Example: building a resilience system containing all the infrastructure and tools needed to rebuild and recover while still delivering core services.
  • Manage employee security by classifying personnel who exercise critical functions, establishing access rights and controls, and performing background checks as needed. Example: adopting zero-trust security policies and controls that assign access privileges according to role (role-based access control, or RBAC).

5. Article 15: Incident notification

Critical entities must notify the competent authority of any incidents that have or could significantly disrupt essential services within 24 hours of detection. The significance of a disruption is determined according to the following parameters:

  • How many users the disruption affects;
  • How long the disruption lasts;
  • The geographical area the disruption affects.

The incident notification must explain the nature, cause, and potential consequences of the disruption, including any cross-border implications.
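As a rough illustration, the Python sketch below models a disruption against the directive’s three significance parameters and assembles the facts Article 15 asks for. The numeric thresholds are purely illustrative, since the directive leaves exact criteria to Member States and their competent authorities:

```python
from dataclasses import dataclass, field


@dataclass
class Disruption:
    users_affected: int
    duration_hours: float
    regions_affected: list = field(default_factory=list)


def is_significant(d: Disruption) -> bool:
    """Illustrative thresholds only; real criteria come from the Member State."""
    return (
        d.users_affected >= 10_000
        or d.duration_hours >= 4
        or len(d.regions_affected) > 1
    )


def build_notification(d: Disruption, nature: str, cause: str, consequences: str) -> dict:
    """Collect the nature, cause, and potential consequences of the disruption."""
    return {
        "nature": nature,
        "cause": cause,
        "potential_consequences": consequences,
        "users_affected": d.users_affected,
        "duration_hours": d.duration_hours,
        "regions_affected": d.regions_affected,
        "cross_border": len(d.regions_affected) > 1,  # flag cross-border implications
    }


incident = Disruption(users_affected=25_000, duration_hours=6, regions_affected=["DE", "FR"])
if is_significant(incident):
    print(build_notification(incident, "Service outage", "Ransomware attack",
                             "Suspended payment processing in two Member States"))
```

Encoding the parameters this way makes the 24-hour notification decision auditable instead of ad hoc.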

How Nodegrid simplifies CER Directive compliance

Nodegrid is a Gen 3 out-of-band management platform that makes the perfect foundation for a resilience system. Nodegrid OOB separates the control plane from the data plane to ensure continuous remote management access to critical infrastructure even during production network outages. Vendor-neutral serial consoles and integrated branch service routers directly host third-party software for security, automation, recovery, and more, reducing hardware overhead at each site while ensuring teams have access to all the tools they need to restore essential services.

Looking to Upgrade to a Nodegrid serial console?

Prepare for the Critical Entities Resilience Directive by replacing your discontinued, EOL serial console with a Gen 3 out-of-band solution from Nodegrid.

Click here to learn more!

The post Critical Entities Resilience Directive appeared first on ZPE Systems.

PCI DSS 4.0 Requirements https://zpesystems.com/pci-dss-4-point-0-requirements-zs/ Wed, 15 May 2024 14:00:17 +0000 https://zpesystems.com/?p=40853 This guide summarizes all twelve PCI DSS 4.0 requirements across six categories and describes the best practices for maintaining compliance.

Image: A businessman clicks the abbreviation “PCI DSS” on a virtual touch screen.
The Payment Card Industry Security Standards Council (PCI SSC) released version 4.0 of the Data Security Standard (DSS) in March 2022. PCI DSS 4.0 applies to any organization in any country that accepts, handles, stores, or transmits cardholder data. The standard defines cardholder data as any personally identifiable information (PII) associated with someone’s credit or debit card. The risks of PCI DSS 4.0 noncompliance include fines, reputational damage, and potentially lost business, so organizations must stay up to date with all recent changes.

The new requirements cover everything from protecting cardholder data to implementing user access controls, zero trust security measures, and frequent penetration (pen) testing. Each major requirement defined in the updated PCI DSS 4.0 is summarized below, with tables breaking down the specific compliance stipulations and providing tips or best practices for meeting them.

Citation: The PCI DSS v4.0

PCI DSS 4.0 requirements and best practices

Every PCI DSS 4.0 requirement starts with a stipulation that the processes and mechanisms for implementation are clearly defined and understood. The best practice involves updating policy and process documents as soon as possible after changes occur, such as when business goals or technologies evolve, and communicating changes across all relevant business units.


Build and maintain a secure network and systems

Requirement 1: Install and maintain network security controls

Network security controls include firewalls and other security solutions that inspect and control network traffic. PCI DSS 4.0 requires organizations to install and properly configure network security controls to protect payment card data.

  • Network security controls (NSCs) are configured and maintained. Best practice: validate network security configurations before deployment and use configuration management to track changes and prevent configuration drift.
  • Network access to and from the cardholder data environment (CDE) is restricted. Best practice: monitor all inbound traffic to the CDE, even from trusted networks, and, when possible, use explicit “deny all” firewall rules to prevent accidental gaps.
  • Network connections between trusted and untrusted networks are controlled. Best practice: implement a DMZ that manages connections between untrusted networks and public-facing resources on the trusted network.
  • Risks to the CDE from computing devices that can connect to both untrusted networks and the CDE are mitigated. Best practice: use security controls like endpoint protection and firewalls to protect devices from Internet-based attacks, and zero trust and network segmentation to prevent lateral movement to CDEs.
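As a simple illustration of auditing NSC rule sets, the Python sketch below checks an abstract rule list for an explicit final deny-all and for overly broad allows into the CDE. The (action, source, destination) rule format and the CDE subnet are illustrative assumptions, not any particular firewall’s syntax:

```python
# Each rule is (action, source, destination); this format is illustrative,
# not a specific vendor's configuration syntax.

def audit_cde_policy(rules: list, cde_net: str = "10.20.0.0/16") -> list:
    """Return a list of findings for a candidate CDE firewall policy."""
    findings = []

    # PCI DSS expects traffic into the CDE to be explicitly restricted.
    cde_rules = [r for r in rules if r[2] == cde_net]
    if not cde_rules:
        findings.append("No rules scoped to the CDE network")

    # An explicit final deny-all prevents accidental gaps.
    if not rules or rules[-1] != ("deny", "any", "any"):
        findings.append("Policy does not end with an explicit deny-all rule")

    # Flag allows into the CDE from anywhere.
    for action, source, dest in cde_rules:
        if action == "allow" and source == "any":
            findings.append(f"Overly broad allow into CDE from 'any' to {dest}")

    return findings


# Example: a policy that is missing its final deny-all rule.
print(audit_cde_policy([("allow", "10.1.0.0/24", "10.20.0.0/16")]))
```

Running a check like this in the configuration-management pipeline catches drift before a non-compliant rule set ever reaches production.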

Requirement 2: Apply secure configurations to all system components

Attackers often compromise systems using known default passwords or old, forgotten services. PCI DSS 4.0 requires organizations to properly configure system security settings and reduce the attack surface by turning off unnecessary software, services, and accounts.

  • System components are configured and managed securely. Best practice: continuously check for vendor-default user accounts and security configurations, and ensure all administrative access is encrypted using strong cryptographic protocols.
  • Wireless environments are configured and managed securely. Best practice: apply the same security standards consistently across wired and wireless environments, and change wireless encryption keys whenever someone leaves the organization.

Protect account data

Requirement 3: Protect stored account data

Any payment account data an organization stores must be protected by methods such as encryption and hashing. Organizations should also limit account data storage unless it’s necessary and, when possible, truncate cardholder data.

  • Storage of account data is kept to a minimum. Best practice: use data retention and disposal policies to configure an automated, programmatic procedure to locate and remove unnecessary account data.
  • Sensitive authentication data (SAD) is not stored after authorization. Best practice: review data sources to ensure that the full contents of any track, card verification code, and PIN/PIN blocks are not retained after the authorization process is completed.
  • Access to displays of full primary account number (PAN) and the ability to copy cardholder data are restricted. Best practice: use role-based access control (RBAC) to limit PAN access to individuals with a defined need, and use masking to display only the number of digits needed for a specific function.
  • PAN is secured wherever it is stored. Best practice: render PAN unreadable using one-way hashing with a randomly generated secret key, truncation, index tokens, and strong cryptography with secure key management.
  • Cryptographic keys used to protect stored account data are secured. Best practice: manage cryptographic keys with a PCI DSS 4.0-compliant centralized key management system that restricts access to key-encrypting keys and stores them separately from data-encrypting keys.
  • Where cryptography is used to protect stored account data, key management processes and procedures covering all aspects of the key lifecycle are defined and implemented. Best practice: use a key management solution that simplifies or automates key replacement for old or compromised keys.
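To make the hashing and masking techniques concrete, here is a minimal Python sketch using a keyed one-way hash (HMAC-SHA256) and first-6/last-4 masking. The environment-variable key is only there to keep the example self-contained; in practice the key must come from a PCI-compliant key management system:

```python
import hashlib
import hmac
import os

# In production, the secret key must live in a PCI-compliant key management
# system; an environment variable is used here only for a self-contained sketch.
SECRET_KEY = os.environ.get("PAN_HASH_KEY", "demo-key-do-not-use").encode()


def hash_pan(pan: str) -> str:
    """Keyed one-way hash: usable for matching records, not reversible to the PAN."""
    return hmac.new(SECRET_KEY, pan.encode(), hashlib.sha256).hexdigest()


def mask_pan(pan: str) -> str:
    """Display at most the first 6 and last 4 digits, as commonly permitted."""
    return pan[:6] + "*" * (len(pan) - 10) + pan[-4:]


pan = "4111111111111111"
print(mask_pan(pan))  # 411111******1111
print(hash_pan(pan))  # stable digest for lookups, with no stored PAN
```

The keyed hash lets systems correlate transactions for the same card without ever storing a readable PAN, while masking keeps full numbers off screens where no business need exists.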

Requirement 4: Protect cardholder data with strong cryptography during transmission over open, public networks

While requirement 3 applies to stored card data, requirement 4 outlines stipulations for protecting cardholder data in transit.

  • PAN is protected with strong cryptography during transmission. Best practice: encrypt PAN over both public and internal networks, and apply strong cryptography at both the data level and the session level.
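For session-level protection, here is a minimal Python sketch that enforces certificate validation and TLS 1.2 or later on an outbound request. The endpoint URL is hypothetical, and data-level encryption of the PAN itself would be layered on top of the TLS session:

```python
import json
import ssl
import urllib.request

# Enforce modern TLS with certificate and hostname verification for any
# request that might carry cardholder data.
context = ssl.create_default_context()            # verifies certs and hostnames
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocol versions


def post_payment(url: str, payload: dict) -> bytes:
    """Send a JSON payload over a verified, modern-TLS session."""
    body = json.dumps(payload).encode()
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, context=context, timeout=10) as resp:
        return resp.read()


# Usage (hypothetical gateway): post_payment("https://gateway.example.com/charge", {...})
```

Pinning the minimum protocol version in one shared context prevents individual services from silently negotiating down to deprecated TLS versions.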

Maintain a vulnerability management program

Requirement 5: Protect all systems and networks from malicious software

Organizations must take steps to prevent malicious software (a.k.a., malware) from infecting the network and potentially exposing cardholder data.

  • Malware is prevented, or detected and addressed. Best practice: use a combination of network-based controls, host-based controls, and endpoint security solutions, and supplement signature-based tools with AI/ML-powered detection.
  • Anti-malware mechanisms and processes are active, maintained, and monitored. Best practice: update tools and signature databases as soon as possible, and prevent end-users from disabling or altering anti-malware controls.
  • Anti-phishing mechanisms protect users against phishing attacks. Best practice: use a combination of anti-phishing approaches, including anti-spoofing controls, link scrubbers, and server-side anti-malware.

Requirement 6: Develop and maintain secure systems and software

Development teams should follow PCI-compliant processes when writing and validating code. Additionally, install all appropriate security patches immediately to prevent malicious actors from exploiting known vulnerabilities in systems and software.

  • Bespoke and custom software are developed securely. Best practice: use manual or automatic code reviews to search for undocumented features, validate that third-party libraries are used securely, analyze insecure code structures, and check for logical vulnerabilities.
  • Security vulnerabilities are identified and addressed. Best practice: use a centralized patch management solution to automatically notify teams of known vulnerabilities and pending updates.
  • Public-facing web applications are protected against attacks. Best practice: use automatic vulnerability assessment tools that include specialized web scanners to analyze web application protection.
  • Changes to all system components are managed securely. Best practice: use a centralized source code version management solution to track, approve, and roll back changes.

Implement strong access control measures

Requirement 7: Restrict access to system components and cardholder data by business need-to-know

This PCI DSS 4.0 requirement aims to limit who and what has access to sensitive cardholder data and CDEs to prevent malicious actors from gaining access through a compromised, over-provisioned account. “Need to know” means that only accounts with a specific need should have access to sensitive resources; it’s often applied using the “least-privilege” approach, which means only granting accounts the specific privileges needed to perform a job role.

  • Access to system components and data is appropriately defined and assigned. Best practice: use RBAC to provide accounts with access privileges based on their job functions (e.g., ‘customer service agent’ or ‘warehouse manager’) rather than on an individual basis.
  • Access to system components and data is managed via an access control system. Best practice: use a centralized identity and access management (IAM) system to manage access across the enterprise, including branches, edge computing sites, and the cloud.
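Here is a minimal Python sketch of the RBAC idea: privileges attach to roles rather than individuals, and access defaults to deny. The role names and privilege strings are illustrative, not from any particular IAM product:

```python
# Privileges are granted to roles, never to individuals, so an access review
# reduces to reviewing one small role table.
ROLE_PRIVILEGES = {
    "customer_service_agent": {"view_masked_pan"},
    "fraud_analyst": {"view_masked_pan", "view_full_pan"},
    "warehouse_manager": set(),  # no cardholder-data access at all
}

user_roles = {"alice": "fraud_analyst", "bob": "warehouse_manager"}


def is_authorized(user: str, privilege: str) -> bool:
    """Deny by default: access requires a role that explicitly holds the privilege."""
    role = user_roles.get(user)
    return role is not None and privilege in ROLE_PRIVILEGES.get(role, set())


print(is_authorized("alice", "view_full_pan"))   # True: defined business need
print(is_authorized("bob", "view_masked_pan"))   # False: no need-to-know
```

Because every decision flows through one deny-by-default check, over-provisioned one-off grants have nowhere to hide.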

Requirement 8: Identify users and authenticate access to system components

Organizations must establish and prove the identity of any users attempting to access CDEs or sensitive data. This requirement is core to the zero trust security methodology, which is designed to limit the scope of data access and theft once an attacker has already compromised an account or system.

  • User identification and related accounts for users and administrators are strictly managed throughout an account’s lifecycle. Best practice: use an account lifecycle management solution to streamline account discovery, provisioning, monitoring, and deactivation.
  • Strong authentication for users and administrators is established and managed. Best practice: replace relatively weak passwords/passphrases with stronger authentication factors like hardware tokens or biometrics.
  • Multi-factor authentication (MFA) is implemented to secure access into the CDE. Best practice: MFA should also protect access to management interfaces on isolated management infrastructure (IMI) to prevent attackers from controlling the CDE.
  • MFA systems are configured to prevent misuse. Best practice: secure the MFA system itself with strong authentication, and validate MFA configurations before deployment to ensure the system requires two different forms of authentication and does not allow any access without a second factor.
  • Use of application and system accounts and associated authentication factors is strictly managed. Best practice: whenever possible, disable interactive login on system and application accounts to prevent malicious actors from logging in with them.
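As an illustration of one common second factor, the Python sketch below implements time-based one-time password (TOTP) verification per RFC 6238 using only the standard library. It shows the verification math only; a real MFA deployment also needs secure secret storage, clock-drift tolerance, and rate limiting:

```python
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32: str, for_time=None, digits: int = 6, step: int = 30) -> str:
    """RFC 6238 time-based one-time password (HMAC-SHA1, 30-second steps)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((for_time or time.time()) // step)
    msg = struct.pack(">Q", counter)                     # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                           # dynamic truncation
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)


def verify(secret_b32: str, submitted: str) -> bool:
    """Constant-time comparison of the submitted code against the current one."""
    return hmac.compare_digest(totp(secret_b32), submitted)


secret = "JBSWY3DPEHPK3PXP"  # example base32 secret, as used in authenticator apps
print(verify(secret, totp(secret)))  # True
```

The one-time code is only the second factor; it complements, never replaces, the primary credential, in line with the “no access without a second factor” stipulation above.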

Requirement 9: Restrict physical access to cardholder data

Malicious actors could gain access to cardholder data by physically interacting with payment devices or tampering with the hardware infrastructure that stores and processes that data. These PCI DSS 4.0 requirements outline how to prevent physical data access.

  • Physical access controls manage entry into facilities and systems containing cardholder data. Best practice: use logical or physical controls to prevent unauthorized users from connecting to network jacks and wireless access points within the CDE facility.
  • Physical access for personnel and visitors is authorized and managed. Best practice: require visitor badges and an authorized escort for any third parties accessing the CDE facility, and keep an accurate log of when they enter and exit the building.
  • Media with cardholder data is securely stored, accessed, distributed, and destroyed. Best practice: do not allow portable media containing cardholder data to leave the secure facility unless absolutely necessary.
  • Point of interaction (POI) devices are protected from tampering and unauthorized substitution. Best practice: use a centralized, vendor-neutral asset management system to automatically discover and track all POI devices in use across the organization.


Regularly monitor and test networks

Requirement 10: Log and monitor all access to system components and cardholder data

User activity logging and monitoring will help prevent, detect, and mitigate CDE breaches. PCI DSS 4.0 requires organizations to collect, protect, and review audit logs of all user activities in the CDE.

  • Audit logs are implemented to support the detection of anomalies and suspicious activity, and the forensic analysis of events. Best practice: use a user and entity behavior analytics (UEBA) solution to monitor user activity and detect suspicious behavior with machine learning algorithms.
  • Audit logs are protected from destruction and unauthorized modifications. Best practice: never store audit logs in publicly accessible locations, and use strong RBAC and least-privilege policies to limit access.
  • Audit logs are reviewed to identify anomalies or suspicious activity. Best practice: use an AIOps tool to analyze audit logs, detect anomalous activity, and automatically triage and notify teams of issues.
  • Audit log history is retained and available for analysis. Best practice: retain audit logs for at least 12 months in a secure storage location, and keep the last three months of logs immediately accessible to aid in breach resolution.
  • Time-synchronization mechanisms support consistent time settings across all systems. Best practice: use NTP to synchronize clocks across all systems to help with breach mitigation and post-incident forensics.
  • Failures of critical security control systems are detected, reported, and responded to promptly. Best practice: use AIOps to automatically detect, triage, and respond to security incidents; AIOps also provides automatic root-cause analysis (RCA) for faster incident resolution.
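As a simple way to verify the retention rule, the Python sketch below checks for a full 12 months of daily log files and flags any gaps, distinguishing the most recent three-month “hot” window from the archive. The one-file-per-day naming scheme and the archive path are illustrative assumptions:

```python
import datetime
import pathlib

LOG_DIR = pathlib.Path("/var/log/cde-audit")  # hypothetical audit log location
RETAIN_DAYS = 365  # PCI DSS 4.0: at least 12 months of history
HOT_DAYS = 90      # last three months must be immediately accessible


def check_retention(now=None) -> list:
    """Flag missing daily logs against the 12-month / 3-month retention rules.

    Assumes one file per day named audit-YYYY-MM-DD.log (illustrative layout).
    """
    now = now or datetime.datetime.now()
    findings = []
    for days_ago in range(RETAIN_DAYS):
        day = (now - datetime.timedelta(days=days_ago)).strftime("%Y-%m-%d")
        path = LOG_DIR / f"audit-{day}.log"
        if not path.exists():
            tier = "hot" if days_ago < HOT_DAYS else "archive"
            findings.append(f"Missing {tier} audit log: {path.name}")
    return findings


# Run daily from a scheduler and alert on any findings.
print(check_retention()[:5])
```

Automating the check turns a once-a-year audit surprise into a daily, self-correcting routine.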

Requirement 11: Test security of systems and network regularly

Researchers and attackers continuously discover new vulnerabilities in systems and software, so organizations must frequently test network components, applications, and processes to ensure that in-place security controls are still adequate.

  • Wireless access points are identified and monitored, and unauthorized wireless access points are addressed. Best practice: use a wireless analyzer to detect rogue access points.
  • External and internal vulnerabilities are regularly identified, prioritized, and addressed. Best practice: PCI DSS 4.0 requires internal and external vulnerability scans at least once every three months, but performing them more often is encouraged if your network is complex or changes frequently.
  • External and internal penetration testing is regularly performed, and exploitable vulnerabilities and security weaknesses are corrected. Best practice: work with a PCI DSS-approved vendor to perform external and internal penetration testing, and conduct pen testing on network segmentation controls.
  • Network intrusions and unexpected file changes are detected and responded to. Best practice: use AI-powered, next-generation firewalls (NGFWs) with enhanced detection algorithms and automatic incident response capabilities.
  • Unauthorized changes on payment pages are detected and responded to. Best practice: use anti-skimming technology like file integrity monitoring (FIM) to detect unauthorized payment page changes, and ensure alerts are monitored.
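To show the core of file integrity monitoring, here is a minimal Python sketch that baselines SHA-256 digests of a payment page directory and reports added, modified, or removed files on each comparison. The web root path is a placeholder; dedicated FIM tooling adds tamper-proof baseline storage, scheduling, and alert routing:

```python
import hashlib
import json
import pathlib


def snapshot(root: str) -> dict:
    """Record a SHA-256 digest for every file under the payment page root."""
    return {
        str(p): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in pathlib.Path(root).rglob("*")
        if p.is_file()
    }


def diff(baseline: dict, current: dict) -> list:
    """Compare two snapshots and describe every change."""
    changes = []
    for path, digest in current.items():
        if path not in baseline:
            changes.append(f"ADDED: {path}")       # e.g., an injected skimmer script
        elif baseline[path] != digest:
            changes.append(f"MODIFIED: {path}")
    changes += [f"REMOVED: {path}" for path in baseline if path not in current]
    return changes


# Usage: snapshot at deploy time, compare on a schedule, alert on any diff.
baseline = snapshot("./payment-pages")  # hypothetical web root
print(json.dumps(diff(baseline, snapshot("./payment-pages")), indent=2))
```

Any unexpected entry in the diff is a signal worth investigating, since legitimate payment page changes should only arrive through the controlled deployment pipeline.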

Maintain an information security policy

Requirement 12: Support information security with organizational policies and programs

The final requirement is to implement information security policies and programs to support the processes described above and get everyone on the same page about their responsibilities regarding cardholder data privacy.

  • Acceptable use policies for end-user technologies are defined and implemented. Best practice: enforce usage policies with technical controls capable of locking users out of systems, applications, or devices if they violate these policies.
  • Risks to the cardholder data and environment are formally identified, evaluated, and managed. Best practice: use a centralized patch management system to monitor firmware and software versions, detect changes that may increase risk, and deploy updates to fix vulnerabilities.
  • PCI DSS compliance is managed. Best practice: service providers must assign executive responsibility for managing PCI DSS 4.0 compliance.
  • PCI DSS scope is documented and validated. Best practice: frequently validate PCI DSS scope by evaluating the CDE and all connected systems to determine if coverage should be expanded.
  • Security awareness education is an ongoing activity. Best practice: require all users to take security awareness training upon hire and every year afterwards, and provide refresher training when someone transfers into a role with more access to sensitive data.
  • Personnel are screened to reduce risks from insider threats. Best practice: in addition to screening new hires, conduct additional screening when someone moves into a role with greater access to the CDE.
  • Risk to information assets associated with third-party service provider (TPSP) relationships is managed. Best practice: thoroughly analyze the risk of working with third parties based on their reporting practices, breach history, incident response procedures, and PCI DSS validation.
  • Third-party service providers (TPSPs) support their customers’ PCI DSS compliance. Best practice: require TPSPs to provide their PCI DSS Attestation of Compliance (AOC) to demonstrate their compliance status.
  • Suspected and confirmed security incidents that could impact the CDE are responded to immediately. Best practice: create a comprehensive incident response plan that designates roles to key stakeholders.

Isolate your CDE and management infrastructure with Nodegrid

The Nodegrid out-of-band (OOB) management platform from ZPE Systems isolates your control plane and provides a safe environment for cardholder data, management infrastructure, and ransomware recovery. Our vendor-neutral, Gen 3 OOB solution allows you to host third-party tools for automation, security, troubleshooting, and more for ultimate efficiency.

Ready to learn more about PCI DSS 4.0 requirements?

Learn how to meet PCI DSS 4.0 requirements for network segmentation and security by downloading our isolated management infrastructure (IMI) solution guide.
Download the Guide

The post PCI DSS 4.0 Requirements appeared first on ZPE Systems.
