
Cloud Native Security: Cloud Native Application Protection Platforms

Back in 2022, 77% of interviewed CIOs stated that their IT environment is constantly changing. We can only guess that, were the respondents asked today, this number would be 90% or higher. Detecting flaws and security vulnerabilities becomes ever more challenging in 2023, since the complexity of a typical software deployment keeps increasing year over year. The relatively new category of Cloud Native Application Protection Platforms (CNAPP) is now backed by the majority of cybersecurity companies, which offer CNAPP solutions for cloud and on-prem deployments.

CNAPP’s rapid growth is driven by cybersecurity threats, with misconfiguration among the most frequently reported causes of security breaches and data loss. As workloads and data move to the cloud, the required skill sets of IT and DevOps teams must also become much more specialized. The likelihood of an unintentional misconfiguration increases because the majority of seasoned IT workers still have more expertise and training on-prem than in the cloud. In contrast, a young “cloud-native” DevOps professional often has very little knowledge of “traditional” security, such as network segmentation or firewall configuration, which also tends to result in configuration errors.

Some CNAPP vendors are proud to be “agentless,” eliminating the need to install and manage agents that can cause various issues, from machine overload to agent vulnerabilities due to security flaws and, guess what, the agent’s own misconfiguration. Agentless monitoring has its benefits, but it is not free of risk. Any monitored device must be “open” to such monitoring, which typically comes from a remote server. If an adversary is able to spoof a monitoring attempt, they can easily gain access to all the monitored devices and compromise the entire network. So an “agentless CNAPP” is not automatically better than a competing security platform. Easier for IT staff to maintain? Yes, it is. More secure? Probably not.
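One common mitigation, shown in the minimal sketch below under assumed file paths and a placeholder port, is to require mutual TLS so that a device only answers a monitoring server presenting a certificate signed by a CA the device trusts.

```python
# A minimal sketch of one mitigation: require mutual TLS so a device only
# answers a monitoring server whose client certificate is signed by a CA
# the device trusts. File paths and the port are assumed placeholders.
import socket
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain(certfile="device.crt", keyfile="device.key")
ctx.verify_mode = ssl.CERT_REQUIRED            # reject unauthenticated peers
ctx.load_verify_locations(cafile="monitoring-ca.pem")

with socket.create_server(("0.0.0.0", 9100)) as sock:
    with ctx.wrap_socket(sock, server_side=True) as tls:
        # accept() fails the handshake for any peer without a trusted cert,
        # so a spoofed "monitoring" probe never reaches application code.
        conn, addr = tls.accept()
        print("authenticated monitoring server connected:", addr)
```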


Machine Learning for Network Security, Detection and Response

Cybersecurity is the defense mechanism used to prevent malicious attacks on computers and electronic devices. As technology becomes more advanced, detecting malicious activities and flaws in computer networks requires increasingly sophisticated skills. This is where machine learning can help.

Machine learning is a subset of artificial intelligence that uses algorithms and statistical analysis to draw inferences about a computer’s behavior. It can help organizations address new security challenges, such as scaling up security solutions, detecting unknown and advanced attacks, and identifying trends and anomalies. Machine learning can also help defenders more accurately detect and triage potential attacks, although it may bring new attack surfaces of its own.

Machine learning can be used to detect malware in encrypted traffic, find insider threats, predict “bad neighborhoods” online, and protect data in the cloud by uncovering suspicious user behavior. However, machine learning is not a silver bullet for cybersecurity. Its effectiveness depends on the quality and quantity of the data used to train the models, as well as on the robustness and adaptability of the algorithms.

A common challenge faced by machine learning in cybersecurity is dealing with false positives, which are benign events that are mistakenly flagged as malicious. False positives can overwhelm analysts and reduce their trust in the system. To overcome this challenge, machine learning models need to be constantly updated and validated with new data and feedback.

Another challenge is detecting unknown or zero-day attacks, which are exploits that take advantage of vulnerabilities that have not been discovered or patched yet. Traditional security solutions based on signatures or rules may not be able to detect these attacks, as they rely on prior knowledge of the threat. Machine learning can help to discover new attack patterns or adversary behaviors by using techniques such as anomaly detection, clustering, or reinforcement learning.

Anomaly detection is the process of identifying events or observations that deviate from the normal or expected behavior of the system. For example, machine learning can detect unusual network traffic, login attempts, or file modifications that may indicate a breach.
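As a minimal illustration, the sketch below flags an outlier network flow with scikit-learn’s IsolationForest; the flow features, thresholds, and data are invented for demonstration and are not tied to any particular product.

```python
# A minimal anomaly-detection sketch with scikit-learn's IsolationForest.
# The "flow" features (bytes, packets, duration) and the contamination
# rate are illustrative assumptions, not values from the article.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Toy baseline traffic: bytes sent, packet count, session duration (s).
normal_flows = rng.normal(loc=[5_000, 40, 30], scale=[1_000, 10, 8], size=(500, 3))
suspect_flow = np.array([[95_000, 900, 2.0]])  # burst that looks like exfiltration

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_flows)

# predict() returns -1 for anomalies and 1 for inliers.
print(model.predict(suspect_flow))  # [-1] -> flag for analyst review
```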

Clustering is the process of grouping data points based on their similarity or proximity. For example, machine learning can cluster malicious domains or IP addresses based on their features or activities, and flag them as “bad neighborhoods” online.
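A hedged sketch of the same idea, grouping hosts by invented behavioral features with DBSCAN:

```python
# A clustering sketch with DBSCAN: hosts are grouped by hypothetical
# behavioral features (connections, distinct ports, bytes out); a tight
# cluster of aggressive hosts is a candidate "bad neighborhood".
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

features = np.array([
    [10, 2, 1e4],      # ordinary host
    [12, 3, 1.2e4],    # ordinary host
    [900, 60, 5e6],    # scanner-like host
    [880, 55, 4.8e6],  # scanner-like host
])

X = StandardScaler().fit_transform(features)
labels = DBSCAN(eps=0.9, min_samples=2).fit_predict(X)
print(labels)  # [0 0 1 1]: the last two hosts form one suspicious group
```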

Reinforcement learning is the process of learning by trial and error, aiming to maximize a cumulative reward. For example, machine learning can learn to optimize the defense strategy of a system by observing the outcomes of different actions and adjusting accordingly.
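The toy sketch below boils this down to a one-state, bandit-style learner choosing between “monitor” and “block”; the environment, rewards, and attack rate are entirely made up for illustration.

```python
# A toy, one-state (bandit-style) simplification of reinforcement learning
# for defense: the agent learns whether "monitor" or "block" pays off.
# The rewards and the 30% attack rate are invented assumptions.
import random

actions = ["monitor", "block"]
Q = {a: 0.0 for a in actions}  # learned value of each action
alpha, epsilon = 0.1, 0.2      # learning rate, exploration rate

def reward(action: str, attack_underway: bool) -> float:
    # Blocking during an attack pays off; blocking benign traffic costs.
    if attack_underway:
        return 1.0 if action == "block" else -1.0
    return -0.5 if action == "block" else 0.1

random.seed(0)
for _ in range(1000):
    attack = random.random() < 0.3  # assume 30% of steps contain an attack
    a = random.choice(actions) if random.random() < epsilon else max(Q, key=Q.get)
    Q[a] += alpha * (reward(a, attack) - Q[a])

print(Q)  # the learned action values steer the defense strategy
```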

Machine learning can also leverage statistical, temporal, and correlation-based detections to enhance its performance. These indicators can help to reduce false positives, identify causal relationships, and provide context for events. For example, machine learning can use statistical methods to calculate the probability of an event being malicious based on its frequency or distribution. It can use temporal methods to analyze the sequence or duration of events and detect anomalies or patterns. Furthermore, it can use correlation methods to link events across different sources or domains and reveal hidden connections or dependencies.
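Here is a small sketch of how such signals might be combined; all field names, counts, and thresholds are hypothetical:

```python
# A small sketch combining a statistical signal (how rare a login country
# is for this user) with a temporal one (a failure burst in the last hour).
# All field names, counts, and thresholds are hypothetical.
from collections import Counter

login_countries = Counter({"US": 480, "CA": 19, "RO": 1})  # per-user history
total = sum(login_countries.values())

def rarity(country: str) -> float:
    # Empirical probability of this user logging in from the country.
    return login_countries.get(country, 0) / total

event = {"country": "RO", "failed_attempts_last_hour": 12}

# Correlating two weak signals yields one higher-confidence alert.
if rarity(event["country"]) < 0.01 and event["failed_attempts_last_hour"] > 10:
    print("high-confidence alert: rare geography plus a burst of failures")
```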

Machine learning is a powerful tool for cybersecurity, but it also requires careful design, implementation, and evaluation. It is not a one-size-fits-all solution, but rather a complementary approach that can augment human intelligence and expertise. Machine learning can help to properly navigate the digital ocean of incoming security events, particularly where 90% of them are false positives. The need for real-time security stream processing is now bigger than ever.


Gartner: “it is the user, not the cloud provider” who causes data breaches

Gartner’s recommendations on cloud computing strategy open a rightful discussion on the roles and responsibilities of the different actors involved in cloud security. How many security and data breaches happen due to Cloud Service Provider (CSP) flaws, and how many are caused by CSP customers and the human beings dealing with the cloud on a daily basis? Gartner predicts that through 2025, 99% of cloud security failures will be the customer’s fault. Such a prediction can only be based on current numbers, which evidently show that the vast majority of breaches stem from issues on the CSP customers’ side.

Among the reported causes, first place goes to data breaches resulting from misconfiguration of the cloud environment and from security flaws in software that were missed by the DevOps and IT teams working in the cloud.

While workloads and data keep moving to the cloud, DevOps and IT teams often lack the skill sets required to properly configure and maintain cloud-based software. The likelihood of an unintentional misconfiguration is increased because the majority of seasoned IT workers have significantly more expertise and training with on-premises security than they do with the cloud. Younger, less experienced workers may be more accustomed to publishing data to the cloud, yet less familiar with security practices, which can result in configuration errors.

Some team members have never heard of the Role-Based Access Control (RBAC) principle and will have real trouble working in a cloud like AWS, where they are required to properly set up IAM users and IAM roles for each software component and service. These DevOps and IT engineers need intensive training to close the cloud security gap. Until that is done, the enterprise will keep suffering from improper configuration, production failures, and periodic security breaches.
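For illustration only, here is a minimal boto3 sketch of the least-privilege pattern that such training should instill; the role name, bucket ARN, and trusted service are hypothetical examples, not a production recipe.

```python
# A hedged boto3 sketch of the least-privilege pattern: one narrowly
# scoped role per component instead of broad managed policies. The role
# name, bucket ARN, and trusted service are hypothetical examples.
import json
import boto3

iam = boto3.client("iam")

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},  # who may assume the role
        "Action": "sts:AssumeRole",
    }],
}

# Read-only access to a single bucket, rather than AmazonS3FullAccess.
least_privilege_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": "arn:aws:s3:::example-app-bucket/*",
    }],
}

iam.create_role(RoleName="app-reader",
                AssumeRolePolicyDocument=json.dumps(trust_policy))
iam.put_role_policy(RoleName="app-reader",
                    PolicyName="s3-read-only",
                    PolicyDocument=json.dumps(least_privilege_policy))
```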

Simple solutions like a firewall can add an additional degree of security for data and workloads, whether for on-prem, hybrid, or pure cloud deployments. And yet, even simple things like that add another dimension of IT complexity and risk, due to possible misconfiguration caused by a human mistake or by a vulnerable legacy software package.


Full Stack IT Observability Will Drive Business Performance in 2023

Cisco predicts that 2023 will be shaped by a few exciting trends in technology, including network observability with business correlation. Cisco’s EVP & Chief Strategy Officer Liz Centoni is sure that

“To survive and thrive, companies need to be able to tie data insights derived from normal IT operations directly to business outcomes or risk being overtaken by more innovative competitors”

and we cannot agree more.

Proper intelligent monitoring of digital assets, along with distributed tracing, should be tightly connected to the business context of the enterprise. That way, any organization can benefit from actionable business insights while improving the online and digital user experience for customers, employees, and contractors. Additionally, fast IT response based on AI-driven analysis of monitored and collected network and asset events can prevent, or at least quickly remediate, the most common security threat in nearly any modern digital organization: misconfiguration. 79% of firms have already experienced a data breach in the past two years, and 67% of them pointed to security misconfiguration as the main reason.

Misconfiguration of most software products can be detected and fixed in a timely manner by collecting network events and configuration files and analyzing them with machine learning in network observability and monitoring tools. An enterprise should require its IT department to reach full-stack observability and connect the results with the business context. This is particularly important since we know that 99% of cloud security failures are customers’ mistakes (source: Gartner). Business context should be widely adopted as part of the results delivered by intelligent observability and cybersecurity solutions.
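As a toy illustration of the configuration-analysis side, a rule-based scan like the sketch below can catch the most obvious problems; real observability products learn baselines from telemetry rather than relying on a fixed rule list, and the rules and file path here are invented.

```python
# A toy illustration of configuration analysis: flag obviously risky
# settings in collected config files. The rule list and target path are
# invented; production tools learn baselines from telemetry instead.
import re
from pathlib import Path

RISKY_PATTERNS = {
    r"^\s*PermitRootLogin\s+yes": "sshd: root login enabled",
    r"^\s*bind-address\s*=\s*0\.0\.0\.0": "database listening on all interfaces",
}

def scan_config(path: Path) -> list[str]:
    findings = []
    for line in path.read_text().splitlines():
        for pattern, message in RISKY_PATTERNS.items():
            if re.match(pattern, line):
                findings.append(f"{path}: {message}")
    return findings

print(scan_config(Path("/etc/ssh/sshd_config")))  # hypothetical target
```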


Cloud Monitoring Market Size Estimations

According to a marketing study, the global IT infrastructure monitoring market is expected to grow at a 13.6% CAGR, reaching USD 64.5 billion by 2031. Modern IT infrastructure is becoming increasingly complex and requires new skills from IT personnel, often blurring the borders between IT staff, DevOps, and development teams. With the continued move from on-prem deployments to the enterprise cloud, IT infrastructure goes to the cloud as well, so IT teams have to learn basic cloud-DevOps skills, such as scripting, cloud-based scaling, event creation, and monitoring. Furthermore, no company today offers a complete monitoring solution that can monitor every network device and software component.

Thus, IT teams have to build their monitoring solutions piece by piece, using various, mostly unconnected systems developed by different, often competing vendors. For some organizations, it also comes down to compliance, such as GDPR or ISO requirements, and to SLAs that obligate the IT department to detect, report, and fix any issue with their systems in a timely manner. In this challenging multi-system and multi-device environment, network observability becomes the key to enterprise success. IT organizations keep increasing their budgets, seeking to reach comprehensive cloud and on-prem monitoring of their systems and devices, and require employees to run network and device monitoring software on their personal devices, such as mobile phones and laptops. This trend also increases IT spend on cybersecurity solutions such as SDR and network security analysis with various SIEM tools.


Strategies to Combat Emerging Gaps in Cloud Security

As cloud customers enter 2023 with a hybrid presence across multiple clouds, they are prioritizing strategies to combat emerging gaps in cloud security.

Most large enterprises consume cloud services from several public clouds, while keeping enterprise systems and private clouds in their own corporate data centers.

One way of closing these security gaps could be adopting deep observability. We have already reviewed a few deep observability providers, such as Gigamon. While Gigamon can probably be considered the current leader of this relatively new and small market, with under $2B in annual market size, it should still watch out for newcomers arriving with shiny new products and great technologies under the hood.

CtrlStack is one of these startups, and it recently received a second round of funding from Lightspeed VC, Kearny Jackson, and Webb Investment Network.

Today’s digital-first companies and developers keep accelerating the delivery of features and applications. To achieve this, information technology operations and software development teams must collaborate closely, in a practice known as DevOps. When incidents occur, they may involve any number of systems in the digital environment: operations, infrastructure, code, or any combination of modifications made to any of them.

The CtrlStack platform connects cause and effect by tracking relationships between components in a customer’s systems, making troubleshooting easier and incident root cause analysis faster. By giving DevOps teams the tools they need, it lets developers and engineers solve problems quickly.

By building a knowledge graph of the entire infrastructure, its interconnected services, and their impact, CtrlStack can deliver the full picture while capturing changes and relationships across the whole system stack. Using the CtrlStack product, DevOps teams can view dependencies, measure the impact of changes, and examine events in real time.
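CtrlStack’s internals are not public in this post, but the core idea can be approximated with a plain directed dependency graph; the component names below are invented.

```python
# An illustrative sketch (not CtrlStack's actual implementation): model
# the stack as a directed dependency graph, then walk it to estimate the
# blast radius of a change. Component names are invented.
import networkx as nx

g = nx.DiGraph()
g.add_edges_from([
    ("change:nginx.conf", "service:api-gateway"),
    ("service:api-gateway", "service:checkout"),
    ("service:checkout", "metric:checkout-latency"),
])

# Everything reachable from the change event is potentially impacted.
impacted = nx.descendants(g, "change:nginx.conf")
print(sorted(impacted))
# ['metric:checkout-latency', 'service:api-gateway', 'service:checkout']
```

Walking the graph from a change event yields its potential blast radius, which is essentially what a root cause timeline presents in reverse.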

Key capabilities of the platform include an event timeline that lets teams browse and filter change events without having to sift through log files or survey users, and a visual representation that offers insights into operational data. Both capabilities also drive dashboards for developers and DevOps teams.

Developers also get dashboards that give one-click visibility into any changes to code commits, configuration files, or feature flags. DevOps teams get a dashboard for root cause analysis that lets them capture the full context of changes at the moment they occurred, with a searchable timeline of dependencies showing the impacted topology and the impacted metrics.


Deep Observability and Zero Trust

Zero trust architecture has established itself as a highly recognized method of safeguarding both on-premises systems and the cloud in response to the exponential rise in ransomware and other cyber threats. For example, while only 51% of EMEA IT and security professionals said they were confident implementing zero trust in 2019, that percentage rose noticeably to 83% in 2022.

Put simply, a zero trust architecture eliminates the implicit trust placed in internal network traffic, people, or devices. With this defense-in-depth approach to security, businesses can increase both productivity and security.

For businesses, implicit confidence in the technology stack can be a major problem. IT teams frequently struggle to put the right trust controls in place because they typically assume that the company owns the system, that all users are employees, or that the network was secure before. These trust indicators, however, are insufficient. Trust built on assumptions leaves organizations increasingly exposed to risk, and threat actors can exploit such careless measures of trust to facilitate network intrusion and data breaches.

A zero trust framework gets rid of any implicit trust and instead determines whether a company should grant access in each specific situation. It is more crucial now that bring-your-own-device (BYOD) initiatives have become so popular due to the rise of remote and hybrid working.

Deep observability is the addition of real-time network-level intelligence that increases the effectiveness of metric-, event-, log-, and trace-based monitoring and observability tools and reduces risk. With it comes more insight to strengthen a company’s security posture, since deep observability enables security professionals to examine the metadata that threat actors leave behind after evading endpoint detection and response systems or SIEMs. It is therefore essential to supporting a thorough zero trust strategy.

In the end, zero trust’s primary objective is to identify and categorize all network-connected devices, not only those that have endpoint agents installed and functioning, and to tightly enforce a least-privilege access policy based on a detailed analysis of each device. This cannot be done for devices or users that you cannot access.


Growth in Deep Observability Services

Gigamon, a deep observability company, has recently reported a 100% YoY increase in Deep Observability Pipeline ARR. Some market research estimates the deep observability market size at $2B in 2026. According to Gigamon, the company leads this market with a 68% share. Cybersecurity solutions should adequately protect the entire system while eliminating blind spots, which becomes particularly challenging in multi-cloud and hybrid environments. SOC teams need historical data and insights into attackers’ strategies, and most of all they need time to properly prepare and respond. With customers like the US Department of Defense and Lockheed Martin, Gigamon looks well placed to deliver high-quality deep observability network solutions and AI-based threat detection to the US and international markets.


Yet Another Investment in a Cloud Network Monitoring and Cyberdefense Startup

SynSaber has recently announced a $13 million Series A investment. SynSaber is an early-stage cybersecurity and network monitoring company that develops OT visibility and detection solutions for machine-learning cloud monitoring and network observability. SynSaber builds vendor-agnostic software for critical cloud and edge infrastructure that sends OT data to empower SIEM, SOAR, or MSSP tools. Cloud edge assets are often targeted by cybercriminals, and SynSaber provides a new line of defense and a solution for intelligent cloud monitoring on the edge.

The latest round brings total investment in the startup to $15.5 million. SynSaber is well positioned in the market for industrial asset, cloud edge, and network monitoring. The company is expanding its global footprint and gaining market momentum.

SynSaber’s H1-2022 report demonstrates the value of the startup’s approach: it uncovers that 13% of CVEs reported in 2022 have no patch or fix currently available from the vendor, while 34% of CVEs can only be patched after a firmware update. Furthermore, 23% of CVEs require local or physical access to the system. These numbers demonstrate the growing need for sophisticated, fully automated machine-learning cloud monitoring solutions for edge computing and for hybrid and private clouds. Intelligent computing-edge and cloud monitoring helps detect infrastructure issues, including security flaws and misconfigurations, in time to fix them before they are exploited by cybercriminals.
