
Beyond Prediction: Understanding and Debugging Large Language Models

Recently, I came across insightful articles on this topic, highlighting the significance of observability in Generative AI. One article, “Transform Large Language Model Observability with Langfuse” on AWS, discusses how Langfuse can be used to achieve this. Another valuable resource is “Observability for Generative AI” from IBM Think Insights, which provides a broader perspective on the challenges and techniques in this evolving field.

Within these discussions, several key tools and techniques are often referenced. For those looking to delve deeper, here are a few important concepts and potential starting points for further exploration:

Langfuse: An open-source LLM engineering platform focused on observability, evaluation, and prompt management. It helps in debugging and improving LLM applications by providing tracing, metrics, and a playground. You can find more information on their platform at https://langfuse.com/.
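
As a concrete starting point, here is a minimal sketch of tracing an LLM call with Langfuse's Python SDK; it assumes the v2 SDK's @observe decorator and Langfuse API keys configured in the environment, and the model call itself is stubbed out:

from langfuse.decorators import observe

@observe()  # records inputs, outputs, and latency as a trace in Langfuse
def answer(question: str) -> str:
    # Replace this stub with a real LLM call (OpenAI, Bedrock, etc.)
    return f"Echo: {question}"

print(answer("What is LLM observability?"))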

Prompt Engineering: The art and science of designing effective prompts to guide LLMs towards desired outputs. Understanding prompt engineering is fundamental to getting the most out of these models. Resources like the Google Cloud guide on Prompt Engineering at https://cloud.google.com/discover/what-is-prompt-engineering offer valuable insights.
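As a small illustration, a prompt that states the role, task, constraints, and output format typically outperforms a bare one-line request. The template below is our own sketch; the actual model call is omitted:

# A minimal prompt-engineering sketch: role, task, constraints, output format.
prompt = """You are a support assistant for an e-commerce site.
Task: classify the customer message below as BILLING, SHIPPING, or OTHER.
Constraints: answer with exactly one label and no explanation.
Message: "My package never arrived and I was still charged."
Label:"""
print(prompt)  # send this string to the LLM of your choice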

Retrieval Augmented Generation (RAG): A technique that enhances LLMs by grounding their responses in external knowledge sources, improving factual accuracy and reducing hallucinations. Learn more about RAG in Google Cloud’s explanation at https://cloud.google.com/use-cases/retrieval-augmented-generation.
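The toy sketch below shows the retrieval half of RAG end to end: embed documents, pick the most relevant one for a query, and ground the prompt in it. The bag-of-words "embedding" is a deliberate stand-in for a real embedding model and vector database:

import numpy as np

docs = ["Our returns window is 30 days.",
        "Shipping is free over $50.",
        "Support is available 24/7."]

def embed(text):
    # Hypothetical stand-in for a real embedding model
    vocab = "returns shipping support days free available".split()
    return np.array([text.lower().count(w) for w in vocab], dtype=float)

query = "How many days do I have for returns?"
sims = [embed(d) @ embed(query) for d in docs]
context = docs[int(np.argmax(sims))]

prompt = f"Answer using only this context:\n{context}\nQuestion: {query}"
print(prompt)  # the grounded prompt is what gets sent to the LLM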

Chain of Thought Prompting: A prompting strategy that encourages LLMs to break down complex problems into intermediate reasoning steps, leading to more accurate and transparent solutions. Microsoft’s documentation on .NET and Chain of Thought Prompting at https://learn.microsoft.com/en-us/dotnet/ai/conceptual/chain-of-thought-prompting provides a good overview.
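In its simplest form, chain-of-thought prompting is just an added instruction; the sketch below contrasts a direct prompt with a CoT variant (the example question is our own):

# A minimal chain-of-thought sketch: the added instruction nudges the model
# to emit intermediate reasoning steps before the final answer.
question = "A store sells pens at 3 for $2. How much do 12 pens cost?"

direct_prompt = f"{question}\nAnswer:"
cot_prompt = f"{question}\nLet's think step by step, then give the final answer."

print(cot_prompt)  # compare model outputs for the two prompts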

Fine-tuning LLMs: The process of further training pre-trained LLMs on specific datasets to improve their performance on particular tasks or domains. SuperAnnotate’s blog post on “Fine-tuning large language models (LLMs) in 2025” at https://www.superannotate.com/blog/llm-fine-tuning offers insights into this crucial technique.
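For flavor, here is a condensed fine-tuning sketch using Hugging Face's Trainer; the base checkpoint, dataset slice, and hyperparameters are illustrative assumptions rather than a recommended recipe:

from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "distilbert-base-uncased"  # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Tiny shuffled slice of IMDB reviews, tokenized for the model
dataset = load_dataset("imdb", split="train[:1%]").shuffle(seed=0)
dataset = dataset.map(lambda x: tokenizer(x["text"], truncation=True), batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=dataset,
    tokenizer=tokenizer,  # enables padded batching via the default collator
)
trainer.train()  # further trains the pre-trained model on the task data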

LLM observability is a rapidly evolving field, and staying updated with the latest tools and techniques is essential for anyone working with these powerful models. The ability to effectively monitor, debug, and evaluate LLM performance will be key to unlocking their full potential across various applications.


Meta AI’s Brain2Qwerty: A Leap Forward in Non-Invasive Sentence Decoding

Introduction

Meta AI’s recent introduction of Brain2Qwerty marks a significant advancement in the field of non-invasive brain-computer interfaces (BCIs). This innovative system demonstrates the potential to translate neural activity directly into text, offering a new avenue for communication for individuals with motor impairments. By leveraging magnetoencephalography (MEG) and deep learning, Brain2Qwerty achieves a notable milestone in decoding full sentences from brain signals.

How Brain2Qwerty Works

At the core of Brain2Qwerty lies a sophisticated deep learning model trained on MEG data collected while participants silently imagined themselves typing sentences. The model learns to map the complex patterns of brain activity associated with different letter sequences, effectively deciphering the intended words from the neural signals.
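Meta has not released Brain2Qwerty's code, so the following is only a toy sketch of the general idea, a network mapping a window of multi-channel MEG signals to a distribution over characters; the channel count, window length, and layer choices are all assumptions:

# Toy sketch (not Meta's architecture): MEG window -> character probabilities.
import torch
import torch.nn as nn

n_channels, n_timesteps, n_chars = 208, 100, 27  # assumed MEG geometry, a-z + space

model = nn.Sequential(
    nn.Conv1d(n_channels, 64, kernel_size=5),  # extract spatio-temporal features
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),
    nn.Flatten(),
    nn.Linear(64, n_chars),                    # per-window character logits
)

meg_window = torch.randn(1, n_channels, n_timesteps)  # one fake MEG segment
char_probs = model(meg_window).softmax(dim=-1)
print(char_probs.shape)  # torch.Size([1, 27])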

Key Features and Advancements

  • Non-Invasive Approach: Unlike invasive methods that require brain implants, MEG is a non-invasive technique that measures brain activity from outside the skull, making it safer and more accessible for potential users.
  • Full Sentence Decoding: Brain2Qwerty goes beyond previous attempts at decoding single words or letters, enabling users to express themselves more naturally and fluently.
  • High Accuracy: The system demonstrates promising accuracy in decoding sentences, showcasing the potential for practical applications.

Comparison with Other Approaches

Brain2Qwerty builds upon previous research in BCI technology, offering several key advantages:

  • Improved Accuracy: Compared to earlier systems that relied on invasive methods or focused on decoding individual letters or words, Brain2Qwerty achieves significantly higher accuracy in decoding full sentences.
  • Non-Invasive Nature: The use of MEG eliminates the risks associated with invasive brain implants, making the technology more accessible and potentially safer for a wider range of users.
  • Potential for Real-World Applications: The ability to decode full sentences opens up new possibilities for real-world applications, such as enabling communication for individuals with severe motor impairments.

Challenges and Future Directions

While Brain2Qwerty represents a significant step forward, several challenges remain:

  • Generalization: The current system is trained on a limited dataset and may not generalize well to new users or different languages.
  • Real-Time Performance: Real-time decoding of complex sentences remains a challenge, requiring further optimization of the model and hardware.
  • Ethical Considerations: As with any emerging BCI technology, it is crucial to address ethical concerns related to privacy, data security, and potential misuse.

Conclusion

Meta AI’s Brain2Qwerty is a groundbreaking achievement in the field of non-invasive BCI technology. By demonstrating the feasibility of decoding full sentences from brain signals, this system offers a glimpse into a future where individuals with motor impairments can communicate more freely and naturally. While challenges remain, continued research and development in this area hold the promise of transforming the lives of many people.

Additional Considerations

  • Integration with Other Technologies: Future research could explore the integration of Brain2Qwerty with other assistive technologies, such as speech synthesis or text-to-speech systems, to create more comprehensive communication solutions.
  • Clinical Applications: The potential clinical applications of Brain2Qwerty are vast, including enabling communication for individuals with locked-in syndrome, amyotrophic lateral sclerosis (ALS), and other neurological conditions.
  • Societal Impact: The development of advanced BCI technologies raises important societal questions about the ethical implications of mind-reading technologies and the potential impact on human autonomy and privacy.

By addressing these challenges and exploring the full potential of this technology, researchers can pave the way for a future where brain-computer interfaces play a transformative role in enhancing human communication and well-being.


The Democratization of AI: Open Source, Multilingual Models, and Empowering Developers

Introduction

The landscape of artificial intelligence is rapidly evolving, shifting from closed, proprietary systems to a more open, accessible, and collaborative ecosystem. This transformation is driven by the increasing availability of powerful open-source tools and models, enabling developers worldwide to innovate and build cutting-edge AI applications. In this post, we’ll delve into the key trends shaping modern AI, with a spotlight on open-source contributions and the impact of multilingual large language models (LLMs) like Alibaba’s Babel.

The Rise of Open-Source AI

Open source has been a cornerstone of software development for decades, and its influence is now profoundly impacting the AI domain. The benefits are clear:

  • Accessibility: Open-source projects lower the barrier to entry, allowing developers to experiment and learn without substantial financial investments.
  • Collaboration: A global community of developers contributes to and improves open-source tools, leading to faster innovation and higher quality.
  • Transparency: Open-source code allows for scrutiny and verification, fostering trust and accountability.
  • Customization: Developers can adapt and modify open-source tools to meet specific needs, enabling tailored solutions.

Key Open-Source Technologies

Let’s explore some of the critical open-source technologies that are driving AI development:

  • TensorFlow and PyTorch: These are the dominant deep learning frameworks, providing comprehensive tools for building and training neural networks.
    • Example (PyTorch):

import torch
import torch.nn as nn

# Simple linear model: 10 input features -> 1 output
model = nn.Linear(10, 1)
input_tensor = torch.randn(1, 10)  # a batch of one random example
output = model(input_tensor)
print(output)

  • Hugging Face Transformers: This library provides pre-trained models and tools for natural language processing (NLP), making it easier to work with LLMs.
    • Example (Hugging Face):

from transformers import pipeline

# Downloads a default English sentiment model on first use
classifier = pipeline("sentiment-analysis")
result = classifier("I love using open-source AI!")
print(result)

  • Scikit-learn: A versatile machine learning library for tasks like classification, regression, and clustering.
    • Example (Scikit-learn):

from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

# Load the classic iris dataset
iris = datasets.load_iris()
X, y = iris.data, iris.target

# Hold out 20% of the data for evaluation
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

# Train a logistic regression classifier
model = LogisticRegression(max_iter=200)
model.fit(X_train, y_train)

# Evaluate on the held-out test set
accuracy = model.score(X_test, y_test)
print(f"Accuracy: {accuracy}")

Alibaba’s Babel: A Multilingual Leap

One of the most significant developments in the accessibility of AI is the emergence of multilingual LLMs. Alibaba’s Babel is a prime example, designed to serve over 90% of global speakers. This represents a monumental step towards breaking down language barriers in AI applications.

  • Global Reach: Babel’s support for a wide range of languages enables developers to create AI solutions that cater to diverse audiences.
  • Enhanced Communication: Multilingual LLMs facilitate natural and seamless communication between humans and AI systems.
  • Cultural Sensitivity: By understanding and generating text in multiple languages, these models can be tailored to specific cultural contexts.
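
As a sketch of what multilingual inference looks like in practice with the Hugging Face pipeline API (the checkpoint name below is a placeholder, not Babel's actual model ID):

from transformers import pipeline

# Hypothetical setup; substitute a real multilingual checkpoint such as Babel's
generator = pipeline("text-generation", model="YOUR-MULTILINGUAL-MODEL")
for prompt in ["Bonjour, comment ça va ?", "¿Qué es la IA?", "什么是机器学习？"]:
    print(generator(prompt, max_new_tokens=40)[0]["generated_text"])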

The Impact on Developers

The availability of open-source tools and multilingual models has a profound impact on developers:

  • Faster Development Cycles: Pre-trained models and libraries reduce the time and effort required to build AI applications.
  • Increased Innovation: Developers can focus on creating novel solutions rather than reinventing the wheel.
  • Democratized Access: Developers from all backgrounds can participate in the AI revolution, regardless of their resources.
  • Global Applications: Multilingual models allow developers to create applications that can be used by a truly global audience.

Ethical Considerations

As AI becomes more powerful and accessible, it’s crucial to address ethical considerations:

  • Bias and Fairness: Developers must be mindful of potential biases in data and models, and strive to create fair and equitable AI systems.
  • Privacy and Security: Protecting user data and ensuring the security of AI systems is paramount.
  • Responsible Use: Developers must consider the potential impact of their AI applications and ensure they are used responsibly.

The modern development of AI is characterized by a strong emphasis on open source, accessibility, and multilingual capabilities. Technologies like TensorFlow, PyTorch, Hugging Face Transformers, and Alibaba’s Babel are empowering developers to create innovative AI solutions that can benefit people worldwide. As the AI landscape continues to evolve, it’s essential to embrace collaboration, ethical considerations, and a commitment to democratizing access to this transformative technology.


A Holistic Approach to Network Observability: Beyond the “Five Steps”

In a recent article on BetaNews, Song Pang outlines five steps to achieve network observability: Network Discovery and Data Accuracy, Network Visualizations, Network Design and Assurance, Automation, and Observability. While these steps provide a solid foundation, we believe there are alternative approaches that can be more effective, especially in today’s rapidly evolving network environments. Here, we propose a different set of steps and actions to achieve network observability, explaining why this approach might be superior with clear examples and historical facts.

Holistic Data Integration

The BetaNews approach focuses on accurate data from logs, traces, traffic paths, and SNMP. We suggest taking a wider, system-level view: instead of focusing only on traditional data sources, integrate data from a broader array of sources, including cloud services, IoT devices, and user behavior analytics. This holistic view ensures that no part of the network is overlooked.

[Image: Advanced Automated Network Monitoring. © 2024 PacketAI and DALL-E]

For example, back in 2016, a major retail company faced a significant data breach because their network monitoring only covered traditional data sources. By integrating data from IoT devices and user behavior analytics, they could have detected the anomaly earlier.

Real-Time Anomaly Detection with AI

The BetaNews approach emphasizes network visualizations and manual baselines. That is a good start, but consider implementing AI-driven real-time anomaly detection: AI can learn normal network behavior and flag deviations instantly, reducing the time needed to identify and resolve issues.
In 2020, a financial institution implemented AI-driven anomaly detection, which reduced their mean time to resolution (MTTR) by 40% compared to their previous manual baseline approach.
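
A minimal sketch of the idea, using scikit-learn's IsolationForest on simple per-flow features; the feature set, baseline distribution, and contamination rate are illustrative assumptions:

# Toy anomaly detection on network flow features: bytes, packets, duration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal([500, 40, 2.0], [50, 5, 0.5], size=(1000, 3))  # baseline traffic
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

new_flows = np.array([[510, 41, 2.1],      # looks normal
                      [9000, 800, 0.1]])   # exfiltration-like burst
print(model.predict(new_flows))  # 1 = normal, -1 = anomaly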

Proactive Incident Response

The BetaNews piece does not suggest this, but you should stay ahead of network issues. Develop a proactive incident response strategy that includes automated responses to common problems; this reduces downtime and ensures quicker recovery from incidents. A tech company that implemented automated incident response for its network in 2018 reduced its downtime during outages by 30%.

Continuous Improvement and Feedback Loops

Establish continuous improvement and feedback loops. Regularly review and update network policies and configurations based on the latest data and trends.
In 2019, a healthcare provider adopted continuous improvement practices for their network observability. This led to a 25% improvement in network performance over a year.

User-Centric Observability

While the BetaNews approach ends with achieving observability, you can go further by focusing on user-centric observability. Ensure that the network observability strategy aligns with user experience and business goals, so that the network not only functions well but also supports the overall objectives of the organization.
A global e-commerce company in 2021 shifted their focus to user-centric observability. This alignment with business goals led to a 20% increase in customer satisfaction and a 15% boost in sales.

Common Mistakes in Network Monitoring

While striving for network observability, it’s crucial to be aware of common mistakes that can undermine your efforts:
Many teams adopt a reactive stance, addressing threats only after they occur, which leaves networks vulnerable to evolving threats. A proactive approach, with constantly updated antivirus and cybersecurity practices, is essential. Other frequent pitfalls include:

  • Focusing solely on devices and neglecting applications, which leads to incomplete visibility. Monitoring both devices and applications ensures a comprehensive view of network performance and potential vulnerabilities.
  • Failing to monitor network logs, which can result in missed signs of breaches or performance issues. Regular log analysis is crucial for early detection of anomalies.
  • Not anticipating network expansion, which can lead to scalability issues. Planning for growth ensures that the network can handle increased traffic and new devices.
  • Using outdated tools, which can leave networks exposed to new types of threats. Regularly updating and upgrading monitoring tools is vital to maintain robust security.

Conclusion

While the five steps outlined by BetaNews provide a structured approach to network observability, the alternative steps proposed here offer a more comprehensive, proactive, and user-centric strategy. By integrating diverse data sources, leveraging AI, implementing proactive incident response, establishing continuous improvement practices, and focusing on user experience, organizations can achieve a higher level of network observability that not only ensures network performance but also supports business objectives.


The Growing DevSecOps Market: Current Trends and Future Prospects

The DevSecOps market is experiencing significant growth, driven by the increasing demand for secure software development practices. According to recent research, the market is projected to reach a staggering US$ 45.93 billion by 2032, growing at a CAGR of 24.7%. This rapid expansion underscores the critical role of integrating security into the DevOps process, ensuring that applications are secure from the outset.

Current Popular DevSecOps Solutions

Several DevSecOps solutions are currently leading the market, each offering unique features to enhance security throughout the software development lifecycle:

1. Jenkins: Widely adopted for continuous integration and continuous delivery (CI/CD), Jenkins automates various aspects of software development, ensuring security checks are integrated seamlessly.

2. Aqua Security: This platform focuses on cloud-native applications, providing comprehensive CI/CD integration and thorough vulnerability scanning.

3. Checkmarx: Known for its robust static code analysis capabilities, Checkmarx helps identify vulnerabilities early in the development process.

4. SonarQube: An open-source tool that offers static code analysis, SonarQube is popular for its ability to detect code quality issues and security vulnerabilities.


Emerging Trends and Future Solutions

Looking ahead, several trends and emerging solutions are poised to shape the DevSecOps landscape over the next 24 months:

  1. Automation and AI Integration: Automation will continue to drive efficiency in DevSecOps, with AI playing a crucial role in threat detection and response. This trend will enable faster identification and remediation of security issues.
  2. Tool Consolidation: Organizations are moving towards consolidating their security tools to streamline processes and reduce costs. This approach will enhance the overall security posture by providing a unified view of the security landscape.
  3. Infrastructure as Code (IaC): The adoption of IaC is expected to grow, allowing for more consistent and secure infrastructure management. This practice ensures that security is embedded in the infrastructure from the beginning.
  4. Shift-Left Security: Emphasizing security earlier in the development process, known as “shift-left” security, will become more prevalent. This approach helps in identifying and addressing vulnerabilities before they become critical issues; a minimal automation sketch follows this list.
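
As one minimal shift-left sketch, a commit hook or CI step can fail the build when a static analyzer such as Bandit reports findings; the source path and flags below are assumptions about the project layout:

# Fail the pipeline step if the Bandit static analyzer finds issues in src/.
import subprocess
import sys

result = subprocess.run(["bandit", "-r", "src/", "-q"], capture_output=True, text=True)
if result.returncode != 0:
    print(result.stdout)
    sys.exit("Security findings detected; fix them before merging.")
print("No security findings; proceeding.")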

Conclusion

The DevSecOps market is on a robust growth trajectory, driven by the need for secure software development practices. Current solutions like Jenkins, Aqua Security, Checkmarx, and SonarQube are leading the way, while emerging trends such as automation, tool consolidation, IaC, and shift-left security are set to shape the future. As organizations continue to prioritize security, the DevSecOps market will undoubtedly see further innovation and expansion.

References:

1. DevSecOps Market Size Worth US$ 45.93 Billion by 2032
2. 25 Top DevSecOps Tools (Ultimate Guide for 2024)
3. 13 Best DevSecOps Tools for 2024 (Paid & Free)
4. DevSecOps Trends for 2024
5. The Future of DevSecOps: Emerging Trends in 2024 and Beyond


Comparing No-Code Mobile Platforms: GoodBarber and Beyond

In the ever-evolving world of mobile app development, no-code platforms have emerged as game-changers, enabling individuals and businesses to create fully functional mobile apps without writing a single line of code. This blog post will compare some of the leading no-code mobile platforms, with a special focus on GoodBarber, to help you choose the best tool for your needs.

GoodBarber: A Closer Look

GoodBarber is a popular no-code platform that allows users to build professional-grade mobile apps quickly and efficiently. According to Geeky Gadgets, you can create a fully functional app in under 30 minutes using GoodBarber. Here are some of its standout features:

• User-Friendly Interface: GoodBarber offers an intuitive drag-and-drop interface, making it accessible even for those with no technical background.

• Customization Options: The platform provides a wide range of design templates and customization options, allowing users to create unique and visually appealing apps.

• Advanced Features: GoodBarber supports features like push notifications, geofencing, and in-app purchases, which are essential for modern mobile apps.

• Customer Support: Users have praised GoodBarber’s responsive customer support, which is crucial for resolving issues quickly.

Other Leading No-Code Platforms

While GoodBarber is a strong contender, several other no-code platforms offer unique features and capabilities. Here’s a comparison of some of the top alternatives:

Bubble.io

• Strengths: Highly flexible and powerful, suitable for complex applications.

• Weaknesses: Steeper learning curve compared to other no-code platforms.

• Ideal For: Users who need extensive customization and are willing to invest time in learning the platform.

FlutterFlow

• Strengths: Integrates seamlessly with Google’s Firebase, supports both web and mobile apps.

• Weaknesses: Limited design customization compared to GoodBarber.

• Ideal For: Developers looking for a platform that supports both web and mobile app development.

Adalo

• Strengths: Very user-friendly, great for beginners.

• Weaknesses: Limited scalability for larger projects.

• Ideal For: Small businesses and startups looking to create simple apps quickly.

Glide

• Strengths: Excellent for creating data-driven apps using Google Sheets.

• Weaknesses: Limited to the functionalities provided by Google Sheets.

• Ideal For: Users who need to create apps that are heavily reliant on data.

Webflow

• Strengths: Powerful design capabilities, great for web apps.

• Weaknesses: Not specifically tailored for mobile app development.

• Ideal For: Designers and developers focused on web applications.

Key Considerations When Choosing a No-Code Platform

When selecting a no-code platform, consider the following factors:

• Ease of Use: How intuitive is the platform? Can you start building immediately, or is there a steep learning curve?

• Customization: Does the platform offer enough design and functionality customization to meet your needs?

• Scalability: Can the platform handle the growth of your app, or will you need to switch to a more robust solution as your user base expands?

• Support and Community: Is there a strong support system and active community to help you troubleshoot and improve your app?

Conclusion

No-code platforms like GoodBarber have democratized mobile app development, making it accessible to a broader audience. Whether you’re a small business owner, a startup founder, or an individual with a great app idea, there’s likely a no-code platform that fits your needs.


Embracing AI in Cyberdefense: Practical Tips for Successful Adoption

Artificial Intelligence (AI) is often seen as a double-edged sword in the realm of cybersecurity. While it can be a formidable ally in defending against cyber threats, it also presents new challenges and risks. A recent report by GetApp highlights the growing recognition among IT professionals of AI’s potential in cyberdefense and provides practical tips for its successful adoption. Let’s delve into the key insights from this report and explore how organizations can effectively integrate AI into their cybersecurity strategies.

The Growing Role of AI in Cyberdefense

According to the report, a significant majority of IT and data security professionals view AI as more of an ally than a threat. Specifically, 64% of U.S. respondents see AI as a beneficial tool in their cybersecurity arsenal. This positive sentiment is driven by AI’s capabilities in areas such as network traffic monitoring, threat detection, and automated response.

Key Benefits of AI in Cybersecurity

1. Enhanced Threat Detection: AI can analyze vast amounts of data in real-time, identifying anomalies and potential threats that might go unnoticed by human analysts. This capability is crucial for early detection and mitigation of cyber attacks.
2. Automated Response: AI can automate routine tasks and responses to common threats, freeing up human resources to focus on more complex issues. This not only improves efficiency but also reduces the time taken to respond to incidents (see the sketch after this list).
3. Predictive Analytics: By leveraging machine learning and deep learning algorithms, AI can predict potential vulnerabilities and threats, allowing organizations to proactively strengthen their defenses.
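
The toy sketch below illustrates the automated-response pattern from point 2: score events, contain high-confidence threats immediately, and queue the rest for analysts. The threshold and the firewall stub are illustrative assumptions:

SUSPICION_THRESHOLD = 0.9  # assumed cutoff for automatic containment

def block_ip(ip: str) -> None:
    # Stand-in for a real firewall API call (e.g., pushing a deny rule)
    print(f"[auto-response] blocking {ip}")

def handle_event(event: dict) -> None:
    if event["score"] >= SUSPICION_THRESHOLD:
        block_ip(event["src_ip"])          # contain immediately
    else:
        print(f"[queue] {event['src_ip']} for analyst review")  # human-in-the-loop

handle_event({"src_ip": "203.0.113.7", "score": 0.95})
handle_event({"src_ip": "198.51.100.2", "score": 0.42})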

Practical Tips for AI Adoption in Cyberdefense

1. Plan Around AI’s Strengths: Organizations should set clear goals for AI deployment, focusing on areas where AI can provide the most value, such as threat detection and prevention. This involves understanding the specific cyber threats faced by the organization and how AI can address them.

2. Prioritize Human-in-the-Loop (HITL) Approaches: While AI can automate many tasks, human oversight remains crucial. HITL approaches ensure that AI systems are guided and monitored by human experts, enhancing their effectiveness and reliability.

3. Get Data AI-Ready: The effectiveness of AI in cybersecurity depends heavily on the quality of data it is trained on. Organizations should invest in data preparation, ensuring that their datasets are comprehensive, accurate, and relevant to the threats they aim to mitigate.

Challenges and Considerations

Despite its potential, the adoption of AI in cybersecurity is not without challenges. Key obstacles include:

• Skill Gaps: There is a shortage of professionals skilled in both AI and cybersecurity, which can hinder effective implementation.

• Data Privacy: Ensuring that AI systems comply with data privacy regulations is critical, as mishandling sensitive information can lead to significant legal and reputational risks.

• Trust and Transparency: Building trust in AI systems requires transparency in how they operate and make decisions. Organizations must ensure that their AI tools are explainable and accountable.

Conclusion

AI holds immense promise for enhancing cybersecurity, offering advanced capabilities in threat detection, automated response, and predictive analytics. However, successful adoption requires careful planning, human oversight, and robust data management. By following the practical tips outlined in the GetApp report, organizations can harness the power of AI to build more resilient and proactive cyber defenses.


Machine Learning for Network Security, Detection and Response

Cybersecurity is the defense mechanism used to prevent malicious attacks on computers and electronic devices. As technology advances, detecting malicious activity and flaws in computer networks requires ever more sophisticated skills. This is where machine learning can help.

Machine learning is a subset of artificial intelligence that uses algorithms and statistical analysis to draw inferences about a computer’s behavior. It can help organizations address new security challenges, such as scaling up security operations, detecting unknown and advanced attacks, and identifying trends and anomalies. Machine learning can also help defenders more accurately detect and triage potential attacks, although it may introduce new attack surfaces of its own.

Machine learning can be used to detect malware in encrypted traffic, find insider threats, predict “bad neighborhoods” online, and protect data in the cloud by uncovering suspicious user behavior. However, machine learning is not a silver bullet for cybersecurity: its effectiveness depends on the quality and quantity of the data used to train the models, as well as the robustness and adaptability of the algorithms.

A common challenge faced by machine learning in cybersecurity is dealing with false positives, which are benign events that are mistakenly flagged as malicious. False positives can overwhelm analysts and reduce their trust in the system. To overcome this challenge, machine learning models need to be constantly updated and validated with new data and feedback.

Another challenge is detecting unknown or zero-day attacks, which are exploits that take advantage of vulnerabilities that have not been discovered or patched yet. Traditional security solutions based on signatures or rules may not be able to detect these attacks, as they rely on prior knowledge of the threat. Machine learning can help to discover new attack patterns or adversary behaviors by using techniques such as anomaly detection, clustering, or reinforcement learning.

Anomaly detection is the process of identifying events or observations that deviate from the normal or expected behavior of the system. For example, machine learning can detect unusual network traffic, login attempts, or file modifications that may indicate a breach.
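
A back-of-the-envelope sketch of this idea: compute a z-score for each new observation against a historical baseline and flag large deviations (the data here is synthetic):

# Flag hours whose login volume deviates sharply from the historical mean.
import numpy as np

history = np.array([42, 38, 45, 40, 44, 39, 41, 43])  # logins per hour, baseline
mean, std = history.mean(), history.std()

for count in [44, 41, 130]:                 # new observations
    z = (count - mean) / std
    status = "ANOMALY" if abs(z) > 3 else "normal"
    print(f"{count} logins -> z={z:.1f} ({status})")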

Clustering is the process of grouping data points based on their similarity or proximity. For example, machine learning can cluster malicious domains or IP addresses based on their features or activities, and flag them as “bad neighborhoods” online.
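
The sketch below clusters domains by a few toy lexical features so that algorithmically generated names separate from ordinary ones; the features and cluster count are illustrative choices:

# Group domains by simple lexical features; DGA-like names tend to cluster.
import numpy as np
from sklearn.cluster import KMeans

domains = ["google.com", "github.com", "xk9qz2vb.net", "q8w7e6r5.net", "openai.com"]

def features(d):
    name = d.split(".")[0]
    digits = sum(c.isdigit() for c in name)
    vowel_ratio = sum(c in "aeiou" for c in name) / max(len(name), 1)
    return [len(name), digits, vowel_ratio]

X = np.array([features(d) for d in domains])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for d, label in zip(domains, labels):
    print(d, "-> cluster", label)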

Reinforcement learning is the process of learning by trial and error, aiming to maximize a cumulative reward. For example, machine learning can learn to optimize the defense strategy of a system by observing the outcomes of different actions and adjusting accordingly.
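
As a heavily simplified, one-step (bandit-style) sketch of the reinforcement-learning idea, the agent below learns from trial and error whether to monitor or block given a coarse threat level; the states, actions, and reward model are toy assumptions:

# Tiny trial-and-error learner: choose "monitor" or "block" per threat level.
import random

actions = ["monitor", "block"]
states = ["low", "high"]                       # coarse threat level
Q = {(s, a): 0.0 for s in states for a in actions}
alpha, epsilon = 0.1, 0.2

def reward(state, action):
    if state == "high":
        return 1.0 if action == "block" else -1.0   # blocking real threats pays off
    return 1.0 if action == "monitor" else -0.5     # blocking benign traffic costs

for _ in range(2000):
    s = random.choice(states)
    a = random.choice(actions) if random.random() < epsilon else \
        max(actions, key=lambda x: Q[(s, x)])
    Q[(s, a)] += alpha * (reward(s, a) - Q[(s, a)])  # one-step value update

print({k: round(v, 2) for k, v in Q.items()})  # learns: block "high", monitor "low"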

Machine learning can also leverage statistics, time, and correlation-based detections to enhance its performance. These indicators can help to reduce false positives, identify causal relationships, and provide context for the events. For example, machine learning can use statistical methods to calculate the probability of an event being malicious based on its frequency or distribution. It can also use temporal methods to analyze the sequence or duration of events and detect anomalies or patterns. Furthermore, it can use correlation methods to link events across different sources or domains and reveal hidden connections or dependencies.
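
For instance, a frequency-based score can be computed directly from event counts, with rarer event types earning higher suspicion; the event stream below is synthetic:

# Frequency-based scoring: rare event types earn a higher suspicion score.
from collections import Counter

events = ["login", "login", "read", "login", "read", "priv_escalation"]
freq = Counter(events)
total = len(events)

for e in sorted(set(events)):
    suspicion = 1 - freq[e] / total        # rarer -> more suspicious
    print(f"{e}: p={freq[e]/total:.2f}, suspicion={suspicion:.2f}")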

Machine learning is a powerful tool for cybersecurity, but it also requires careful design, implementation, and evaluation. It is not a one-size-fits-all solution, but rather a complementary approach that can augment human intelligence and expertise. Machine learning can help analysts navigate the digital ocean of incoming security events, where by some estimates as many as 90% are false positives. The need for real-time security stream processing is now greater than ever.


Predictive Networks: is it the Future?

Post-ChatGPT update as of May 26th, 2023:
Cisco and their EVP Liz Centoni have probably never been so wrong in their predictions!

“Predictive Network” is a cool term, but it boils down to things that Cisco EVP Liz Centoni apparently no longer considers cool or trending: Artificial Intelligence (AI) and Machine Learning (ML), which collect and analyze millions of network events to deliver problem-solving insights. AI-based Predictive Networks, which, by the way, are one of Liz’s 2023 “trend” predictions, contradict her own statement that

The cloud and AI are no longer frontiers

Obviously, Cisco’s EVP and Chief Strategy Officer Centoni is referring to Cisco’s own Predictive Networks which, quoting Cisco,

 rely on a predictive engine in charge of computing (statistical, machine learning) models of the network using several telemetry sources

So how exactly is AI “no longer the frontier”, Liz, if machine learning powers the very Predictive Networks that you predict will become a 2023 trend?
