Nybble

Nybble Leader server configuration.

Nybble configuration

Nybble Leader configuration can be modified by editing the "config.properties" file in the "$NYBBLE_HOME/conf" folder.

Sigma

The default Sigma rules folder is "$NYBBLE_HOME/rules". To specify another rule folder, edit:

sigma.rules.folder=$Path_To_Rule_Folder

The default Sigma global map path is "$NYBBLE_HOME/mapping/sigma_global_ECS_map.json". If you have created your own Sigma global map in another location, edit:

sigma.global.map=$Path_To_Global_Map

Nybble automatically creates a map file for each Sigma rule in the rules folder. By default, Sigma rule map files are stored in "$NYBBLE_HOME/mapping/sigma-maps/". To specify another rule map folder, edit:

sigma.maps.folder=$Path_To_Rule_Map_Folder

Ensure that the "nybble" service user has permission to read and write in the specified folders.
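For example, a complete Sigma section pointing at custom locations might look like the following (the paths are hypothetical, not defaults):

```
sigma.rules.folder=/opt/nybble/custom-rules
sigma.global.map=/opt/nybble/mapping/custom_global_ECS_map.json
sigma.maps.folder=/opt/nybble/mapping/custom-sigma-maps
```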

Kafka

The Kafka configuration section contains parameters for the Kafka consumer that retrieves events/logs.

Kafka bootstrap

Configure the Kafka broker used to consume events:

kafka.bootstrap.servers=localhost:9092

This can be a comma-separated list of host:port pairs (host1:9092,host2:9092,...).

Specify the name of the consumer group the Kafka consumer belongs to:

kafka.group.id=nybble_kafka_consumer

Specify the names of the topics to consume. There are two ways to provide topic names:

Topic names as a comma-separated list:

kafka.topic.name.list=windows-logs,linux-logs,zeek-logs

Topic names as a regular expression pattern:

kafka.topic.name.regex=.*-logs$
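A pattern can be sanity-checked against a list of expected topic names before deploying. A minimal sketch using grep (the topic names are the examples above plus one deliberately non-matching name):

```shell
# Which of these topic names match the pattern .*-logs$ ?
printf '%s\n' windows-logs linux-logs zeek-logs audit-events | grep -E '.*-logs$'
# → windows-logs, linux-logs, zeek-logs (audit-events does not match)
```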

Kafka Consumer SSL Configuration

If the Kafka broker is configured to use SSL, the following parameters need to be set to allow the Kafka consumer to get events:

# Enable SSL.
kafka.security.protocol=SSL
# Values should reflect the configuration on Kafka Broker side.
kafka.ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1
# Specify TrustStore location.
kafka.ssl.truststore.location=./security/nybble.truststore.jks
# Specify TrustStore password.
kafka.ssl.truststore.password=changeit
# Optionally, the TrustStore type can be specified.
kafka.ssl.truststore.type=

Flink needs to be configured properly when SSL is enabled. Refer to the Flink configuration - Java options section.

If client authentication has also been configured on the Kafka broker, the following parameters need to be set:

# Specify Nybble client KeyStore location.
kafka.ssl.keystore.location=./security/nybble-client.keystore.jks
# Specify Nybble client KeyStore password.
kafka.ssl.keystore.password=changeit
# Specify Key password for key in Nybble client KeyStore.
kafka.ssl.key.password=changeit
# Optionally, the KeyStore type can be specified.
kafka.ssl.keystore.type=

Elasticsearch

The Elasticsearch configuration section contains parameters for the Elasticsearch sinks that send processed events and alerts.

Elasticsearch bootstrap

Configure Elasticsearch host used to store processed events and alerts:

# Elasticsearch server address (Default value, localhost)
elasticsearch.host=es-node1.nybble.local
# Elasticsearch server port (Default value, 9200)
elasticsearch.port=9200

Configure the Elasticsearch index names for events and alerts (by default, new indexes are created each day):

# Elasticsearch index name for security events (Default value, events-)
elasticsearch.event.index=events-
# Elasticsearch index name for alerts (Default value, alerts-)
elasticsearch.alert.index=alerts-
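With daily index creation, the configured prefix is completed with a date suffix. A sketch of the resulting name (the exact suffix format here is an assumption; check the index names actually created on your cluster):

```shell
# Hypothetical daily index name built from the "events-" prefix
echo "events-$(date +%Y.%m.%d)"
```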

Elasticsearch performance settings

Configure the REST client timeouts; this can be useful if the Elasticsearch instance is low on resources or hosted in the cloud:

# Set REST Client RequestTimeOut (in Milliseconds. Default value, 60000)
elasticsearch.rest.request.timeout=60000
# Set REST Client ConnectTimeOut (in Milliseconds. Default value, 30000)
elasticsearch.rest.connect.timeout=30000
# Set REST Client SocketTimeOut (in Milliseconds. Default value, 60000)
elasticsearch.rest.socket.timeout=60000

Configure the number of elements buffered before being emitted by the Elasticsearch sink (1 means no buffering):

# Number of elements buffered before emitting for event indexes. (Default value, 1)
elasticsearch.event.bulkflushmaxactions=1
# Number of elements buffered before emitting for alert indexes. (Default value, 1)
elasticsearch.alert.bulkflushmaxactions=1

Configure the Elasticsearch sink parallelism. The parallelism value is the number of tasks that will be allocated to each Elasticsearch sink:

# Set ElasticsearchSink parallelism for Alert stream.
elasticsearch.alert.streamparallelism=4
# Set ElasticsearchSink parallelism for Event stream.
elasticsearch.event.streamparallelism=6

Parallelism values should not exceed the total number of cores in the Flink cluster.
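As an illustration, on a hypothetical 12-core Flink cluster the two sink parallelism values could be sized so that their sum stays below the core count:

```
# 4 (alerts) + 6 (events) = 10 tasks, which fits a 12-core cluster
elasticsearch.alert.streamparallelism=4
elasticsearch.event.streamparallelism=6
```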

Elasticsearch sink SSL configuration

If the Elasticsearch instance is configured to use SSL, the following parameters need to be set to allow the Elasticsearch sinks to send alerts and events:

# Enable HTTPS for the Elasticsearch instance connection.
elasticsearch.proto=https
# Specify TrustStore location.
elasticsearch.truststore.path=./security/nybble.truststore.jks
# Specify TrustStore password.
elasticsearch.truststore.password=changeit
# Enable authentication on Elasticsearch API
elasticsearch.auth.enable=true
# Elasticsearch Username for authentication
elasticsearch.username=nybble-sink
# Elasticsearch Password for authentication
elasticsearch.password=changeit

Flink needs to be configured properly when SSL is enabled. Refer to the Flink configuration - Java options section.

The Elasticsearch user must have sufficient permissions to write to the indexes.

MISP

Enable MISP enrichment:

# Set value to true to enable MISP enrichment.
misp.enrichment.enable=true

Configure the MISP instance IP/host and API protocol (a default MISP installation uses HTTPS):

# MISP instance address.
misp.host=nybble-security.nybble.local
# Enable SSL for connection to MISP instance. (SSL is used by default)
misp.ssl.enable=true

MISP certificates need to be imported into the default Java KeyStore, or into your custom KeyStore if you configured one.

Create a PKCS12 KeyStore containing the default MISP installation certificate:

openssl pkcs12 -export -in /etc/pki/tls/certs/misp.local.crt -inkey /etc/pki/tls/private/misp.local.key -out misp.p12 -name $Your_Server_Common_Name

Then import the created PKCS12 into the default Java cacerts KeyStore or into another custom KeyStore:

keytool -importkeystore -srckeystore misp.p12 -srcstoretype PKCS12 -destkeystore "/etc/java/java-11-openjdk/java-11-openjdk-11.0.7.10-1.el8_1.x86_64/lib/security/cacerts" -deststoretype JKS

Configure the Automation Key for API access:

misp.automation.key=$Automation_API_Key

To generate an Automation Key, log in to MISP WebUI, go to Home and click on Automation in the menu on the right.

The default MISP map file is "$NYBBLE_HOME/MISPMaps/event_attributes_map.json". To specify another map file, edit:

misp.map=./MISPMaps/event_attributes_map.json

The MISP map file is used to map field names from events to MISP attribute fields for API requests.
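As a purely illustrative sketch (the field and attribute names below are assumptions, not taken from a real Nybble map; the actual schema is described in the Mapping files section), such a mapping could associate event fields with MISP attribute types:

```
{
  "source.ip": "ip-src",
  "destination.ip": "ip-dst",
  "url.full": "url"
}
```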

More details on MISP Map file in Mapping files section.

More information about MISP Data models: https://www.misp-project.org/datamodels/

Redis

Redis is used as a cache for MISP and DNS entries. Redis needs to be installed on each TaskManager of the Nybble infrastructure and must use the same parameters on each.

Redis bootstrap

Configure Redis server host and port:

# Redis server IP or DNS name (Default localhost)
redis.server.host=localhost
# Redis server port (Default 6379)
redis.server.port=6379

Configure Redis Database ID for DNS and MISP cache:

# Redis MISP Cache Database ID. (Value from 0 to 15, default is 0).
redis.misp.cache.id=0
# Redis DNS Cache Database ID. (Value from 0 to 15, default is 1).
redis.dns.cache.id=1

Configure key expiration for the Redis databases:

# Redis MISP cache key expiration (in seconds, default is 86400).
redis.misp.key.expire=86400
# Redis DNS cache key expiration (in seconds, default is 86400).
redis.dns.key.expire=86400

Key expiration is needed to avoid out-of-date information in the caches. When a key expires, MISP or DNS is queried again to get the latest value for that key.

Redis performance

Configure Redis connection timeout:

# Redis server connection timeout (in Milliseconds, default 3000)
redis.server.connection.timeout=3000

Configure Redis IO and Compute threads pool:

# Redis I/O thread pool size is the number of threads that can be allocated for Redis I/O.
redis.io.threads=8
# Redis computation thread pool size is the number of threads that can be allocated for Redis computation.
redis.compute.threads=8

The minimum recommended value for the Redis I/O and compute thread pools is 3. A pool with fewer threads can cause undefined behavior.

The optimal value is generally equal to the number of processors on the server.
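A minimal sketch for deriving both pool sizes from the core count (assumes a Linux host with nproc available):

```shell
# Print thread pool settings sized to the machine's processor count
cores=$(nproc)
echo "redis.io.threads=$cores"
echo "redis.compute.threads=$cores"
```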

Flink configuration

Some Flink configuration is needed to run the Nybble job. Flink configuration can be modified by editing the "flink-conf.yaml" file in the "$NYBBLE_HOME/flink/conf" folder.

Memory

Allocate more Heap Memory for the Flink Job Manager:

jobmanager.heap.size: 1024m

Besides Heap Memory, Direct Memory for the Task Manager also needs to be allocated.

Allocate more Direct Memory for the Task Manager (this parameter is not in the default Flink configuration):

taskmanager.memory.task.off-heap.size: 512m

Java options

Set Nybble Home environment variable for Job and Task Manager (These parameters are not in the default Flink configuration):

# Environment variables passed to Job and Task Manager containers
containerized.master.env.NYBBLE_HOME: NYBBLE_HOME_HERE
containerized.taskmanager.env.NYBBLE_HOME: NYBBLE_HOME_HERE

Values for these two parameters are set during installation by "install.sh" script.

If SSL is configured for Kafka and/or Elasticsearch, the TrustStore path and password need to be specified as Java options. Add the following line:

# Add custom Truststore path and password for SSL connection to Kafka and Elasticsearch
env.java.opts: "-Djavax.net.ssl.trustStore=/opt/nybble/security/nybble.truststore.jks -Djavax.net.ssl.trustStorePassword=$Your_Password"