Settings

Arkime uses configuration files in INI format by default. With the release of Arkime 5, support was added for configuration files in JSON and YAML formats, using the json or yaml extensions respectively. When using JSON or YAML, arrays can be specified either natively or with separators as in INI. The configuration file location can be specified with the -c command-line option.
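For example, arrays in INI use separators, while in YAML they can be written natively. A hedged sketch, assuming the YAML top-level keys mirror the INI section names (the interface setting is just an illustration):

[default]
interface=eth0;eth1

# a possible config.yaml equivalent
default:
  interface:
    - eth0
    - eth1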

Debugging & Logging

There are two methods for enabling debugging in Arkime applications: the --debug command-line option or a debug=N setting in the configuration file. If any --debug options are given on the command line, the configuration file setting is ignored. To increase the debug level on the command line, repeat the option; for example, --debug --debug sets the debug level to 2, as does debug=2 in the configuration file.
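For example, either of the following sets the debug level to 2; if the --debug options are given on the command line, the config file value is ignored:

/opt/arkime/bin/capture --debug --debug

[default]
debug=2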

Setting Default Description
accessLogFile EMPTY If left empty, then the viewer will log to stdout. It can be set to a filename and then logging will be directed there.
accessLogFormat :date :username %1b[1m:method%1b[0m %1b[33m:url%1b[0m :status :res[content-length] bytes :response-time ms Set the log record format -- this uses Express Morgan. The string is URL Encoded and so uses %xx to escape special characters. See the Morgan documentation for the list of keywords.
accessLogSuppressPaths EMPTY This is a semi-colon separated list of URL paths which should not be logged. Setting this to /eshealth.json will suppress logging of all calls to that endpoint.
debug 0 The debug level to use if NO --debug options are given. The higher the number, the more information is logged.

Web Settings

For Arkime applications that have a web interface there are common settings for listening for web traffic.

Setting Default Description
certFile EMPTY Public certificate to use for https, if not set then http will be used. keyFile must also be set.
keyFile EMPTY Private certificate to use for https, if not set then http will be used. certFile must also be set.
webBasePath / When configuring Arkime behind a reverse proxy, it's essential to specify the webBasePath to ensure correct request routing and cookie management. For instance, if Arkime is accessed via https://example.com/arkime, set webBasePath=/arkime/ to direct requests appropriately. This setting must include a trailing slash (/) to avoid issues. It's also recommended to strip this base path in the reverse proxy configuration to ensure seamless integration. Note: Avoid setting webBasePath in the [default] section for viewer configurations; it should only be applied in the node-specific sections.
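For example, a minimal sketch for a viewer node named capture01 (an illustrative node name) served behind a reverse proxy at https://example.com/arkime:

[capture01]
webBasePath=/arkime/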

Auth & Security Settings

Arkime applications have common authorization and security settings.
These settings should live in the [cont3xt], [parliament], [wiseService] or [default] sections depending on the Arkime application.

Setting Default Description
authCookieSameSite Lax (Since 5.0.0) For auth cookies (form, oidc, saml) what value is used for sameSite.
authCookieSecure true (Since 5.0.0) This setting controls if authentication cookies are marked as "secure". If using cookie based auth (form, oidc, saml) over http change this to false.
authMode EMPTY (Since 5.0.0) This setting controls which auth mode is used. If neither authMode nor userNameHeader is set, digest is used by default. Before Arkime 5, userNameHeader was used to select the auth mode. A minimal example follows this table. Possible values are:
  • "basic" - Use basic auth where browser based basic authentication is used.
  • "basic+oidc" - Use basic auth when present otherwise use oidc auth. This is useful when you want to support API calls using basic auth. WARNING - the password for basic auth is from Arkime which isn't synced with oidc.
  • "basic+form" - Use basic auth when present otherwise use form auth. This is useful when you want to support API calls using basic auth.
  • "digest" or EMPTY - Use digest auth where browser based digest authentication is used.
  • "form" - Use a html form to enter the user/password.
  • "oidc" - Use OIDC authentication.
  • "anonymous" - Use anonymous authentication.
  • "s2s" - Only server to server authentication is allowed
If using central viewers, for maximum security set all viewers to use s2s except the central viewer. Each user must have the Web Auth Header checkbox set to support other methods besides digest.

For authentication modes that require cookies (form, oidc, saml) you may need to also change:
  • authCookieSecure - Set to false if using http WITHOUT a reverse proxy. This is insecure.
  • authTrustProxy - Set to true if using http WITH a https reverse proxy.
authTrustProxy EMPTY (Since 5.0.0) Used to set the express "trust proxy" setting that might be needed if viewer is running in http mode and a reverse proxy is being used in https mode. Please read more about the setting and possible values.
caTrustFile EMPTY Optional file with PEM encoded certificates to use when validating certs. Make sure to read this FAQ entry.
dropGroup EMPTY Group to drop privileges to. The pcapDir must be writable by this group or by the user specified by dropUser
dropUser EMPTY User to drop privileges to. The pcapDir must be writable by this user or by the group specified by dropGroup
httpRealm Moloch HTTP Digest Realm - Used by digest mode AND for encoding user passwords. Changing the value will cause all previous stored passwords to no longer work.
loginMessage EMPTY An optional login message to present on the login page when using authMode=form
logoutUrl Only set for form authMode (Since 5.0.0) Show a logout button in the UI that will take the user to a page to logout. For form authMode we provide a working version.
requiredAuthHeader EMPTY Used for allowing an external system like LDAP or Active Directory to manage user provisioning and activation/deactivation. It is assumed that the header contains a list of user roles (like active directory groups) which are inspected against the value in requiredAuthHeaderVal (see below) to verify that the user is in the appropriate group (ie. "ArkimeUsers"). If so, the user is authorized to use the system, and if an account does not already exist for them in the Arkime user store, it is created (see userAutoCreateTmpl)
requiredAuthHeaderVal EMPTY See requiredAuthHeader for more information.
serverSecret Value of passwordSecret The server-to-server shared key. All viewers in the Arkime cluster must have the same value. It should be changed periodically.
userAuthIps For header auth 127/8 and ::1, otherwise all ips (Since 3.4.0) A comma separated list of Ips allowed to be used for authenticated calls
userAutoCreateTmpl EMPTY When using requiredAuthHeader to externalize provisioning of users to a system like LDAP/AD, this configuration parameter is used to define the JSON structure used to automatically create an Arkime user in the Arkime users database if one does not exist. The user will only be created if the requiredAuthHeader includes the expected value in requiredAuthHeaderVal, and is not automatically deleted if the auth headers are not present. Values can be populated into the creation JSON to dynamically populate fields into the user database, which are passed in as HTTP headers along with the user and auth headers. The example value below creates a user with a userId pulled from the http_auth_http_user HTTP header and a name pulled from the http_auth_mail user header. It is expected that these headers are passed in from an Apache (or similar) instance that fronts the Arkime viewer as described in the documentation supporting userNameHeader {"userId": "${this.http_auth_http_user}", "userName": "${this.http_auth_mail}", "enabled": true, "webEnabled": true, "headerAuthEnabled": true, "emailSearch": true, "createEnabled": false, "removeEnabled": false, "packetSearch": true, "roles": ["arkimeUser", "cont3xtUser"]}
userNameHeader EMPTY If using Arkime 5 or later please use authMode to select the auth mode and only use userNameHeader in header auth mode. Arkime 6 will no longer support this setting, except for the header name. Before Arkime 5 this setting controls what auth mode is used OR what header to use when using header auth. Possible values are:
  • "digest" - Run in digest mode where browser based authentication is used.
  • "oidc" - (Since 4.2.0) Use OIDC authentication.
  • "anonymous" - (Since 4.2.0) Use anonymous authentication.
  • "s2s" - (Since 4.2.0) Only server to server authentication is allowed
  • Any other value - The lowercase http header key to use for determining the user id. It is recommended you set viewHost to localhost when using this setting, or use iptables, otherwise a hacker can just send this header.
If using central viewers, for maximum security set all viewers to use s2s except the central viewer. Each user must have the Web Auth Header checkbox set to support other methods besides digest.
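A minimal form-auth sketch using the settings above (values are illustrative):

[default]
authMode=form
# only needed when viewer runs plain http behind an https reverse proxy
authTrustProxy=true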

Auth OIDC Settings

(Since 4.2.0) Arkime supports direct OIDC authentication when userNameHeader is set to oidc (or, since 5.0.0, authMode=oidc). Make sure that each user has the "Web Auth Header" checkbox selected.
These settings should live in the [cont3xt], [parliament], [wiseService] or [default] sections depending on the Arkime application.

Setting Default Description
authClientId EMPTY The OIDC client id
authClientSecret EMPTY The OIDC client secret
authDiscoverURL EMPTY The OIDC discover wellknown URL.
authOIDCScope openid The OIDC scope
authRedirectURIs EMPTY Comma separated list of redirect URLs. These should usually end with /auth/login/callback
authUserIdField EMPTY The field to use in the response from OIDC that contains the userId

Example:

authDiscoverURL=[DISCOVER or ISSUER or WELLKNOWN URL]
authClientId=[CLIENTID]
authClientSecret=[CLIENTSECRET]
authUserIdField=preferred_username
authRedirectURIs=http://ARKIMEHOST:PORT/auth/login/callback
# Optional to auto create users, make sure userId/userName variables are right
#userAutoCreateTmpl={"userId": "${this.preferred_username}", "userName": "${this.name}", "enabled": true, "webEnabled": true, "headerAuthEnabled": true, "emailSearch": true, "createEnabled": false, "removeEnabled": false, "packetSearch": true, "roles": ["arkimeUser", "cont3xtUser"] }

Users Database

The Arkime applications store all users in OpenSearch/Elasticsearch. Since you may have multiple Arkime clusters and applications, Arkime allows them to share the same users database. When these settings aren't used, Arkime falls back to a non-shared users database for viewer/capture.
These settings should live in the [cont3xt], [parliament], [wiseService] or [default] sections depending on the Arkime application.

Setting Default Description
passwordSecret password Password hash secret - All Arkime applications sharing the same users database must have the same value. Since OpenSearch/Elasticsearch used to be wide open by default, we encrypt the stored password hashes with this so a malicious person can't insert a working new account. It is also used for secure server-to-server communication if serverSecret is not set to a different value. Previously, not setting this disabled user authentication; with Arkime 5 you should use authMode=anonymous instead. Changing the value will make all previously stored passwords no longer work.
usersElasticsearch EMPTY Set this option to a shared OpenSearch/Elasticsearch cluster to use for the users index. This allows multiple Arkime clusters to share the same users index, so that the same accounts and settings work across all Arkime clusters. The elasticsearch setting should use a unique OpenSearch/Elasticsearch cluster per Arkime cluster and the usersElasticsearch setting should use a single shared OpenSearch/Elasticsearch cluster.
A comma separated list of urls to use to connect to the users OpenSearch/Elasticsearch cluster is supported. If OpenSearch/Elasticsearch requires a user/password those can be placed in the url also, http://user:pass@hostname:port or use usersElasticsearchBasicAuth
(Since 3.0) This setting applies to shortcuts too! Shortcuts will be saved in the usersElasticsearch. Note: cronQueries also has to be set in order for shortcuts to be saved to the usersElasticsearch and synced to each cluster.
usersElasticsearchAPIKey EMPTY Use an Elasticsearch API key for users DB access without requiring basic authentication. See elasticsearchAPIKey setting for information on creating and encoding an Elasticsearch API Key. See usersElasticsearch setting for information about the Users DB.
usersElasticsearchBasicAuth EMPTY Use basic auth with OpenSearch/Elasticsearch for Users DB. See elasticsearchBasicAuth setting for information on creating and encoding an OpenSearch/Elasticsearch Basic Auth. See usersElasticsearch setting for information about the Users DB. All Arkime versions also support http://user:pass@hostname:port in the usersElasticsearch setting.
usersPrefix [PREFIX] if prefix is set, otherwise EMPTY Like prefix but only for the users information if usersElasticsearch is set.
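A sketch of a shared users database configuration; the host, credentials, and prefix are illustrative:

[default]
usersElasticsearch=https://users-es.example.com:9200
usersElasticsearchBasicAuth=username:password
usersPrefix=shared_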

Capture/Viewer

The default configuration file for capture and viewer is /opt/arkime/etc/config.ini. When searching for a setting, capture and viewer employ a tiered system of configuration variables. This tiered system enables a single configuration file to be used across multiple servers. The order of sections within the configuration file is not significant; however, the specific section in which a variable resides can be crucial. A frequent issue is placing a variable in the wrong section.

Order of config variables for capture and viewer:

  1. [Optional] The section titled with the node name is given priority and used first. Arkime will always tag sessions with node:<node name>.
  2. [Optional] If a node section has a nodeClass variable, the section titled with the nodeClass name is used next. Sessions will be tagged with node:<node class name>, which is useful for monitoring different network classes.
  3. The section titled "default" is considered last.
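For example, with the illustrative config below, a node named capture01 first checks [capture01], then [dmz] because of nodeClass=dmz, and finally [default]:

[default]
elasticsearch=http://localhost:9200
viewPort=8005

[dmz]
bpf=not port 9200

[capture01]
nodeClass=dmz
interface=eth1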

OpenSearch/Elasticsearch Settings

Arkime leverages OpenSearch/Elasticsearch both as a database and as time-based storage for all saved sessions. The settings pertain to the communication between Arkime and OpenSearch/Elasticsearch. Although the terms "elasticsearch" and "es" appear in the variable names, both OpenSearch and Elasticsearch are compatible with most settings.

Arkime does NOT support having pcapDir and the OpenSearch/Elasticsearch data directory on the same file system. Arkime will NOT work in this configuration. It is strongly recommended running capture and OpenSearch/Elasticsearch on different hosts, if that is not possible, use different disks or file systems.

Setting Default Description
autoGenerateId false (Since 1.6) Use OpenSearch/Elasticsearch auto generated ids
compressES true (since 4.0.0), false (before 4.0.0) Compress requests TO OpenSearch/Elasticsearch, reduces bandwidth to OpenSearch/Elasticsearch by ~80% at the cost of increased CPU. This doesn't control responses FROM OpenSearch/Elasticsearch.
dbBulkSize 20000 Size of indexing request to send to OpenSearch/Elasticsearch. Increase if monitoring a high bandwidth network.
dbEsHealthCheck true Perform an OpenSearch/Elasticsearch health check every 30 seconds and report on slowness or if not green. For big clusters this should be disabled.
dbFlushTimeout 5 Number of seconds before we force a flush to OpenSearch/Elasticsearch
elasticsearch http://localhost:9200 A comma separated list of urls to use to connect to the OpenSearch/Elasticsearch cluster. If OpenSearch/Elasticsearch requires a user/password those can be placed in the url, http://user:pass@hostname:port, or use elasticsearchBasicAuth. While not required, if not using a load balancer, a different OpenSearch/Elasticsearch node can be specified for each Arkime capture node.
elasticsearchAPIKey EMPTY (Since 3.0.0) Use an Elasticsearch API key for access without requiring basic authentication. Once you have created an API Key, you must base64 encode the id and api_key joined by a colon. echo -n "id:api_key" | base64 is one way to generate the base64 key. Notice the -n, you have to make sure you don't encode an extra newline.
elasticsearchBasicAuth EMPTY (Since 3.1.0) Use basic auth with OpenSearch/Elasticsearch. The value can either be the plain text "user:pass" or the base64 encoded version. One way to generate the base64 version is echo -n "username:password" | base64. Notice the -n, you have to make sure you don't encode an extra newline. All Arkime versions also support http://user:pass@hostname:port in the elasticsearch setting.
elasticsearchTimeout 300 Approximate timeout for most requests to OpenSearch/Elasticsearch. OpenSearch/Elasticsearch will automatically cancel any request after this expires.
esAdminUsers EMPTY A comma separated list of users that are allowed to use the ES Admin stats tab. This tab allows the user to change several of the OpenSearch/Elasticsearch settings from the UI.
esClientCert EMPTY (Since 2.0) The public key file to use for tls client authentication with OpenSearch/Elasticsearch. Must also set esClientKey.
esClientKey EMPTY (Since 2.0) The private key file to use for tls client authentication with OpenSearch/Elasticsearch. Must also set esClientCert.
esClientKeyPass EMPTY (Since 2.0) The password for the esClientKey setting.
maxESConns 20 Max number of connections to OpenSearch/Elasticsearch from capture process
maxESRequests 500 Max number of OpenSearch/Elasticsearch requests outstanding in queue
prefix EMPTY It is possible for multiple Arkime clusters to use the same OpenSearch/Elasticsearch cluster by giving each Arkime cluster a different prefix value. The prefix value will be used in all index names that Arkime creates.
rotateIndex daily Specifies how often to create a new index in OpenSearch/Elasticsearch. Use daily or a form of hourly for busy live instances, use weekly or monthly for research instances. When choosing a value, the goal is to have the avg shard be between 50G - 150G. Prior to 1.5.0 changing the value will cause previous sessions to be unreachable through the interface, since 1.5.0 you can set queryAllIndices if you need to change the value. Prior to 1.5.0 if using the multi viewer then all Arkime clusters must have the same value.
Possible values are: hourly, daily, weekly, monthly.
1.5.0 added hourly2, hourly3, hourly4, hourly6, hourly8, hourly12 which will bucket N number of hours together. So hourly3 for example will make it so each shard has 3 hours of data. hourly1 would be the same as hourly and hourly24 would be the same as daily.
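A sketch of the connection settings above, with hypothetical hosts and credentials:

[default]
elasticsearch=https://es1.example.com:9200,https://es2.example.com:9200
elasticsearchBasicAuth=username:password
compressES=true
rotateIndex=daily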

PCAP Storage Settings

Arkime does NOT support having pcapDir and the OpenSearch/Elasticsearch data directory on the same file system. Arkime will NOT work in this configuration. It is strongly recommended running capture and OpenSearch/Elasticsearch on different hosts, if that is not possible, use different disks or file systems.

Setting Default Description
freeSpaceG 5% Delete pcap files when free space is lower than this. This does NOT delete the session records in the database. It is recommended this value is between 5% and 10% of the disk. Database deletes are done by db.pl expire. Can also be specified as a percentage.
pcapDir EMPTY Semicolon separated list of directories to save pcap files to. The directory to save pcap to is picked using round robin by default, see pcapDirAlgorithm for more options. It is important that ALL parent directories of the pcapDir have the execute bit set so that either the dropUser or dropGroup can enter the directory. The actual pcapDir should either be owned by the dropUser or have the group dropGroup and have write permission.
pcapDirAlgorithm round-robin When pcapDir is a list of directories, this determines how Arkime chooses which directory to use for each new pcap file.
Possible values: round-robin (rotate sequentially), max-free-percent (choose the directory on the filesystem with the highest percentage of available space), max-free-bytes (choose the directory on the filesystem with the highest number of available bytes).
pcapDirTemplate EMPTY When set, this strftime template is appended to pcapDir and allows multiple directories to be created based on time.
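A sketch combining the storage settings above; paths and the strftime template are illustrative:

[default]
pcapDir=/data/pcap0;/data/pcap1
pcapDirAlgorithm=max-free-bytes
# appended to pcapDir, creates per-day directories
pcapDirTemplate=/%Y/%m/%d
freeSpaceG=5%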

Capture Settings

Setting Default Description
bpf EMPTY The bpf filter used to reduce traffic. Used both on live and file traffic.
disableParsers arp.so Semicolon separated list of parsers to NOT load. Add disableParsers= in config file to load all parsers.
dnsOutputAnswers false (Since 5.1.0) When enabled Arkime will keep track of all the DNS answers separately and display in the session display.
dontSaveBPFs EMPTY Semicolon ';' separated list of bpf filters which, when matched for a session, stop the remaining pcap from being saved for the session. It is possible to specify the number of packets to save per filter by ending with a :num. For example dontSaveBPFs = port 22:5 will only save 5 packets for port 22 sessions. Currently only the initial packet is matched against the bpfs.
dontSaveTags EMPTY Semicolon ';' separated list of tags which, once capture sets one for a session, stop the remaining pcap from being saved for the session. It is likely that the initial packets WILL be saved for the session since tags usually aren't set until after several packets. It is possible to specify the number of packets to save per filter by ending with a :num.
ecsEventDataset EMPTY (Since 4.0.0) Value for event.dataset in SPI
ecsEventProvider EMPTY (Since 3.3.0) Value for event.provider in SPI
enablePacketLen false (Since 2.0) Index all the packet lengths in OpenSearch/Elasticsearch (Before 2.0) always saved packet lengths.
esBulkQuery /_bulk (Since 2.3.0) The path to send session bulk queries to.
esMaxRetries 2 (Since 2.3.1) How many times to retry OpenSearch/Elasticsearch operations
espTimeout 600 For ESP sessions, Arkime writes a session record after this many seconds of inactivity since the last session save.
filenameOps EMPTY (Since 1.5.0) A semicolon separated list of operations that use the filename. Format is fieldexpr=match%value so if you wanted to set a tag based on part of filenames that start with gre use tags=/gre-(.*)\\.pcap%gretest-\\1. Notice two backslashes are required everywhere you want one because of ini formatting.
fragsTimeout 480 Number of seconds to keep around an ip fragment and try to reassemble it
gapPacketPos true (Since 2.4.0) encode packetPos using a simple gap encoding, this reduces storage in OpenSearch/Elasticsearch
geoLite2ASN /usr/share/GeoIP/GeoLite2-ASN.mmdb;/opt/arkime/etc/GeoLite2-ASN.mmdb A Maxmind account is required to use this feature. We recommend installing and setting up the geoipupdate program included with most Linux releases.
Semicolon ';' separated list of Maxmind GeoIP ASN files. The first file found will be used. If no files are found a warning will be issued. To disable the warning set to a blank string.
Download free version
geoLite2Country /usr/share/GeoIP/GeoLite2-Country.mmdb;/opt/arkime/etc/GeoLite2-Country.mmdb A Maxmind account is required to use this feature. We recommend installing and setting up the geoipupdate program included with most Linux releases.
Semicolon ';' separated list of maxmind geoip country files. The first file found will be used. If no files are found a warning will be issued. To disable warning set to a blank string.
Download free version
icmpTimeout 10 For ICMP sessions, Arkime writes a session record after this many seconds of inactivity since the last session save.
interface EMPTY Semicolon ';' separated list of interfaces to listen on for live traffic.
interfaceOps EMPTY (Since 1.5.0) A semicolon separated list of comma separated lists of operations to set for each interface. The semicolon list must have the same number of elements as the interface setting. The format is fieldexpr=value. So for example if you have interface=eth1;eth2 you could set a tag with interfaceOps=tags=eth1;tags=eth2.
ja3Strings false (Since 2.0) Index the raw JA3 strings before hashing them
magicMode both libfile can be VERY slow. Less accurate "magicing" is available for http/smtp bodies:
  • libmagic - normal libmagic
  • libmagicnotext - libmagic, but turns off text checks
  • molochmagic - (removed in 1.5.0) subset of libmagic input files, and less accurate
  • both - (since 1.5.0) try basic and then libmagic
  • basic - 50+ of most common headers
  • none - no libmagic or basic calls
maxFileSizeG 12 (since 4.0.0), 4 (before 4.0.0) The max raw pcap file size in gigabytes. The disk should have room for at least 10*maxFileSizeG files.
maxFileTimeM 0 The max time in minutes between rotating pcap files. Useful if there is an external utility that needs to look for closed files every so many minutes. Setting to 0 means only use maxFileSizeG
maxFrags 10000 Max number of ip fragment packets to save and try and match at once
maxMemPercentage 100 If capture exceeds this amount of memory it will log an error and send itself a SIGSEGV that should produce a core file.
maxPackets 10000 Arkime writes a session record after this many packets since the last save. Arkime is only tested at 10k, anything above is not recommended.
maxPacketsInQueue 200000 How many packets per packet thread that can be waiting to be processed. Arkime will start dropping packets if the queue fills up.
maxReqBody 256 For HTTP requests, store this many bytes in the http.requestBody field. Can be disabled by setting to 0.
maxStreams 1500000 An approximate maximum number of active sessions Arkime will try to monitor
maxTcpOutOfOrderPackets 256 (Since 1.5.0) Max number of tcp packets to track while trying to reassemble the TCP stream
minPacketsSaveBPFs EMPTY Semicolon ';' separated list of bpf filters which when matched for a session have their SPI data NOT saved to OpenSearch/Elasticsearch. PCAP data is still saved however. It is possible to specify the minimum number of packets required for SPI to be saved by ending with a :num. A use case is a scanning host inside the network that you only want to capture if there is a conversation "tcp and host 10.10.10.10:1".
offlineDispatchAfter 2500 (since 2.3), 1000 (2.1 and 2.2), 5000 before 2.1 (unchangeable) How many packets to read from offline pcap files at once.
offlineFilenameRegex (?i)\.(pcap|cap)$ Regexp to control which filenames are processed when using the -R option to capture.
ouiFile EMPTY The MAC address manufacturer lookup file. Download free version
overrideIpsFiles EMPTY (Since 5.0.0) A list of ini format files that have a [override-ips] section. These files will reload on change without restarting capture.
packetDropIpsFiles EMPTY (Since 5.0.0) A list of ini format files that have a [packet-drop-ips] section. These files will reload on change without restarting capture.
packetThreads 1 Number of threads to use to process packets AFTER the reader has received the packets. This also controls how many packet queues there are, since each thread has its own queue. Basically how much CPU to dedicate to parsing the packets. Increase this if you get errors about dropping packets or the packetQ overflowing. If using the simple writer, this also controls how many pcap files are open for writing. We recommend about 2 x Gbps. Making this value too large may cause issues with Arkime.
parseCookieValue false Parse HTTP request cookie values, cookie keys are always parsed.
parseDNSRecordAll false Parse a full DNS record (query, answer, authoritative, and additional) and store various DNS information (look up hostname, name server IPs, mail exchange server IPs, and so on) into multiple OpenSearch/Elasticsearch fields. Starting with 5.1.0 please use dnsOutputAnswers instead.
parseHTTPHeaderRequestAll false Parse ALL HTTP request headers not already parsed using the [headers-http-request] section
parseHTTPHeaderResponseAll false Parse ALL HTTP request headers not already parsed using the [headers-http-response] section
parseHTTPHeaderValueMaxLen 1024 (Since 3.2.1) Truncate length for http header values before adding to SPI data.
parseQSValue false Parse HTTP query string values, query string keys are always parsed.
parseSMB true Parse extra SMB traffic info
parseSMTP true Parse extra SMTP traffic info
parseSMTPHeaderAll false Parse ALL SMTP request headers not already parsed using the [headers-email] section
pcapBufferSize 100000 pcap library buffer size, see man pcap_set_buffer_size
pcapReadMethod libpcap Specify how packets are read from network cards:
  • libpcap = Use libpcap
  • pfring = Use pfring directly, requires rootPlugins=reader-pfring.so
  • daq = Use daq, requires rootPlugins=reader-daq.so
  • snf = Use Myricom snf, requires rootPlugins=reader-snf.so
  • tpacketv3 = Use linux tpacketv3 (afpacket) interface
pcapWriteMethod simple Specify how packets are written to disk:
  • simple = what you should probably use
  • simple-nodirect = use this with zfs/nfs
  • s3 = write packets into s3
  • null = don't write to disk at all
pcapWriteSize 262144 Buffer size when writing pcap files. Should be a multiple of the raid 5/xfs stripe size and multiple of 4096 if using direct/thread-direct pcapWriteMethod
plugins EMPTY Semicolon separated list of plugins to load and the order to load them in. Must include the trailing .so
readTruncatedPackets false Capture will try and process truncated packets the best it can. In general it is best to have full packet captures for Arkime to work well.
reqBodyOnlyUtf8 true Only store request bodies that are utf8
rirFile EMPTY Path of the RIR assignments file. Download
rootPlugins EMPTY Semicolon separated list of plugins to load as root and the order to load them in. Must include the trailing .so
rulesFiles EMPTY A semicolon separated list of files that contain Arkime rules. These rules match against fields set and can set other fields or meta data about the sessions. See RulesFormat for the format of the files. Since 1.5.0 rules files are auto reloaded, so no need to restart capture.
saveUnknownPackets EMPTY (Since 1.5.2) Save unknown ether, ip, or corrupt packets into separate files. The files will be created in the first entry of the pcapDir list and named starting with unknown.ether, unknown.ip, or corrupt depending on the packet issue. The files are NOT automatically deleted, so you will need to monitor and delete them. This variable takes a semicolon separated list of the following values (applied in order):
  • all = save all unknown ip and ether packets, but not corrupt packets
  • ip:all = save all unknown ip packets
  • ether:all = save all unknown ether packets
  • ip:N = save all unknown ip packets with type of N
  • -ip:N = don't save all unknown ip packets with type of N
  • ether:N = save all unknown ether packets with type of N
  • -ether:N = don't save all unknown ether packets with type of N
  • corrupt = save all corrupt packets (Since 1.5.3)
sctpTimeout 60 For SCTP sessions, Arkime writes a session record after this many seconds of inactivity since the last session save.
smtpIpHeaders EMPTY Semicolon separated list of SMTP Headers that have ips, need to have the terminating colon ':'
snapLen 16384 The maximum size of a packet Arkime will read off the interface. This can be changed to fix the "Arkime requires full packet captures" error. It is recommended that instead of changing this value, all the card "offload" features are turned off so that capture gets a picture of what's on the network instead of what the capture card has reassembled. For VMs, those features must be turned off on the physical interface and not the virtual interface. This setting can be used when changing the settings isn't possible or desired.
supportSha256 false Generate SHA256 hashes of content alongside the MD5 hashes.
tcpClosingTimeout 5 (Since 4.3.0) Delay before saving tcp sessions after close
tcpSaveTimeout 400 For TCP sessions, Arkime writes a session record after this many seconds since the last session save, no matter if active or inactive.
tcpTimeout 480 For TCP sessions, Arkime writes a session record after this many seconds of inactivity since the last session save.
trackESP false (Since 1.5.0) Add ESP sessions to Arkime, no decoding
udpTimeout 60 For UDP sessions, Arkime writes a session record after this many seconds of inactivity since the last session save.
yara EMPTY A single file to load Yara rules from. This file is auto reloaded on changes, so you don't need to restart capture.
yaraEveryPacket true When true yara is applied to every tcp/udp packet, otherwise only the first tcp/udp packet in a session is used. Yara is run after the classification Arkime step. Looking at every packet can be resource intensive.
yaraFastMode true Set the Yara Fast Mode flag.
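A small sketch of common capture settings from the table above; values are illustrative:

[default]
interface=eth0;eth1
bpf=not port 9200
# only save 5 packets of pcap for port 22 sessions
dontSaveBPFs=port 22:5
packetThreads=2
maxFileSizeG=12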

Packet Deduplication Settings

Arkime since version 2.7.1 has supported basic packet deduplication. The deduplication is done early in the pipeline before a packet queue is assigned. Currently only the iphdr + tcphdr or iphdr + udphdr is used to find duplicate packets, ignoring the TTL field. The deduplication will only look back a configured number of seconds.

Packet deduplication is an expensive CPU task, so you should monitor the "Dup Drops/s" column in capture stats to see if it is worthwhile.

Setting Default Description
dedupPackets 1048575 The approximate number of packets to keep information on per second. Set this number to the max number of packets per second you expect.
dedupSeconds 2 The number of seconds to keep packet information for.
enablePacketDedup true (>= 5.0) false (< 5.0) Enable packet deduplication
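A sketch enabling deduplication with a longer look-back window; values are illustrative:

[default]
enablePacketDedup=true
dedupSeconds=4
dedupPackets=2000000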

Viewer Settings

Setting Default Description
aes256Encryption forced true (since 4.0), true (2.4.0 - 4.0), false (before 2.4.0) (Since 2.2.0) Use better encryption when talking between viewer nodes
arkimeWebURL http[s]://hostName:[viewPort]/[webBasePath] (since 3.0) The website URL for Arkime. Used to create links to the Arkime web UI. Must end with a / or bad things will happen. You can include the "http(s)://" or exclude it. If excluded, http/https will be set by determining whether a keyFile and certFile exist.
businessDayEnd EMPTY If both businessDayStart and businessDayEnd are set, it displays a colored background on the sessions timeline graph during business hours. Values are in hours from midnight UTC. For example: 5pm EST would be 21.
businessDayStart EMPTY If both businessDayStart and businessDayEnd are set, it displays a colored background on the sessions timeline graph during business hours. Values are in hours from midnight UTC. For example: 9am EST would be 13.
businessDays 1,2,3,4,5 Displays a colored background on the sessions timeline graph on only these days. businessDayStart and businessDayEnd must be set for these to be of use. Values are comma separted list of days of the week as numbers. The week starts at Sunday = 0 and ends on Saturday = 6. For example: Monday through Friday would be 1,2,3,4,5
certFile EMPTY Public certificate to use for https, if not set then http will be used. keyFile must also be set.
cronQueries false Set on ONE viewer node per cluster, this viewer node will perform all the cron queries and hunts for the cluster. If you do NOT set this on a cluster then hunts and cron queries can be created but will never run.
(Since 3.0) If usersElasticsearch is set, this viewer node will sync shortcuts from the usersElasticsearch to the local OpenSearch/Elasticsearch.
(Since 4.3.1) Add "auto" setting that will self select a node to be the primary Arkime viewer node, when that node dies another node will become the primary Arkime viewer node. Do not use when another node is using "true'.
defaultTimeRange 1 Number (in hours) of the default sessions search time range. This is applied automatically to searches in Arkime if no date query param is defined in the URL.
elasticsearchScrollTimeout 300 How long to wait for OpenSearch/Elasticsearch to respond to queries
esMaxConcurrentShardRequests EMPTY Tells OpenSearch/Elasticsearch how many shards to search at the same time
footerTemplate Arkime v_version_ | arkime.com | _responseTime_ms (Since 5.0) A customizable footer template to display at the bottom of every Arkime page. Use _version_ to display the Arkime version and _responseTime_ to display the response time of the current page.
hstsHeader false From viewer, return a hsts header on responses
huntAdminLimit 10000000 (Since 1.6.0) Do not create hunts for admin users if more than this many sessions will be searched
huntLimit 1000000 (Since 1.6.0) Do not create hunts for non admin users if more than this many sessions will be searched
huntWarn 100000 (Since 1.6.0) Warn users when creating a hunt if more than this many sessions will be searched
iframe deny Used to set the X-Frame-Options header for putting Arkime in an iFrame. Options include "deny", "notallowed", or a URL to allow from.
keyFile EMPTY Private certificate to use for https, if not set then http will be used. certFile must also be set.
maxAggSize 10000 Max number of items in an aggregations to request for unique and export intersection
queryAllIndices false for viewer, true for multiviewer (Since 1.5.0) Always query all indices instead of trying to calculate which ones. Use this if searching across clusters that use different rotateIndex values.
queryExtraIndices EMPTY (Since 5.1) A comma separated list of indices to always query in addition to the normal Arkime indices. This is useful for querying external indices that still use the Arkime schema.
spiDataMaxIndices 1 Specify the max number of indices we calculate spidata for, or set to -1 to disable any max. OpenSearch/Elasticsearch MAY blow up if we allow the spiData to search too many indices.
spiViewCategoryOrder EMPTY (Since 5.0) A comma separated list of categories to be pushed to the top of the SPI View page and the order to show them in. The default is alphabetic (with general ALWAYS at the top). Example: "cert,ldap,quic,tls" would have general at the top, then cert, ldap, quic, tls, and then the rest of the categories in alphabetic order.
titleTemplate _cluster_ - _page_ _-view_ _-expression_ The template used for the browser page title. Available substitutions:
  • _cluster_ = OpenSearch/Elasticsearch cluster name
  • _userId_ = logged in User Id
  • _userName_ = logged in User Name
  • _page_ = internal page name
  • _expression_ = current search expression if set, otherwise blank
  • _-expression_ = " - " + current search expression if set, otherwise blank, prior spaces removed
  • _view_ = current view if set, otherwise blank
  • _-view_ = " - " + current view if set, otherwise blank, prior spaces removed
turnOffGraphDays 30 (Since 4.0.0) Automatically turn off the graph if the query is more than this many days. This is to protect against overloading OpenSearch/Elasticsearch.
uploadCommand EMPTY If set, uploads from the UI are allowed. An upload saves the file and runs capture on it, so it's better to just run capture instead of using upload if you can. An example setting would be /opt/arkime/bin/capture --copy -n {NODE} -r {TMPFILE} -c {CONFIG} {TAGS}.
The following templated values will be filled in for you:
  • NODE - The node name of the viewer that received the upload
  • TMPFILE - The tmp file name uploaded to
  • CONFIG - The config file path used to start viewer
  • TAGS - The tags set with the upload
  • INSECURE-ORIGINALNAME - DO NOT USE!!! The full original name of the uploaded file, using this will make it EASY to take over your host. DO NOT USE!!!
uploadFileSizeLimit EMPTY (Since 2.0) If set, the max size of pcap files that can be uploaded from the UI
uploadRoles arkimeUser (Since 5.0) A user must have this role to be able to upload, the uploadCommand also needs to be set to enable.
valueAutoComplete true for viewer, false for multiviewer Autocomplete field values in the search expression.
viewHost EMPTY The ip used to listen, usually localhost for just the localhost or 0.0.0.0 for all ips. See the host section of https://nodejs.org/docs/latest-v8.x/api/net.html#net_server_listen_port_host_backlog_callback
viewPort 8005 This is both the port that the viewer process listens on AND the port we try to connect to other viewer processes on when proxying. We recommend all viewers use the same port so it can be set in the [default] section. If viewers can't listen on the same ports then set the one in [default] to the common one, and the one in [NODE] to the special port.
viewUrl http[s]://hostname:[viewport] This shouldn't be needed anymore, but it allows you to override both the host and port for talking to a remote node. See How do viewers find each other. It is much better to use FQDNs on capture nodes, or start capture with the --host option.
viewerPlugins EMPTY Semicolon separated list of viewer plugins to load and the order to load them in. Must include the trailing .js
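A sketch of common viewer settings; the URL is illustrative, and note that arkimeWebURL must end with a /:

[default]
viewPort=8005
arkimeWebURL=https://arkime.example.com/
# "auto" self selects a primary viewer node for cron queries and hunts (since 4.3.1)
cronQueries=auto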

Advanced Settings

Setting Default Description
demoMode false Enables demo mode which disables most Settings, Users, Cron, and History UI/APIs. This is what https://demo.arkime.com uses. Demo mode applies to the entire application, and not just a single user. We recommend running 1 viewer in demo mode and another in normal mode that only an admin can access to change settings and such. You can still have multiple accounts in demo mode if desired.
disableUserPasswordUI true (Since 4.1.0) When true hide the Change Password form in the UI for users with Web Auth Header enabled. When false always show the Change Password form.
includes EMPTY Semicolon ';' separated list of files to load for config values. Files are loaded in order and can replace values set in this file or previous files. Setting includes is only supported in the top level config file.
isLocalViewRegExp EMPTY (Since 1.6.1) If the node matches the supplied regexp, then the node is considered local.
nodeClass EMPTY If a [node] section doesn't have a setting, but has a nodeClass value, the [$nodeClass] section will be checked BEFORE [default].
parsersDir /opt/arkime/parsers ; ./parsers Semicolon separated list of directories to load parsers from
pluginsDir EMPTY Semicolon separated list of directories to load plugins from

Capture Debug Settings

Setting Default Description
logESRequests false Write to stdout OpenSearch/Elasticsearch requests
logEveryXPackets 50000 Write to stdout info every X packets. Set to -1 to never log status.
logFileCreation false Write to stdout file creation information
logHTTPConnections true for online captures Log http connection attempts and information
logUnknownProtocols false Write to stdout unknown IP protocols

Arkime Default User Settings

(Since 5.0.2) Must be in the [user-setting-defaults] section in the viewer config. These options override the default Arkime user settings. They are used when a new user is created.

Setting Default Description
connDstField ip.dst:port The default connection page destination node field. This can be any database field name.
connSrcField source.ip The default connection page source node field. This can be any database field name.
detailFormat last The format of the session PCAP.
  • 'last' to use the last selected option
  • 'natural' to use natural format
  • 'ascii' to use ascii format
  • 'hex' to see hex format
manualQuery false The default for the manual query mode. In manual query mode, you must press enter or click search to execute a query.
  • true to enable manual query mode
  • false to disable manual query mode
numPackets last The number of packets to display in the session PCAP.
  • 'last' to use the last selected option
  • '50' to display 50 packets
  • '200' to display 200 packets
  • '500' to display 500 packets
  • '1000' to display 1000 packets
  • '2000' to display 2000 packets
showTimestamps last Display the timestamps in the session PCAP.
  • 'last' to use the last selected option
  • 'on' to display timestamps and packet information
  • 'off' to hide timestamps and packet information
sortColumn firstPacket The default sort column for the sessions page. This can be any database field name.
sortDirection desc The default sort direction for the sessions page.
  • 'asc' for ascending
  • 'desc' for descending
spiGraph node The default graph type for the SPI graph. This can be any database field name.
theme default-theme The default theme for the UI.
  • 'default-theme' for the default theme (dark or light based on the OS settings)
  • 'arkime-light-theme' for the light theme
  • 'arkime-dark-theme' for the dark theme
  • 'purp-theme' for the purple theme
  • 'blue-theme' for the blue theme
  • 'green-theme' for the green theme
  • 'cotton-candy-theme' for the cotton candy theme
  • 'dark-2-theme' for green on black theme
  • 'dark-3-theme' for dark blue theme
timelineDataFilters network.packets;network.bytes;totDataBytes A semicolon separated list of the visible data filter buttons for the sessions timeline. Can have 0-3 values. These can be any database field containing numerical data
timezone local The timezone to use for time values within Arkime.
  • 'local' to use the timezone of the browser
  • 'localtz' to use the timezone of the browser and display it in the UI
  • 'gmt' to use GMT/UTC
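A sketch overriding a few defaults for newly created users, using options from the table above:

[user-setting-defaults]
timezone=localtz
theme=arkime-dark-theme
sortDirection=asc
numPackets=200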

Readers/Writers

Capture supports several different methods for reading and writing packets during a live capture. These are selected with the pcapReadMethod and pcapWriteMethod config settings. We suggest using pcapReadMethod=tpacketv3 and the default pcapWriteMethod=simple for best performance.

For offline pcap processing see the command line options of capture and FAQ - How do I import existing PCAPs?.

Reader - AFPacket - Settings

AFPacket v3, also known as tpacketv3 or afpacket, is the preferred reader for Arkime and can be used on most 3.x or later kernels. Configure capture to use it by setting pcapReadMethod=tpacketv3 in your configuration file.

Setting Default Description
tpacketv3BlockSize 2097152 The block size in bytes used for reads from each interface. There are 120 blocks per interface.
tpacketv3ClusterId 8005 (Since 2.0) The cluster id for use with PACKET_FANOUT
tpacketv3NumThreads 2 The number of threads used to read packets from each interface. These threads take the packets from the AF packet interface and place them into the packet queues.

Example:

[default]
pcapReadMethod=tpacketv3
tpacketv3BlockSize=2097152
interface=eth0
tpacketv3NumThreads=2

Reader - DAQ - Settings

To use daq:

Setting Default Description
daqModule pcap The daq module to use
daqModuleDirs /usr/local/lib/daq Directories where the daq modules live
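A sketch of a daq reader configuration, combining the table above with the rootPlugins requirement noted under pcapReadMethod; values are illustrative:

[default]
pcapReadMethod=daq
rootPlugins=reader-daq.so
daqModule=pcap
daqModuleDirs=/usr/local/lib/daq
interface=eth0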

Reader - PF_RING - Settings

Setting Default Description
pfringClusterId 0 The pfring cluster id
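A sketch of a pfring reader configuration; the cluster id is illustrative:

[default]
pcapReadMethod=pfring
rootPlugins=reader-pfring.so
pfringClusterId=1
interface=eth0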

Reader - PCAP-Over-Ip - Settings

Since version 2.7.0, Arkime supports processing PCAP-Over-Ip requests. Two modes are supported: client and server. Client mode connects to a pcap-over-ip service; server mode listens for pcap-over-ip connections. Read more information about PCAP-Over-Ip.

Client mode is set by using pcapReadMethod=pcap-over-ip-client. The interface setting must have a list of hosts and optional ports to connect to.

Server mode is set by using pcapReadMethod=pcap-over-ip-server. The interface setting is ignored and should just be set to dummy.

Setting Default Description
pcapOverIpPort 57012 In server mode this is the port to listen on
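Two alternative sketches, one per mode; hosts and ports are illustrative:

# Client mode: connect out to pcap-over-ip services
[default]
pcapReadMethod=pcap-over-ip-client
interface=sensor1.example.com:57012;sensor2.example.com

# Server mode: listen for pcap-over-ip connections
[default]
pcapReadMethod=pcap-over-ip-server
interface=dummy
pcapOverIpPort=57012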

Reader - SNF - Settings

Setting Default Description
snfDataRingSize 0 The data ring size to use, 0 means use the SNF default
snfFlags -1 Controls process-sharing (1), port aggregation (2), and packet duplication (3). (Default value uses SNF_FLAGS environment variable)
snfNumProcs 1 (Since 2.0) The number of capture processes listening on the shared interface
snfNumRings 1 Number of rings per interface
snfProcNum 0 (Since 2.0) Which capture process this is if using a shared interface

Reader - TZSP - Settings

Since version 4.1.0, Arkime supports listening for TZSP forwarded packets. You can enable TZSP support by using pcapReadMethod=tzsp. The interface setting is ignored and should just be set to dummy.

Setting Default Description
tzspPort 37008 The port capture should listen on for TZSP traffic.
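A minimal TZSP sketch using the default port:

[default]
pcapReadMethod=tzsp
interface=dummy
tzspPort=37008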

Reader - Offline Scheme

Arkime 5 introduces the concept of reader schemes to process offline pcap files from multiple local and remote locations. Support for new schemes can be built into capture/viewer or added with a plugin. When a scheme is used to process an offline pcap file, the scheme used and information about the file are stored in the Arkime files index for viewer to use. The schemes run in a dedicated thread, which makes file processing faster than in previous versions. Switching to "scheme" mode vs libpcap-file mode is done either by using -r with any scheme OR the --scheme option. It is all or nothing; once in scheme mode, command line options not supported by schemes are ignored with no warning.

file:///fullpath
Process a file on the local disk
s3://bucket/key
Process a file that lives in s3
s3http(s)://host(:port)/bucket/key
Process a file using s3 protocol but use a certain host, useful with minio for example
http(s)://host(:port)/path
Process a file using http range requests

So for example ./capture -r http://hostname/foo.pcap -r s3://pcapbucket/file.pcap will process 2 files.

Writer - Simple - Settings

Setting Default Description
localPcapIndex false Experimental feature to save the index into PCAPs locally instead of in OpenSearch/Elasticsearch.
simpleCompression zstd (>= 5), gzip (4.2 - 5) (since 4.0.0) The type of seekable compression to use on pcap files. Zstd (don't use before 4.5.1) has better compression for less CPU than gzip. Valid values are: none, gzip, zstd (>= 4.5.1)
simpleCompressionBlockSize 32000 (since 4.0.0) How many bytes of uncompressed data to attempt to compress at once. The larger the value the better the compression, without increasing CPU on capture, but may slow down reads in viewer. Recommend using values between 30000 and 120000.
simpleEncoding EMPTY Arkime supports PCAP at Rest Encryption.
simpleFreeOutputBuffers 16 (Since 4.2.0) Max number of disk buffers Arkime keeps around to reduce the number of mmap/munmap.
simpleGzipBlockSize 0 (3.3.0 - 3.4.2) This enables GZip on the packet data. The block size is how much uncompressed data to attempt to compress into a block. The larger the value the better the compression, but the slower the read. Recommend using values between 30000 and 120000.
simpleGzipLevel 6 (since 3.3.0) When GZip is enabled using simpleCompression=gzip, this is the gzip compression level.
simpleKEKId EMPTY Which kek to use, see PCAP at Rest Encryption.
simpleMaxQ 2000 (since 2.0.1) The maximum number of disk queue entries per capture node. Once this limit is hit, packets will not be saved to disk until the disk queue falls below the threshold. The packets will still be processed, so all meta data is still extracted. Sessions that have packets not written to disk will be tagged with pcap-disk-overload. Increasing this value will use more memory when there is disk congestion.
simpleShortHeader false (since 3.3.0) When TRUE use a non standard pcap file format that uses small packet headers. Should only be set when monitoring an interface since it requires that the timestamp is in real time. Saves 10 bytes per packet!
simpleZstdLevel 3 (since 4.0.0, > 4.5.1 recommended) When zstd is enabled using simpleCompression=zstd, this is the compression level to use. Increasing or decreasing this value will affect the CPU usage of capture, with the default value being a good balance between CPU and compression. It is better to increase the simpleCompressionBlockSize setting before increasing simpleZstdLevel.
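A sketch of the simple writer with zstd compression; values are illustrative and zstd requires 4.5.1 or later:

[default]
pcapWriteMethod=simple
simpleCompression=zstd
simpleZstdLevel=3
simpleCompressionBlockSize=64000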

Writer - S3 - Settings

See S3 document for more information about using S3 for PCAP storage.
Setting Default Description
s3AccessKeyId (Since 2.1) Obtained from the EC2 IAM Role. S3 Access Key Id
s3Bucket EMPTY S3 Bucket to store pcap info
s3Compress false Should http requests be compressed before sending, it is usually better to enable pcap compression instead.
s3Compression zstd (>= 5.0) EMPTY (4.3 - 5.0) (Since 4.3.0) Select which type of compression is used when storing PCAP in S3. Valid values are EMPTY (same as none), none, gzip, zstd.
s3CompressionBlockSize 100000 (Since 4.3.0) The block size to use when compressing PCAP for S3 storage. Do not change to values larger than 100000 if still using both older and newer versions of Arkime.
s3CompressionLevel 0 (Since 4.3.0) Select which compression level to use with gzip or zstd. 0 will use the default level.
s3ExpireDays EMPTY Expiration days for S3 stored PCAP files. Expired PCAP files will be deleted.
s3GapPacketPos true (Since 5.0.0) encode packetPos using a simple gap encoding, this reduces storage in OpenSearch/Elasticsearch
s3Host EMPTY Override the default endpoint URL for the specified Bucket and Region. This is only used if you want to use a third party s3 server, like a MinIO or pithos instance.
s3MaxConns 20 Max connections to S3
s3MaxRequests 500 Max number of outstanding S3 requests
s3PathAccessStyle true if there is a period in the bucket name (Since 2.1) If true use s3.amazonaws.com/s3Bucket if false use s3Bucket.s3.amazonaws.com. For minio make sure you set to true.
s3Region us-east-1 S3 Region to send data to
s3SecretAccessKey (Since 2.1) Obtained from the EC2 IAM Role. S3 Secret Access Key
s3StorageClass STANDARD (Since 2.1) The S3 storage class to use; this has pricing implications
s3UseECSEnv false (Since 4.3.0) Use the ECS_CONTAINER_METADATA_URI_V4 and AWS_CONTAINER_CREDENTIALS_RELATIVE_URI environment variables to find the metadata service.
s3UseHttp false Use HTTP instead of HTTPS to connect to S3 host.
s3UseTokenForMetadata true If true then use IMDSv2 token to retrieve instance metadata and IAM credentials when running on EC2; if false then use IMDSv1
s3WriteGzip false (Since 2.1) Should the PCAP files be stored on S3 in gzipped form. Since 4.3.0 if s3Compression is set this setting is ignored. Since 5.0.0 this setting has been removed.
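A sketch of an S3 writer configuration with a hypothetical bucket; the commented lines are only needed for a third party S3 server such as MinIO:

[default]
pcapWriteMethod=s3
s3Region=us-east-1
s3Bucket=example-arkime-pcap
s3Compression=zstd
s3ExpireDays=30
#s3Host=minio.example.com
#s3PathAccessStyle=true
#s3UseHttp=true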

Special Sections

Besides the normal [default] and [node] sections, Capture/Viewer support several special configuration file sections. These sections are used to configure custom fields, views, and other more complex features. Be careful to not place normal config items in these special sections.

Special Sections - custom-fields

You can add custom fields to Arkime several ways, including the wise and tagger plugins. Since Arkime 1.5.2 the easiest way is to use a [custom-fields] section in the ini file. At capture startup it will check to make sure all those fields exist in the database. The format of the line is similar to that used in wise and tagger, except you use expression=definition. You will need to also create a [custom-views] section to display the data on the Sessions tab.

Setting Default Description
[key]= Unique name The unique name for this field, usually the field expression
count false Track number of items with a .cnt field auto created
db REQUIRED The DB field name
field [key] The field expression, overrides the key in the ini file
friendly [fieldname] A SHORT description, used in SPI View
group [Before first dot in field or general] Category for SPI view
help [fieldname] Help to display in about box or help page
kind REQUIRED
  • integer
  • ip
  • lotermfield - lowercased string
  • termfield - string
  • uptermfield - uppercased string
nolinked false (Since 2.1.0) Set to true for linked sessions to have independent values for this field
noutf8 false (Since 2.1.1) Set to true for the field to be 8 bit instead of utf8
viewerOnly false (Since 4.5.0) When true this field is added to the field definition database, but is NOT loaded into capture, which saves capture memory. Use this for session fields that are added by software other than capture/wise.

Example:

[custom-fields]
# Format is FieldExpr=text format
theexpression=db:theexpression

sample.md5=db:sample.md5;kind:lotermfield;friendly:Sample MD5;count:true;help:MD5 of the sample

Special Sections - custom-fields-remap

Starting with 5.0 it is now possible to remap results returned by wise or the tagger plugin to set different fields based on which field matched. For example, instead of only being able to set the asset field based on whether the ip.src or ip.dst field matched, it is now possible to set asset.src when ip.src matches or asset.dst when ip.dst matches. This is done by remapping the asset field.

Setting Default Description
[oldField]= Unique field expression The field that we want to remap
matchField=newField EMPTY When the field that matched is matchField, instead of setting oldField we set newField

Example:

[custom-fields-remap]
asset=ip.src=asset.src;ip.dst=asset.dst

[custom-fields]
asset.src=kind:lotermfield;count:true;friendly:Asset Src;db:assetName.src;help:Asset Name Src
asset.dst=kind:lotermfield;count:true;friendly:Asset Dst;db:assetName.dst;help:Asset Name Dst

Special Sections - custom-views

With Arkime "views" are how the SPI data is displayed in the Sessions tab. Usually there is a unique "view" for each category of data. You can add custom views to Arkime several ways, including the wise and tagger plugins. Since Arkime 1.5.2 the easiest way is to use a [custom-views] section in the ini file. At viewer startup, a new section will be created for each entry. The format of the line is name=definition. Viewer will sort all views by name when choosing the order to display in.

Setting Default Description
[key]= Unique name The unique name for this view
fields REQUIRED A comma separated list of field expressions to display. They will be displayed in the order listed.
require REQUIRED The db session field that must be set for the section to be shown.
title Defaults to name The title to give the section on the Sessions tab

Example:

[custom-views]
sample=title:Samples;require:sample;fields:sample.md5,sample.house

[custom-fields]
sample.md5=db:sample.md5;kind:lotermfield;friendly:Sample MD5;count:true;help:MD5 of the sample
sample.house=db:sample.house;kind:termfield;friendly:Sample House;count:true;help:House the sample lives in

Special Sections - headers-email

This section makes it easy to specify email headers to index. They will be searchable in the UI using email.[HEADERNAME]

Setting Default Description
[header name]= REQUIRED The header name
count false Create a second field email.[HEADERNAME].cnt with the number of items
type REQUIRED
  • string - index as a string
  • integer - index as an integer
  • ip - index as an IP
unique true Only index unique values

Example:

[headers-email]
x-priority=type:integer

Special Sections - headers-http-request

This section makes it easy to specify HTTP Request headers to index. They will be searchable in the UI using http.[HEADERNAME]

Setting Default Description
[header name]= REQUIRED The header name
count false Create a second field http.[HEADERNAME].cnt with the number of items
type REQUIRED
  • string - index as a string
  • integer - index as an integer
  • ip - index as an IP
unique true Only index unique values

Example:

[headers-http-request]
referer=type:string;count:true;unique:true

Special Sections - headers-http-response

This section makes it easy to specify HTTP Response headers to index. They will be searchable in the UI using http.[HEADERNAME]

Setting Default Description
[header name]= REQUIRED The header name
count false Create a second field http.[HEADERNAME].cnt with the number of items
type REQUIRED
  • string - index as a string
  • integer - index as an integer
  • ip - index as an IP
unique true Only index unique values

Example:

[headers-http-response]
location=type:string
server=type:string

Special Sections - override-ips

override-ips is a special section that overrides the MaxMind databases for the fields set; fields not set will still use MaxMind (for example, if you set tag but not country, MaxMind will still be used for the country). Spaces and capitalization are very important.

Since 5.0.0 you can now create secondary ini files that will auto reload on changes by setting the overrideIpsFiles setting.

Setting Default Description
[cidr]= Unique CIDR The CIDR the line applies to
[expression]: Since 5.0 any expression can be set, for example asset:computer will set the asset field to computer.
asn An ASN value to set for matches
country A 3 character country code to set for matches
rir A RIR value to set for matches
tag A single tag to set for matches

Example:

[override-ips]
10.1.0.0/16=tag:ny-office;country:USA;asn:AS0000 This is neat

Special Sections - packet-drop-ips

This section allows you to specify IPs or CIDRs to drop from being processed. This is different from a bpf filter since the packets will actually reach capture (and be counted) but won't be fully processed. However, if you have many ranges/IPs to drop it can be more efficient than bpfs. It is also possible to allow ranges inside of dropped ranges using the "allow" keyword. The order added doesn't matter; searching always finds the best match.

Since 5.0.0 you can now create secondary ini files that will auto reload on changes by setting the packetDropIpsFiles setting.

[packet-drop-ips]
10.0.0.0/8=drop
10.10.0.0/16=allow
10.10.10.0/24=drop
10.10.10.10=allow

Special Sections - remote-clusters

The remote-clusters (formerly moloch-clusters) section is used to describe the various Arkime clusters that are available to forward traffic to, either manually or through the periodic query functionality. Each line represents a single cluster, with the name just being any unique string.

Setting Default Description
[key]= Unique name The unique name for this cluster
name REQUIRED Friendly name to display in UI
serverSecret [serverSecret of current cluster] The serverSecret for the remote cluster, if it is different from current cluster
url REQUIRED The base url to use to contact cluster

Example:

[remote-clusters]
cluster1=url:https://arkime.example.com:8005;serverSecret:password;name:Cluster
cluster2=url:https://cluster2.example.com:8005;serverSecret:foo;name:Test Cluster

Special Sections - Multi Viewer Settings

The multi viewer is useful when you have multiple Arkime clusters that you want to search across. To use the multi viewer, an extra viewer process and a multies process must be started. The viewer process works like a normal viewer process, except instead of talking to an OpenSearch/Elasticsearch server, it talks to a multies server. The multies server proxies the queries to all the real OpenSearch/Elasticsearch servers. These two processes can share the same config file and node name section. The viewer part uses the SAME configuration values as above if you need to set anything.

Setting Default Description
certFile EMPTY Public certificate to use for https, if not set then http will be used. keyFile must also be set.
keyFile EMPTY Private certificate to use for https, if not set then http will be used. certFile must also be set.
multiES false This is the multiES node
multiESHost EMPTY Host interface that multies.js should listen on
multiESNodes EMPTY Semicolon separated list of OpenSearch/Elasticsearch nodes that MultiES should connect to. The first node listed will be considered the primary node and is used for users/views/queries.
Example: http://escluster1.example.com:9200,name:escluster1,prefix:PREFIX,elasticsearchAPIKey:testkey1;http://escluster2.example.com:9200,name:escluster2
Components (comma separated):
  • (required) the first part is the node http[s]://[user:password@]host:port. If using 3.1 or later it is suggested to use elasticsearchBasicAuth or elasticsearchAPIKey so the user/pass isn't logged.
  • (required since 2.7.0) name: element to name the OpenSearch/Elasticsearch node.
  • (required) prefix: element can follow each host if that cluster was set up with an OpenSearch/Elasticsearch prefix. For 3.x or later it should be prefix:arkime if you didn't set a prefix.
  • (optional since 3.1.0) elasticsearchAPIKey: element to set an Elasticsearch API Key per node. See elasticsearchAPIKey settings for more information on how to configure ES API Keys.
  • (optional since 3.1.0) elasticsearchBasicAuth: element to set an OpenSearch/Elasticsearch Basic Auth per node. See elasticsearchBasicAuth settings for more information on how to encode Basic Auth settings, you MUST use the base64 version with multies.
multiESPort 8200 Port that multies.js should listen on

A sample configuration for multi viewer (the elasticsearch variable points to the multies.js process):

[multi-viewer]
elasticsearch=127.0.0.1:8200
viewPort = 8009
#viewHost = localhost
multiES = true
multiESPort = 8200
multiESHost = localhost
multiESNodes = http://escluster1.example.com:9200,name:escluster1,prefix:PREFIX,elasticsearchAPIKey:testkey1;http://escluster2.example.com:9200,name:escluster2

You would then use this by starting both the multi viewer and multies. This is a sample for running manually (but you should set up startup scripts for real deployments):

cd /opt/arkime/viewer
/opt/arkime/bin/node multies.js -n multi-viewer
/opt/arkime/bin/node viewer.js -n multi-viewer

Special Sections - wise-types

WISE also lets you configure which fields are used for the standard wise types, and you can add your own wise types. You do this by creating a [wise-types] section in the capture configuration file AND listing the fields using `{type}={expression};{db:dbfield}...`. The type field must be less than 12 characters, and is the same type field you would use in the wise service.

Setting Default Description
domain db:http.host;db:dns.host
email db:email.src;db:email.dst
ip db:http.xffIp srcIp and dstIp are always looked up for ip
ja3 db:tls.ja3
md5 db:http.md5;db:email.md5
sha256 db:http.sha256;db:email.sha256 supportSha256 must be set to true in your config file
url db:http.uri
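
For illustration, a hedged sketch of a [wise-types] section; the extra db field and the custom type below are hypothetical placeholders, not Arkime defaults:

[wise-types]
# Keep the default xff lookup and add a hypothetical extra db field for ip lookups
ip=db:http.xffIp;db:example.ip
# Hypothetical custom type (must be under 12 characters) that wise sources can declare with type=phone
phone=db:example.phone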

Value Actions (previously right-click)

It is possible to configure right click actions on any SPI data fields. Right click actions can be added based on either the field name or the category of the data. They can be added either in the configuration file or by enabled WISE data sources.

Configuration File Sample:

[value-actions]
VTIP=url:https://www.virustotal.com/en/ip-address/%TEXT%/information/;name:Virus Total IP;category:ip
VTHOST=url:https://www.virustotal.com/en/domain/%HOST%/information/;name:Virus Total Host;category:host
VTURL=url:https://www.virustotal.com/latest-scan/%URL%;name:Virus Total URL;category:url
KBN=url:https://localhost:5601/app/discover#/?_g=(refreshInterval:(display:Off,pause:!f,value:0),time:(from:'%ISOSTART%',to:'%ISOSTOP%'))&_a=(columns:!(_source),index:'filebeat-*',interval:auto,query:(language:kuery,query:'network.community_id: "%URIEncodedText%"'));name:Discover communityId;fields:communityId

Each line in the [value-actions] section contains possible actions:

Piece Notes
[key]= The key must be unique and is also used as the right click menu name if the name field is missing
actionType Needs a url. Supported actionTypes:
  • fetch: Information will be fetched and displayed in the rightclick menu for 5 seconds
  • empty: Nothing is done on value action click
all (Since 4.3.2) Using all:true shows this item on all values.
func A javascript function body (fed into new Function()) that returns a right click action formatted as an object. This is the recommended way if you need to omit the url from an action, and it can be used to place plain text in the menu. The value is not calculated, so one of the keys in the returned action object must be "value" if you want to show a name:value pair in the menu (not required). Formatting substitutions are ignored.
url The url to open if selected. There are some basic URL substitutions defined below.
name The menu text to display
category If the field that is right clicked on has this category then display this menu item. All right click entries must have either a category or fields variable defined.
fields A comma separated list of field names. If the field that is right clicked on has one the field expressions in the list then display this menu item. All right click entries must have either a category or fields variable defined.
regex A regex to use on the right clicked text to extract a piece for the URL. The first matching group is substituted for %REGEX% in the url. If the regex doesn't match at all then the menu isn't displayed.
users A comma separated list of user names that can see the right click item. If not set then all users can see the right click item.
notUsers (Since 3.0) A comma separated list of user names that can NOT see the right click item. This setting is applied before the users setting above.

The possible URL Substitutions are:

URL Substitutions Notes
%TEXT% The text clicked on in raw form
%URIEncodedText% The text clicked on, URI encoded
%UCTEXT% The text clicked on, upper-cased
%URL% A URL extracted from the text clicked on
%HOST% A hostname extracted from the text clicked on. Sometimes the same as %TEXT%, sometimes a subset.
%REGEX% The first regex group match
%EXPRESSION% The search expression, URI encoded
%DATE% The search date/time range, e.g., startTime=1567447943&stopTime=1569607943 (UNIX seconds) for a custom date range or date=336 (hours) for times relative to now
%ISOSTART% The beginning of the search date/time range, ISO_8601 formatted
%ISOSTOP% The end of the search date/time range, ISO_8601 formatted
%FIELD% The field name as it would be referenced in an Arkime search expression
%DBFIELD% The field name as referenced in the underlying OpenSearch/Elasticsearch database
%NODE% The name of the capture node
%ID% The session ID

The categories that Arkime uses are:

Category Notes
asn An ASN field
country A three letter country code
host A domain or host name
ip An ip address
md5 An MD5 of a payload, such as an http body or smtp attachment
port The TCP/UDP port
rir The Regional Internet Registry
url A URL
user A user name or email address

Field Actions

It is possible to configure actions to add to any SPI data field label. Field actions can be added based on either the field name or the category of the data. They can be added either in the configuration file or by enabled WISE data sources.

Configuration File Sample:

[field-actions]
TEST=url:https://www.test.com/?query=%EXPRESSION%;name:Test Action %FIELDNAME%;category:ip;users:testuser

Each line in the [field-actions] section contains possible actions:

Piece Notes
[key]= The key must be unique and is also used as the field action menu name if the name field is missing
url The url to open if selected. There are some basic URL substitutions defined below.
name The menu text to display (can include these substitutions)
all (Since 4.3.2) Using all:true shows this item on all fields.
category If the field is in this category display this menu item. All field action entries must have either a category or fields variable defined. see categories
fields A comma separated list of field names. If the field is in the list then display this menu item. All field action entries must have either a category or fields variable defined.
users A comma separated list of user names that can see the field action item. If not set then all users can see the field action item.
notUsers A comma separated list of user names that can NOT see the field action item. This setting is applied before the users setting above.

The possible URL Substitutions are:

URL Substitutions Notes
%EXPRESSION% The search expression, URI encoded
%DATE% The search date/time range, e.g., startTime=1567447943&stopTime=1569607943 (UNIX seconds) for a custom date range or date=336 (hours) for times relative to now
%ISOSTART% The beginning of the search date/time range, ISO_8601 formatted
%ISOSTOP% The end of the search date/time range, ISO_8601 formatted
%FIELD% The field name as it would be referenced in an Arkime search expression
%DBFIELD% The field name as referenced in the underlying OpenSearch/Elasticsearch database

The possible Field Action Menu Name Substitutions are:

Name Substitutions Notes
%FIELDNAME% The friendly (readable) field name
%FIELD% The field name as it would be referenced in an Arkime search expression
%DBFIELD% The field name as referenced in the underlying OpenSearch/Elasticsearch database

PCAP at Rest Encryption

Arkime provides support for PCAP at rest encodings. Two forms of encodings are currently supported: "aes-256-ctr" and "xor-2048". Please note that xor-2048 is not actually secure and is only for testing.

The current implementation is based around openssl. The encoding that you wish to use is configured in the configuration file by setting the simpleEncoding variable. The simpleEncoding may be set either globally or per node.

Each file on disk is encoded by a unique data encryption key (dek). The dek is encrypted using a key encryption key (kek) when stored. The encrypted dek, the id of the kek, and the initialization vector (IV) are all stored per file in OpenSearch/Elasticsearch. Which kek is used when creating files is selected with the simpleKEKId variable. The simpleKEKId may be set either globally or per node.

The kek passwords that may be used should be placed in a [keks] section of the configuration file. There is one line for each kekid to kek mapping. An easy way to create kek passwords is openssl rand -base64 30. Remember, a kek is the password used to encrypt the dek and NOT the password used to encrypt the files. The dek is what is used to encrypt the files, and it is unique per file.

You MUST secure your configuration file.

You are not required to use the same keks on all nodes, however, you can if you wish. It is recommended that you rotate your keks occasionally (timing dependent on your risk tolerance) and create new keks to be used. Do NOT delete the old keks until all pcaps which have been encoded with those keks have been expired.

Currently it is not possible to re-encrypt a data encryption key, however, this should be possible in the future with a db.pl command.

Example:

[default]
pcapWriteMethod=simple
simpleEncoding=aes-256-ctr
simpleKEKId=kekid1

[keks]
kekid1=Randomkekpassword1
kekid2=Randomkekpassword2

Advantages:

Disadvantages:

Plugins

Arkime supports several different plugins that can be added to the capture process.

Plugins - CHAD

CHAD creates hashes for http and smtp headers.

Setting Default Description
chadHTTPIgnores X-IPINTELL;rpauserdata;rspauth;x-novinet;x-is-aol;x-lb-client-ip;x-lb-client-ssl;x-ssl-offload;dnt;X-CHAD;X-QS-CHAD;X-POST-CHAD;X-OREO-CHAD HTTP headers to ignore
chadHTTPItems default Headers that are calculated in chad value
chadSMTPIgnores x-freebsd-cvs-branch;x-beenthere;x-mailman-version;list-unsubscribe;list-subscribe;list-id;list-archive;list-post;list-help;x-return-path-hint;x-roving-id;x-lumos-senderid;x-roving-campaignid;x-roving-streamid;x-server-id;x-antiabuse;x-aol-ip;x-originalarrivaltime SMTP headers to ignore
chadSMTPItems default Headers that are calculated in chad value
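
For illustration, a hedged sketch that overrides the HTTP ignore list; the header names are illustrative and the value replaces, rather than extends, the default list:

chadHTTPIgnores=dnt;X-CHAD;X-Example-Internal-Header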

Plugins - Kafka

(Since 4.2.0) Arkime can send sessions to Kafka instead of directly to OpenSearch/Elasticsearch. Arkime will still need to use OpenSearch/Elasticsearch directly for all other data storage. You will need to create a Kafka consumer that processes the docs and inserts them into OpenSearch/Elasticsearch, using either the bulk header or the index field to determine which index to insert into. To enable, add kafka.so to the plugins= line in the config file.

(Since 5.0.0) It is possible to have full control of the librdkafka configuration by creating a new [kafka-config] section in the Arkime config file. Each item is an entry from the Global Configuration section and is applied AFTER the Arkime settings below.

Setting Default Description
kafkaBootstrapServers EMPTY Comma separated list of bootstrap servers to use
kafkaMsgFormat bulk How to send the SPI data:
  • bulk - raw bulk msg
  • bulk1 - bulk formatted, but just 1 doc
  • doc - just the doc, with an added "index" field containing the OpenSearch/Elasticsearch index to send to
kafkaSSL false Enable SSL
kafkaSSLCALocation EMPTY Path where the SSL CA is located
kafkaSSLCertificateLocation EMPTY Path where the SSL client certificate is located
kafkaSSLKeyLocation EMPTY Path where the SSL client key is located
kafkaSSLKeyPassword EMPTY Optional password for the client key
kafkaTopic arkime-json Topic to send the sessions to
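
A minimal sketch of enabling the Kafka plugin (broker hostnames are placeholders); the optional [kafka-config] entry shows passing a raw librdkafka property, as allowed since 5.0.0:

plugins=kafka.so
kafkaBootstrapServers=kafka1.example.com:9092,kafka2.example.com:9092
kafkaTopic=arkime-json
kafkaMsgFormat=doc

[kafka-config]
security.protocol=ssl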

Plugins - Lua

Arkime allows you to create simple lua scripts. See the Lua README for more information

Setting Default Description
luaFiles EMPTY The Lua Files to load
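
A minimal sketch (the script path is a placeholder; see the Lua README for enabling the plugin itself):

luaFiles=/opt/arkime/etc/example.lua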

Plugins - JA4+

Arkime supports JA4 and JA4+ algorithms starting from version 5 onwards.

JA4 algorithm

JA4, the TLS Client Fingerprinting portion, is built into the Arkime capture binary. After installing Arkime 5 or later, you will automatically see the tls.ja4 field show up.

JA4+ algorithms

JA4+ algorithms have licensing requirements. Please familiarize yourself with them before installing/enabling the JA4+ portions of Arkime.

To install/enable the JA4+ algorithms, you need to:

Setting Default Description
ja4Raw false Enable to generate the JA4 raw Arkime fields.

Plugins - Netflow

Arkime can generate netflow for all sessions it saves SPI data for. Add netflow.so to the plugins= line in the config file.

Setting Default Description
netflowDestinations EMPTY Semicolon ';' separated list of host:port destinations to send the netflow
netflowSNMPInput 0 What to fill in the input field
netflowSNMPOutput 0 What to fill in the output field
netflowVersion 5 Version of netflow to send: 1, 5, 7
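
A minimal sketch (the collector host and port are placeholders):

plugins=netflow.so
netflowDestinations=netflow-collector.example.com:2055
netflowVersion=5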

Plugins - ScrubSPI

Allows you to scrub data before sending it to OpenSearch/Elasticsearch. To use, add the plugin to the plugins= line and create a [scrubspi] section where each entry is a field with a regex replace statement.

[scrubspi]
http.uri=/github/foohub/
asn.dst=:FOO:BAR:

Plugins - Suricata

Arkime can enrich sessions with Suricata alerts. Suricata and Arkime must see the same traffic and share the same eve.json/alert.json for the plugin to work. Sessions that have been enriched will have several new fields, all starting with suricata, and will be displayed in a Suricata section of the standard Arkime session UI. Arkime matches sessions based on the 5 tuple from the alert.json or eve.json file, only using the items with event_type of alert. A very simple query to find all sessions that have Suricata data is suricata.signature == EXISTS!.

Note: there isn't a special Suricata UI inside Arkime, this is just adding new fields to Arkime sessions like wise or tagger do. The Suricata enrichment is done by capture, so capture must see the traffic.

Add suricata.so to the plugins= line in the config file.

Setting Default Description
suricataAlertFile REQUIRED The full path to either the alert.json or eve.json file, make sure the dropUser or dropGroup can open the file
suricataExpireMinutes 60 (Since 1.5.1) The number of minutes to keep Suricata alerts in memory before expiring them based on the Suricata alert timestamp. For example if a Suricata alert has a timestamp of 1am, the default is to keep looking for matching traffic until 2am (60 minutes). If reading offline pcap you'll want to increase this number to cover how old the pcap is.

Sample Config:

# Add suricata.so to your plugins line, or add a new plugins line
plugins=suricata.so

# suricataAlertFile should be the full path to your alert.json or eve.json file
suricataAlertFile=/nids/suricata/eve.json

Plugins - Tagger

See the Tagger page and the Tagger Format page

Plugins - TCP Health Check

The TCP Health Check plugin enables the capture module to listen on a specific TCP port and immediately close accepted connections. This is useful to keep load balancer health checks informed that the capture module is working properly. One specific use case is Health Checks for Target Groups in an AWS Network Load Balancer.

Add tcphealthcheck.so to the plugins= line in the config file.

Setting Default Description
tcpHealthCheckPort EMPTY Make the capture module listen on this TCP port for health checks
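
A minimal sketch (the port is a placeholder; use whatever port your load balancer probes):

plugins=tcphealthcheck.so
tcpHealthCheckPort=8010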

Plugins - WISE

The With Intelligence See Everything (WISE) plugin is used to communicate with the wiseService Arkime application. Learn more about the wiseService.

Each capture node needs to have the wise plugin enabled. You will need to make a few changes to the [default] section of the configuration file.

  1. Add wiseURL=http://WISEHOST:8081
  2. Enable the plugin by adding wise.so to the `plugins=` variable.

Each viewer node that an operator uses needs to have the wise plugin enabled. You will need to make a few changes to the [default] section of the configuration file.

  1. Add wiseURL=http://WISEHOST:8081
  2. Enable the plugin by adding wise.js to the viewerPlugins= variable.

Usually just setting wiseURL is enough.
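
Putting those pieces together, a minimal [default] sketch (WISEHOST is a placeholder; plugins= applies to capture and viewerPlugins= to viewer):

[default]
plugins=wise.so
viewerPlugins=wise.js
wiseURL=http://WISEHOST:8081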

Setting Default Description
wiseCacheSecs 600 Number of seconds to cache results before asking wiseService again
wiseExcludeDomains .in-addr.arpa;.ip6.arpa A semicolon separated list of domain suffixes to not send to wise
wiseHost 127.0.0.1 Host to connect to for wiseService, not used if wiseURL is set
wiseLogEvery 10000 Log wise stats every X wise requests
wiseMaxCache 100000 Max number of items to store in the wise cache that is local to each arkime-capture node
wiseMaxConns 10 Number of connections to wiseService, this is also the number of concurrent wise queries.
wiseMaxRequests 100 Number of outstanding requests to the wiseService
wisePort 8081 Port the wiseService is listening on, not used if wiseURL is set
wiseTcpTupleLookups false Should we send tcp tuple lookups to wise
wiseURL EMPTY (Since 1.5.0) The url to use to connect to wise
wiseUdpTupleLookups false Should we send udp tuple lookups to wise

WISE also lets you configure which fields are used for the standard wise types and you can add your own wise types. You do this by creating a [wise-types] section in the capture configuration file AND listing the fields using `{type}={expression};{db:dbfield}...`. The type field must be less then 12 characters, and is the same type field you would use in the wise service.

High Performance Settings

Sample config items for max performance. Most of the defaults are fine. Reading Arkime FAQ - Why am I dropping packets and https://github.com/pevma/SEPTun or https://github.com/pevma/SEPTun-Mark-II may be helpful.

# MOST IMPORTANT, use basic magicMode, libfile kills performance
magicMode=basic

# Disable compression (since 4.0)
simpleCompression=none

pcapReadMethod=tpacketv3
tpacketv3BlockSize=8388608

# Increase by 1 if still getting Input Drops
tpacketv3NumThreads=2

pcapWriteMethod=simple
pcapWriteSize=4194304

# Start with 5 packet threads, increase by 1 if getting thread drops.  Should be about 2 x Gbps that need to be captured
packetThreads=5

# Increase the size of ES messages and compress them for lower traffic rates
dbBulkSize=4000000
compressES=true
dbEsHealthCheck=false

# Approximate max streams that will be monitored
maxStreams=2000000

# Set to number of packets a second, if still overflowing try 400k
maxPacketsInQueue = 300000

# Uncomment to disable features you don't need
# parseQSValue=false
# parseCookieValue=false

The following rules can help greatly reduce the number of SPI sessions being written to disk:

---
version: 1
rules:
- name: "Dont write tls packets or check yara after 10 packets, still save SPI"
  when: "fieldSet"
  fields:
    protocols:
    - tls
  ops:
    _dontCheckYara: 1
    _maxPacketsToSave: 10

- name: "Dont save SPI sessions to ES with only 1 src packet"
  when: "beforeFinalSave"
  fields:
    packets.src: 1
    packets.dst: 0
    tcpflags.syn: 1
  ops:
    _dontSaveSPI: 1

- name: "Dont save SPI data for listed hostnames tracked by dst ip:port, use on cloud destinations"
  when: "fieldSet"
  fields:
    host.http:
    - ad.doubleclick.net
    protocols:
    - tls
  ops:
    _dontSaveSPI: 1
    _maxPacketsToSave: 1
    _dropByDst: 10

Lab Settings

Arkime by default is not configured to monitor low bandwidth networks. You'll want to update the configuration to have a better experience when testing in a lab or low bandwidth setting.

# Only use 1 packet thread so only 1 file is written at a time
packetThreads=1

# Disable compression since this will cause buffering and
# cause viewer to not show packets until buffer is written
simpleCompression=none

# Lower pcap buffer to smallest possible, although Arkime will write pageSize
# blocks if a timer fires.
pcapWriteSize=65536

# Lower the number of streams expected for lower memory usage
maxStreams=1500

# Decrease the size of writes to DB
dbBulkSize=100000
maxESConns=2

Cont3xt

Cont3xt, by default, utilizes configuration files using INI format. The configuration file is located at /opt/arkime/etc/cont3xt.ini, but its location can be altered using the -c command-line option. With the release of Arkime 5, support was introduced for configuration files in JSON and YAML formats. These must have json or yaml extensions, respectively. When utilizing JSON or YAML, arrays can be specified either natively or using separators like INI.

The configuration file can contain sections for Cont3xt itself and then sections for each integration. The sections for each integration are optional and can be used to set specific settings for each integration. Usually, integration keys will be set up by each user, but it is possible to have global keys.

General

These settings live in the [cont3xt] section.
Setting Default Description
cachePolicy shared Can be shared or user; if set to user then the cache is per user. Can be overridden per integration
cacheTimeout 1h How long to cache results for integrations, can be overridden per integration
cont3xtHost EMPTY What hostname to bind to
expireHistoryDays 180 How long to store the cont3xt history
geoLite2ASN /usr/share/GeoIP/GeoLite2-ASN.mmdb;/opt/arkime/etc/GeoLite2-ASN.mmdb A Maxmind account is required to use this feature. We recommend installing and setting up the geoipupdate program included with most Linux releases.
Semicolon ';' separated list of MaxMind GeoIP ASN files. The first file found will be used. If no files are found a warning will be issued. To disable the warning set to a blank string.
Download free version
geoLite2Country /usr/share/GeoIP/GeoLite2-Country.mmdb;/opt/arkime/etc/GeoLite2-Country.mmdb A Maxmind account is required to use this feature. We recommend installing and setting up the geoipupdate program included with most Linux releases.
Semicolon ';' separated list of maxmind geoip country files. The first file found will be used. If no files are found a warning will be issued. To disable warning set to a blank string.
Download free version
hstsHeader false Set the HSTS header on requests to cont3xt
port 3218 The port that the cont3xt service listens on
userAgent cont3xt The http user-agent header to use when talking to remote services; can be overridden per integration
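
For illustration, a minimal [cont3xt] sketch (the values shown are placeholders, not recommendations):

[cont3xt]
port=3218
expireHistoryDays=180
cachePolicy=shared
geoLite2Country=/usr/share/GeoIP/GeoLite2-Country.mmdb
geoLite2ASN=/usr/share/GeoIP/GeoLite2-ASN.mmdb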

DB

These also live in the [cont3xt] section.
Setting Default Description
dbUrl EMPTY Use this setting to override the elasticsearch setting, useful if running cont3xt standalone with no OpenSearch/Elasticsearch. Either lmdb:DIRECTORY or an OpenSearch/Elasticsearch URL for storing cont3xt data.
elasticsearch http://localhost:9200 The OpenSearch/Elasticsearch URL to use; dbUrl overrides this setting. If OpenSearch/Elasticsearch requires a user/password those can be placed in the url also, http://user:pass@hostname:port, or use elasticsearchBasicAuth
elasticsearchAPIKey EMPTY Use an Elasticsearch API key for access without requiring basic authentication. Once you have created an API Key, you must base64 encode the id and api_key joined by a colon. echo -n "id:api_key" | base64 is one way to generate the base64 key. Notice the -n, you have to make sure you don't encode an extra newline.
elasticsearchBasicAuth EMPTY Use basic auth with OpenSearch/Elasticsearch. The value can either be the plain text "user:pass" or the base64 encoded version. One way to generate base64 echo -n "username:password" | base64 version. Notice the -n, you have to make sure you don't encode an extra newline. All Arkime versions also support http://user:pass@hostname:port in the elasticsearch setting.
elasticsearchTimeout 300 Approximate timeout for most requests to OpenSearch/Elasticsearch. OpenSearch/Elasticsearch will automatically cancel any request after this expires.
esClientCert EMPTY The public key file to use for tls client authentication with OpenSearch/Elasticsearch. Must also set esClientKey.
esClientKey EMPTY The private key file to use for tls client authentication with OpenSearch/Elasticsearch. Must also set esClientCert.
esClientKeyPass EMPTY The password for the esClientKey setting.
usersUrl EMPTY Use this setting to override the usersElasticsearch setting, useful if running cont3xt standalone with no OpenSearch/Elasticsearch. Can be lmdb:DIRECTORY, redis:URL or an OpenSearch/Elasticsearch URL.
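
A hedged sketch of the database settings, also in the [cont3xt] section (the lmdb paths are placeholders for a standalone install without OpenSearch/Elasticsearch):

[cont3xt]
elasticsearch=http://localhost:9200
# Or run standalone:
#dbUrl=lmdb:/opt/arkime/cont3xt/lmdb
#usersUrl=lmdb:/opt/arkime/cont3xt/users-lmdb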

Caching

Cont3xt can cache integration queries to speed up results and to lower the load on the integration services. These settings live in the [cache] section
Setting Default Description
cacheSize 100000 Maximum number of results to cache in memory, used for all but lmdb
cacheTimeout 24 hours The MAX time in seconds to cache any item, used by redis/memcached
lmdbDir EMPTY Path where to create the lmdb cache directory
memcachedURL EMPTY Format is memcached://[user:pass@]server1[:11211],[user:pass@]server2[:11211],...
redisURL EMPTY Format is redis://[:password@]host:port/db-number, redis-sentinel://[[sentinelPassword]:[password]@]host[:port]/redis-name/db-number, or redis-cluster://[:password@]host:port/db-number
type memory memory, redis, memcached, lmdb are supported. lmdb is recommended for a local disk cache and redis for a shared cache.
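
A sketch of a shared Redis cache (the URL is a placeholder):

[cache]
type=redis
redisURL=redis://:password@redis.example.com:6379/1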

Common settings per integration

Every integration can have its own settings with keys, passwords and other things. Usually keys and passwords should be set per user in the UI, which will override these. These live in each [INTEGRATION-NAME] section
Setting Default Description
cachePolicy cont3xt.cachePolicy Can be shared or user, if set to user then the cache is per user
cacheTimeout cont3xt.cacheTimeout How long to cache results for this integration
disabled false If set to true users can NOT use this integration
userAgent [cont3xt].userAgent (Since 5.0) The http userAgent to use for any requests this integration makes
viewRoles EMPTY (Since 5.0) List of roles that the user must have to use the integration. Use this if some integrations should be limited to certain users.
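
A hedged sketch of per-integration overrides; [INTEGRATION-NAME] stands for whichever integration section you are configuring, and the role name and timeout value are placeholders:

[INTEGRATION-NAME]
cachePolicy=user
cacheTimeout=2h
viewRoles=analystRole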

Arkime Integration

Since 5.0 Cont3xt can query Arkime for results. Create an [arkime:NAME] section where NAME is a unique name for all integrations
Setting Default Description
arkimeUrl http://localhost:8005 The url to the Arkime UI
elasticsearch http://localhost:9200 The OpenSearch/Elasticsearch url, see [settings](settings#elasticsearch)
elasticsearchAPIKey EMPTY The Elasticsearch API key, see [settings](settings#elasticsearchAPIKey)
elasticsearchBasicAuth EMPTY The OpenSearch/Elasticsearch basic auth information, see [settings](settings#elasticsearchBasicAuth)
icon icon for integration in UI Path to icon to use in UI
insecure EMPTY If true the connection to OpenSearch/Elasticsearch will disable certificate verification for https calls
maxResults 20 The maximum number of matching Arkime sessions to return
name section name The friendly name to show the user in the UI
order 50000 The sort order for the integration
prefix arkime The prefix used for the OpenSearch/Elasticsearch indices, see [settings](settings#prefix)
searchDays -1 == ALL The number of past days to search Arkime
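
A sketch (hostnames are placeholders):

[arkime:mycluster]
arkimeUrl=https://arkime.example.com:8005
elasticsearch=https://escluster.example.com:9200
prefix=arkime
searchDays=30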

CSV Integration

Since 5.0 Cont3xt can query CSV files or urls for results. Create a [csv:NAME] section where NAME is a unique name for all integrations
Setting Default Description
icon icon for integration in UI Path to icon to use in UI
itypes REQUIRED Comma separated list of itypes this integration supports
keyColumn REQUIRED Which column, by name, contains the value to lookup against.
name section name The friendly name to show the user in the UI
order 50000 The sort order for the integration
reload EMPTY How often in minutes to reload the CSV file. For file urls cont3xt will monitor the files for changes automatically.
url REQUIRED Where to find the CSV file. It can be file://PATH, http(s)://PATH, or redis://[:pass@]redishost[:redisport]/redisDbNum/key. The URL must return the entire CSV document.
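
A sketch (the file path and column name are placeholders):

[csv:badips]
url=file:///opt/arkime/etc/badips.csv
itypes=ip
keyColumn=ip
reload=60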

OpenSearch/Elasticsearch Integration

Since 5.0 Cont3xt can query OpenSearch/Elasticsearch for results. Create a [opensearch:NAME] or [elasticsearch:NAME] section where NAME is a unique name for all integrations
Setting Default Description
apiKey EMPTY The Elasticsearch API Key to use
basicAuth EMPTY The OpenSearch/Elasticsearch user:password to use, it can be base64 encoded
icon OpenSearch/Elasticsearch icon Path to icon to use in UI
includeId false Include the OpenSearch/Elasticsearch document id in result
includeIndex false Include the OpenSearch/Elasticsearch document index in result
index REQUIRED Which index to search
insecure EMPTY If true the connection to OpenSearch/Elasticsearch will disable certificate verification for https calls
itypes REQUIRED Comma separated list of itypes this integration supports
method search How to do the lookup, can be either get or search
name section name The friendly name to show the user in the UI
order 50000 The sort order for the integration
queryField REQUIRED if search Which field in the data to search against
timestampField EMPTY The document field that has a timestamp used to sort results by, if not set results will not be sorted.
url REQUIRED The OpenSearch/Elasticsearch URL
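
A sketch (the URL, index, and field names are placeholders):

[elasticsearch:mylogs]
url=https://escluster.example.com:9200
basicAuth=username:password
index=logs-*
itypes=ip
queryField=source.ip
timestampField=@timestamp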

JSON Integration

Since 5.0 Cont3xt can query JSON files or urls for results. Create a [json:NAME] section where NAME is a unique name for all integrations
Setting Default Description
arrayPath EMPTY The path in the JSON document to find the array of values; if not set it is assumed the JSON document is an array at the top level.
icon icon for integration in UI Path to icon to use in UI
itypes REQUIRED Comma separated list of itypes this integration supports
keyPath REQUIRED The path inside each item of the array that contains the key that lookups should be performed on.
name section name The friendly name to show the user in the UI
order 50000 The sort order for the integration
reload EMPTY How often in minutes to reload the JSON file. For file urls cont3xt will monitor the files for changes automatically.
url REQUIRED Where to find the JSON file. It can be file://PATH, http(s)://PATH, or redis://[:pass@]redishost[:redisport]/redisDbNum/key. The URL must return the entire JSON document.
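
A sketch (the URL and paths are placeholders):

[json:assets]
url=https://intel.example.com/assets.json
itypes=ip
arrayPath=items
keyPath=ip
reload=60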

Redis Integration

Since 5.0 Cont3xt can query Redis for results. Create a [redis:NAME] section where NAME is a unique name for all integrations
Setting Default Description
icon icon for integration in UI Path to icon to use in UI
itypes REQUIRED Comma separated list of itypes this integration supports
keyTemplate %key% A template used to form the key to lookup inside redis. Two replacements are supported, %key% and %type%.
name section name The friendly name to show the user in the UI
order 50000 The sort order for the integration
redisMethod get Which redis method is used to lookup the key
url REQUIRED redis://[:pass@]redishost[:redisport]/redisDbNum.
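
A sketch (the URL and key template are placeholders):

[redis:localintel]
url=redis://localhost:6379/0
itypes=ip
keyTemplate=cont3xt-%type%-%key%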

Parliament

Parliament contains a grouped list of your Arkime clusters with links, Elasticsearch/OpenSearch health, and issues for each. You can use Parliament as a landing page for all of your Arkime clusters and as a status page to monitor the health of your clusters. See Parliament for more information.

Parliament Settings

(Since 5.0.0) Must be in the [parliament] section. This can be in its own file or in the viewer config. This config must exist and point to your Parliament JSON file using the file setting. If you were passing port/certFile/keyFile as command line arguments when starting Parliament, you must also include those in this config. It is recommended that you configure the Auth section in the Parliament UI on the Settings page before upgrading. The upgrade will update the config with your auth settings. Otherwise you will need to input them manually, as basic password auth has been disabled for Parliament.

Setting Default Description
file REQUIRED The location of your Parliament json file.
parliamentHost EMPTY The ip used to listen, usually localhost for just the localhost or 0.0.0.0 for all ips. See the host section of https://nodejs.org/docs/latest-v8.x/api/net.html#net_server_listen_port_host_backlog_callback
port 8008 The port that the parliament process listens on.

Sample configuration:

[parliament]
# Where to store issues
file=../etc/parliament.dev.json

# Port to listen on
#port=8008

### Parliament supports authentication like viewer/cont3xt, make sure to use the same settings across all tools
#usersElasticsearch=http://localhost:9200
#usersPrefix=arkime
#authMode=digest
#passwordSecret=password
#httpRealm=Moloch

WISE Service

WISE Service, by default, utilizes configuration files using INI format. The configuration file is located at /opt/arkime/etc/wiseService.ini, but its location can be altered using the -c command-line option. With the release of Arkime 5, support was introduced for configuration files in JSON and YAML formats. These must have json or yaml extensions, respectively. When utilizing JSON or YAML, arrays can be specified either natively or using separators like INI.

The configuration file can contain sections for wiseService itself and then sections for each WISE source. WISE will not try to query sources until a section is created for them.

General

General settings for WISE, located in a [wiseService] section.

Setting Default Description
port 8081 Port that wiseService listens on
sourcePath /opt/arkime/wiseService Directory to look for wise source files
wiseHost EMPTY Host that wiseService listens on
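
A minimal sketch of the [wiseService] section:

  [wiseService]
  port=8081
  wiseHost=127.0.0.1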

Caching

The wiseService caches all results returned by external sources. An external source is something like OpenDNS or Reverse DNS, where it is impossible to load the entire data set into memory and WISE needs to make an external query to obtain results. All caching is done in memory, however Redis can be used for a larger shared cache.
Create a [cache] section

Setting Default Description
type memory memory or redis are supported
url empty For cache engines this is the url to connect to. For redis the format is [redis:]//[[user][:password@]]host:port[/db-number]
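
A sketch of a Redis-backed cache (the URL is a placeholder):

  [cache]
  type=redis
  url=redis://localhost:6379/1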

Common Source Settings

All sources support some common settings, such as excluding IPs, Domains and Email addresses from lookups. It is also possible to exclude across all sources by placing the exclude config in the [wiseService] section.
Setting Default Description
cacheAgeMin 60 Number of minutes items in the cache for this source are valid for. Ignored for sources that use internal data, such as file sources.
excludeDomains EMPTY Semicolon separated list of modified glob patterns to exclude in lookups
excludeEmails EMPTY Semicolon separated list of modified glob patterns to exclude in lookups
excludeIPs EMPTY Semicolon separated list of IPs or CIDRs to exclude in lookups
fields EMPTY A "\n" separated list of fields that this source will add. Some wise sources automatically set for you. See [Tagger Format](taggerformat) for more information on the parts of a field entry.
onlyIPs EMPTY If set, only ips that match the semicolon separated list of IPs or CIDRs will be looked up
view EMPTY The view to show in session detail when opening up a session with unique fields. The value for view can either be written in simplified format or in more powerful jade format. For the jade format see Tagger Format for more information except everything has to be on one line, so replace newlines with \n. Simple format looks like require:[toplevel db name];title:[title string];fields:[field1],[field2],[fieldN]
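
A sketch of exclusions applied to every source by placing them in the [wiseService] section (the ranges and patterns are placeholders):

  [wiseService]
  excludeIPs=10.0.0.0/8;192.168.0.0/16
  excludeDomains=*.example.internal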

Glob Rules:

Alien Vault

The Alien Vault data source currently uses the downloadable database that is updated often. Requires that access be purchased and configured.
Create a [alienvault] section to configure

Setting Default Description
key REQUIRED The API key

Emerging Threats Pro

The Emerging Threats Pro data source currently uses the downloadable database that is updated often. Requires that access be purchased and configured.
Create a [emergingthreats] section to configure

Setting Default Description
key REQUIRED The API key

OpenDNS Umbrella

The OpenDNS source currently uses the bulk query api and does live queries. Requires that access be purchased and configured.
Create a [opendns] section to configure

Setting Default Description
cacheSize 200000 Maximum number of results to cache
key REQUIRED The API key

ThreatQ

The ThreatQ export interval time should be configured depending on process requirements and indicator volume.
Create a [threatq] section to configure

Setting Default Description
host REQUIRED Server hostname location
key REQUIRED The API key

ThreatStream

WISE can integrate with Anomali ThreatStream. If doing a large number of queries, one of the download methods is recommended. Requires that access be purchased and configured.
Create a [threatstream] section to configure.

Setting Default Description
dbFile ts.db path to the ts.db file
key REQUIRED The API key
mode zip
  • api - use the API for each query
  • zip - download the zip file daily
  • sqlite3 - use the database downloaded to machine
  • sqlite3-copy - use a copy of database downloaded to machine
user REQUIRED The API user
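
A sketch of a [threatstream] section (credentials are placeholders):

  [threatstream]
  user=THEUSER
  key=THEKEY
  mode=zip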

Splunk

The Splunk wise service can run in two different modes. It can query Splunk for every value, or it can query Splunk periodically for a table of values. Many Splunk operators prefer the periodic query since they can scale for it.
Create a [splunk:UNIQUENAME] section to configure

Setting Default Description
host REQUIRED The Splunk hostname
keyColumn REQUIRED The column to use from the returned data to use as the key
periodic false Should we do periodic queries or individual queries
port REQUIRED The Splunk port
query REQUIRED The query to run against Splunk. For non periodic queries the string %%SEARCHTERM%% will be replaced with the key.
type REQUIRED The type of data to look up (any wise type)
version 5 The Splunk api version to use

Example config that will query Splunk for all the vpn_ip to user name mappings during the last 24 hours every 60 seconds. It will then set the user field for any ip that matches.

  [splunk:users]
  type = ip
  format = json
  host = splunk.example.com
  port=5500
  username=theuser
  password=thepassword
  periodic=60
  query=search index="THEINDEX" sourcetype="vpn" assigned earliest=-24h | rex "User <(?<user>[^>]+)>.*IPv4 Address <(?<vpn_ip>[^>]+)>" | dedup vpn_ip | table user, vpn_ip
  keyColumn=vpn_ip
  fields=field:user;shortcut:user

Elasticsearch

The Elasticsearch wise service can query OpenSearch or Elasticsearch for fields to set.
Create a [elasticsearch:UNIQUENAME] section to configure

Setting Default Description
elasticsearch REQUIRED OpenSearch/Elasticsearch base url
esIndex REQUIRED The index pattern to look at
esMaxTimeMS 1 hour The maximum age (in ms) of the timestamp field; documents older than this are ignored
esResultField REQUIRED Field that is required to be in the result
esTimestampField REQUIRED The field to use in queries that has the timestamp in ms.
type REQUIRED The type of data in the file, such as ip,domain,md5,ja3,email, or something defined in `[wise-types]`

Example config that will query OpenSearch/Elasticsearch for an ip that is in the 10.172/16 space, in the index TheIndex-*, only looking at records that have a @timestamp field newer than 86400000ms. It looks at the `cef_ext.src` field and only at records that have a cef_ext.suser field set. Once it has a result, it sets the user field in Arkime to whatever the `cef_ext.suser` field is in the document.

  type=ip
  onlyIPs=10.172.0.0/16
  elasticsearch=http://ELKCLUSTERHOST1:9200,http://ELKCLUSTERHOST2:9200
  esIndex=TheIndex-*
  esTimestampField=@timestamp
  esQueryField=cef_ext.src
  esMaxTimeMS=86400000
  esResultField=cef_ext.suser
  fields=field:user;shortcut:cef_ext.suser

File

The wiseService can monitor multiple files. Each file needs to have its own section, with the section name starting with `file:`. The wiseService automatically notices if the file changes and reloads it.
Create a [file:UNIQUENAME] section to configure

Setting Default Description
column 0 For csv formatted files, which column is the data
file REQUIRED The file to load
format csv csv,[Tagger Format](taggerformat),json,jsonl - The format of data file
keyColumn 0 For json formatted files, which json field is the key
tags REQUIRED Comma separated list of tags to set for matches
type REQUIRED The type of data in the file, such as ip,domain,md5,ja3,email, or something defined in `[wise-types]`

CSV Example

Config File

  [file:ipcsv]
  file=./ip.wise.csv
  tags=ipwisecsv
  type=ip
  column=1
  format=csv
  #Asset field already exist, use field 0 for value. extra field is new, use field 2 for value
  fields=field:asset;shortcut:0\nfield:extra;kind:lotermfield;count:true;friendly:extra;db:extra;help:Help for Extra;shortcut:2

The CSV File

  blah1,10.0.0.3
  blah2,10.0.0.2,foo

Tagger Example

Config File

  [file:iptagger]
  file=./ip.wise.tagger
  tags=ipwisetagger
  type=ip
  format=tagger

The Tagger File

  #field:extra;kind:lotermfield;count:true;friendly:extra;db:extra;help:Help for Extra
  10.0.0.3;asset=blah1
  10.0.0.2;asset=blah2;extra=foo

JSON Example

Config File

  [file:ipcsv]
  file=./ip.wise.json
  tags=ipwisejson
  type=ip
  keyColumn=theip
  format=json
  #Asset field already exist, use field asset for value. extra field is new, use field extra for value
  fields=field:asset;shortcut:asset\nfield:extra;kind:lotermfield;count:true;friendly:extra;db:extra;help:Help for Extra;shortcut:extra\n

The JSON File

  [{"asset": "blah", "theip": "10.0.0.3"},
   {"asset": "blah2", "theip": "10.0.0.2", "extra": "foo"}]

**Note:** you use shortcut to match between fields in the JSON dictionary and the properties in OpenSearch/Elasticsearch.

JSONL Example

Added in 5.0, WISE now supports jsonl files with one full json object per line.

Config File

  [file:ipcsv]
  file=./ip.wise.jsonl
  tags=ipwisejsonl
  type=ip
  keyColumn=theip
  format=jsonl
  #Asset field already exist, use field asset for value. extra field is new, use field extra for value
  fields=field:asset;shortcut:asset\nfield:extra;kind:lotermfield;count:true;friendly:extra;db:extra;help:Help for Extra;shortcut:extra\n

The JSON File

  {"asset": "blah", "theip": "10.0.0.3"}
  {"asset": "blah2", "theip": "10.0.0.2", "extra": "foo"}

**Note:** you use shortcut to match between fields in the JSONL dictionary and the properties in OpenSearch/Elasticsearch.

Subnets Example

More complex example of the above where you want to create a new section

  [file:subnets]
  type=ip
  format=json
  file=subnets.json
  keyColumn=ip
  fields=field:subnets.description;kind:termfield;count:true;friendly:Description;db:subnets.description;help:Description;shortcut:description\nfield:subnets.securityzone;kind:termfield;count:true;friendly:Security Zone;db:subnets.securityzone;help:Security Zone;shortcut:securityZone\nfield:subnets.vlan;kind:integer;count:true;friendly:Vlan;db:subnets.vlan;help:Vlan;shortcut:vlan\nfield:subnets.site;kind:termfield;count:true;friendly:Site;db:subnets.site;help:Site;shortcut:site
  # Jade view method
  view=if (session.subnets)\n    +arrayList(session.subnets, 'description', 'Description', 'subnets.description')\n    +arrayList(session.subnets, 'label', 'Label', 'subnets.label')\n    +arrayList(session.subnets, 'securityzone', 'Security Zone', 'subnets.securityzone')\n    +arrayList(session.subnets, 'vlan', 'Vlan', 'subnets.vlan')\n    +arrayList(session.subnets, 'site', 'Site', 'subnets.site')
  # Simple view method
  #view=require:subnets;title:Subnets;fields:subnets.description,subnets.label,subnets:securityzone,subnets.vlan,subnets.site

The JSON File

  [{"description": "the description", "label": "interesting label", "securityzone": "hot", "vlan": 123, "site": "secret",  "ip": "10.0.0.3"},
   {"description": "the description2", "label": "interesting label2", "securityzone": "cold", "vlan": 123, "site": "secret",  "ip": "10.0.0.2"}]

Redis

Redis can be used as a wise data source. If using redis as a wise cache you'll probably want to use a different redis "database" by specifying the database number in the url. Example: `url=redis://localhost/1` Each redis source can only handle one type of data, although multiple redis sources can be configured and they can use the same redis database.
Create a [redis:UNIQUENAME] section to configure.

Setting Default Description
column 0 For csv formatted files, which column is the data
format csv csv,[Tagger Format](taggerformat),json,jsonl - The format of data file
tags REQUIRED Comma separated list of tags to set for matches
template %key% The template when forming the key name. %key% = the key being looked up, %type% = the type being looked up.
type REQUIRED The type of data in the file, such as ip,domain,md5,ja3,email, or something defined in `[wise-types]`
url REQUIRED The format is `[redis:]//[[user][:password@]]host:port[/db-number]`
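
A hedged sketch of a redis source whose values are stored in tagger format (the URL, tag, and key template are placeholders):

  [redis:ipintel]
  url=redis://localhost:6379/1
  type=ip
  format=tagger
  tags=redisip
  template=%type%-%key%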

ReverseDNS

For IPs that are included by the ips setting, do a reverse lookup and place everything before the first dot in the field specified.
Create a [reversedns] section to configure

Setting Default Description
cacheSize 200000 Maximum number of results to cache
field REQUIRED The field to set with the hostname
ips REQUIRED Semicolon separated list of IPs or CIDRs to look up. IPs that don't match this list will NOT be reverse looked up.
servers EMPTY Since 1.6.1, if set the reversedns source will use the semicolon separated list of IP addresses as the DNS servers for the reverse lookups.
stripDomains EMPTY If EMPTY then all domains are stripped after the FIRST period. When set ONLY domains that match the semicolon separated list of domain names are modified, and only the matching part is removed. Those that don't match will be saved in full. The list is checked in order. A leading dot is recommended. For example `stripDomains=.foo.example.com;.example.com` will convert `test1.foo.example.com` to `test1`, `test2.bar.example.com` to `test2.bar` and finally `test3.bar.com` to `test3.bar.com` - Added in 0.11.4
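
A hedged sketch of a [reversedns] section; the ranges and domain are placeholders, and the field chosen here (asset) is just an example of a field to receive the hostname:

  [reversedns]
  ips=10.0.0.0/8
  field=asset
  stripDomains=.example.com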

URL

The wiseService can monitor and download URLs. Each url needs to have its own section, with the section name starting with `url:`. The wiseService can automatically download and reload the files.
Create a [url:UNIQUENAME] section to configure

Setting Default Description
column 0 For csv formatted files, which column is the data
format csv csv,[Tagger Format](taggerformat),json,jsonl - The format of data file
headers EMPTY Semicolon separated list of headers to send in the URL request
refresh -1 How often in minutes to refresh the file, or -1 to never refresh it
tags REQUIRED Comma separated list of tags to set for matches
type REQUIRED The type of data in the file, such as ip,domain,md5,ja3,email, or something defined in `[wise-types]`
url REQUIRED The URL to load
urlScrapePrefix EMPTY (Since 5.0) Prepend to the urlScrapeRedirect results
urlScrapeRedirect EMPTY Search the results of the URL for this RE and redirect to the match for the actual data
urlScrapeSuffix EMPTY (Since 5.0) Append to the urlScrapeRedirect results
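
A sketch of a [url:UNIQUENAME] section pulling a plain one-column list of IPs (the URL is a placeholder):

  [url:badips]
  url=https://intel.example.com/badips.txt
  type=ip
  format=csv
  tags=badip
  refresh=60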

Value Actions

Not really a WISE data source, this source monitors configured files, redis urls, or OpenSearch/Elasticsearch urls for valueactions (previously right-click) to send to all the viewer instances that connect to this WISE Server. Each file needs to have its own section, with the section name starting with `valueactions:`. The format of the monitored files is the same as [WISE](settings#wise). It will auto reload the valueactions files if they change.
Create a [valueactions:UNIQUENAME] section to configure

Setting Default Description
url REQUIRED The file to load, can be a file path, redis url (Format is redis://[:password@]host:port/db-number/key, redis-sentinel://[[sentinelPassword]:[password]@]host[:port]/redis-name/db-number/key, or redis-cluster://[:password@]host:port/db-number/key), or elasticsearch url (Format elasticsearch://host:9200/INDEX/_doc/DOCNAME with elasticsearchs:// also supported.)

So for example you might have

  [valueactions:virustotal]
  url=/opt/arkime/etc/valueactions-virustotal.ini

and then /opt/arkime/etc/valueactions-virustotal.ini could contain

  VTIP=url:https://www.virustotal.com/en/ip-address/%TEXT%/information/;name:Virus Total IP;category:ip
  VTHOST=url:https://www.virustotal.com/en/domain/%HOST%/information/;name:Virus Total Host;category:host
  VTURL=url:https://www.virustotal.com/latest-scan/%URL%;name:Virus Total URL;category:url