S3
Arkime can save PCAP directly to S3. The feature set and stability were greatly improved in Arkime 2.1.0, so make sure you are using that version or newer. The expected use case is a low-volume capture process on each EC2 instance writing to S3, instead of a central Arkime tap writing to local disk. One big benefit is that only a capture process is required on each EC2 instance; you do not need to run the viewer process on each node. This means the EC2 instances can come and go and the PCAP will still be in S3. For bucket security, the capture nodes can be configured with S3 credentials that ONLY have the PutObject permission.
Currently all writing from arkime-capture to S3 is sent over TLS (the HTTPS S3 REST endpoint). To store these PCAPs with server-side encryption, configure the S3 bucket's default encryption settings.
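For example, to have S3 encrypt the objects at rest with SSE-S3, apply a default encryption configuration to the bucket. A minimal sketch of the configuration document accepted by the S3 PutBucketEncryption API (choosing AES256 over SSE-KMS here is an illustrative assumption; either works):
{
  "Rules": [
    {
      "ApplyServerSideEncryptionByDefault": {
        "SSEAlgorithm": "AES256"
      }
    }
  ]
}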
Sending with compression, storing gzipped files, the maximum number of connections, and the maximum number of outstanding requests are all configurable. See Writer S3 Settings.
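A sketch of those options in the capture config (the setting names below are taken from the Writer S3 Settings page and the values shown are illustrative; verify both against your Arkime version):
# Compress the PCAP data before sending to S3
s3Compress=false
# Store the PCAP files gzipped in S3
s3WriteGzip=false
# Maximum number of connections to S3
s3MaxConns=20
# Maximum number of outstanding requests to S3
s3MaxRequests=500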
- The S3 Writer uses the S3 Multipart Upload API. The multipart part size is set with the pcapWriteSize setting; if pcapWriteSize is smaller than 5242880 (the S3 minimum part size), the part size is raised to 5242880.
- If you set maxFileSizeG to 0.1 or higher, PCAP files are uploaded at that size in GB.
- As of Arkime 2.2.0, the S3 Writer supports the maxFileTimeM setting. Set this to a low value if you want to download recent PCAPs, since S3 does not allow downloading objects whose multipart upload is still incomplete.
- To get debug logs from the S3 Writer, run the capture process with the -d option.
- You can use a third-party S3 server implementation such as MinIO by setting s3Host. A combined sketch of these settings follows this list.
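A minimal sketch combining the settings above (the values and the MinIO hostname are illustrative assumptions, not recommendations):
# Multipart part size in bytes; anything below 5242880 is raised to that minimum
pcapWriteSize=5242880
# Rotate the PCAP file in S3 at roughly 1 GB
maxFileSizeG=1
# Also rotate every 10 minutes so recent traffic becomes downloadable (Arkime 2.2.0+)
maxFileTimeM=10
# Optional: point at a third-party S3 implementation such as MinIO
s3Host=minio.example.internal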
Sample Installation and Configuration
These instructions are still a work in progress; please provide feedback and suggestions. They assume only one AWS account will be writing into the bucket, using two different users. If you will be writing to a central bucket from many AWS accounts/VPCs, then you will need to use a bucket policy; a sketch follows.
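For the multi-account case, a minimal sketch of such a bucket policy allowing a second account to write PCAP (the 111122223333 account ID is a placeholder; scope the principal to your own capture accounts):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:root" },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::example-arkime-pcap/*"
    }
  ]
}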
- Create an S3 bucket named example-arkime-pcap
- Create an IAM policy ArkimeCapturePolicy. For more information about the required permissions, see the AWS documentation on the S3 Multipart Upload API and Permissions.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::example-arkime-pcap/*"
      ]
    }
  ]
}
- Create an IAM policy ArkimeViewerPolicy. Arkime will expire S3 objects based on s3ExpireDays, so be sure to allow the s3:DeleteObject action.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:Get*",
        "s3:List*",
        "s3:DeleteObject"
      ],
      "Resource": [
        "arn:aws:s3:::example-arkime-pcap/*"
      ]
    }
  ]
}
- Create two IAM users and download their access keys:
  - ArkimeCapture - used on the capture nodes. Attach the newly created ArkimeCapturePolicy.
  - ArkimeViewer - used on the viewer nodes. Attach the newly created ArkimeViewerPolicy.
- On the capture nodes you’ll want a config file with the following settings
plugins=writer-s3.so;#OTHER PLUGINS
pcapWriteMethod=s3
s3Region=us-east-1
s3Bucket=example-arkime-pcap
s3AccessKeyId=THEACCESSKEYID_FOR_ARKIMECAPTURE
s3SecretAccessKey=THESECRETACCESSKEY_FOR_ARKIMECAPTURE
- On the viewer node you’ll need to set the following config. Remember this can be a central viewer node.
pcapWriteMethod=s3
viewerPlugins=writer-s3/index.js
s3AccessKeyId=THEACCESSKEYID_FOR_ARKIMEVIEWER
s3SecretAccessKey=THESECRETACCESSKEY_FOR_ARKIMEVIEWER
s3ExpireDays=10
Use the s3AccessKeyId and s3SecretAccessKey settings only if absolutely necessary, i.e. if the machine using the config is not running in EC2, or is running an Arkime version before 2.1.0. Storing security credentials on EC2 instances is not best practice; instead, use IAM role based authentication.
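When the capture node runs in EC2 with an IAM role (instance profile) that carries the ArkimeCapturePolicy permissions, a minimal sketch of the capture config simply drops the key settings (assuming Arkime 2.1.0 or newer):
plugins=writer-s3.so;#OTHER PLUGINS
pcapWriteMethod=s3
s3Region=us-east-1
s3Bucket=example-arkime-pcap
# No s3AccessKeyId/s3SecretAccessKey: credentials come from the instance's IAM role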