Advanced Configuration of Server Images
This section covers the information you need to inject your own configuration into the server image.
Before You Begin
This document serves as a guideline for custom image creation. To create your own customized server image, you need advanced knowledge of Packer, Docker, Nomad, and Linux.
Creating Custom Server Image
Your custom server image must be based on our current server image and follow our naming scheme, so that the image name matches what the Terraform scripts expect when creating a cluster.
AWS
The naming scheme in AWS is as follows:
genvidtech-server-{{user `version` | clean_resource_name}}-{{isotime \"20060102-150405\" | clean_resource_name}}
So your image name needs to have 3 main sections:

1. It should start with [prefix]-server-. By default, the prefix value for the server AMI is genvidtech, but you can use your own value. The prefix is equivalent to the variable server_ami_prefix.
2. Then you need to add a version. The version needs to fit a pattern of the form Major(int).Minor(int).Patch(int).Build(int)[.label(string)], where the label is optional. For example, your version can be 1.25.0.156 or 1.25.0.185.test.
3. After the version, you need the date and time in this format: -yyyymmdd-hhmmss.

As an example, if you use genvidtech as the prefix, your image name would look like this: genvidtech-server-1.26.0.236-20210227-005755
Note that our Terraform code selects the most recent image that fits the regex.
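As an illustration of how such a lookup can work, the following Terraform sketch selects the most recent AMI matching the naming scheme above. The data source name, owner, and filter value are placeholder assumptions for this example, not our actual Terraform code:

data "aws_ami" "genvid_server" {
  # Pick the newest AMI whose name matches [prefix]-server-[version]-[timestamp].
  most_recent = true
  owners      = ["self"]                      # assumption: the image lives in your own account

  filter {
    name   = "name"
    values = ["genvidtech-server-1.26.0.*"]   # prefix + "-server-" + version; timestamp left as a wildcard
  }
}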
Azure
The naming scheme in Azure is as follows:
"capture_container_name": "genvidtech-server"
"capture_name_prefix": "{{user `version`}}"
"storage_account": "{{user `storage_account_name`}}"
"location": "{{user `azure_location`}}"
"resource_group_name": "{{user `resource_group_name`}}"
In Azure, you need to build your custom .vhd using Packer and keep it in your storage account. You must provide the resource group name as well as the storage account name.
IMPORTANT NOTE: If you build a custom server image, you should build the game image as well, because our tooling (genvid-azure-image create-images) expects both the server and game images to be in the same storage account and container.
Then, you should follow these two rules:
1. The server image name should be [prefix]-server. It is called capture_container_name in Packer. By default, the prefix value for the server image is genvidtech, but you can use your own value. The prefix is equivalent to the variable server_image_prefix.
2. As for the version, it is called capture_name_prefix in Packer and you can choose any string for it.
Since you need to provide the game image as well, make sure that its name is [prefix]-wingame. The prefix is equivalent to the variable wingame_image_prefix. The game image also needs the same version as your server image. For more information on the Azure game naming scheme, see:
"capture_container_name": "genvidtech-wingame"
"capture_name_prefix": "{{user `version`}}"
"storage_account": "{{user `storage_account_name`}}"
"resource_group_name": "{{user `resource_group_name`}}"
Configuring Nomad
You do not need to configure Nomad from scratch. In our implementation, we expect external Nomad configuration file(s) under:
/etc/nomad.d
All you need to do is store your required configuration as a JSON or HCL file in this directory.
Example
In Genvid MILE SDK 1.24 and above, Nomad is unable to mount the host system into Docker containers [1]. Instead, it requires leveraging CSI drivers. Here is how you can enable this using advanced server image customization.
To leverage CSI drivers, you first need to add an option to the Nomad client configuration, which you can do through your own custom image. An example of the required configuration is as follows:
{
  "client": [
    {
      "options": [
        {
          "docker.volumes.enabled": "true"
        }
      ]
    }
  ]
}
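Equivalently, since Nomad merges every configuration file it finds under /etc/nomad.d, you can express the same option in HCL. The file name below is only an example:

# /etc/nomad.d/docker-volumes.hcl (example file name)
client {
  options = {
    "docker.volumes.enabled" = "true"
  }
}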
See also
https://www.nomadproject.io/docs/drivers/docker for more information on how to use the Docker driver in Nomad
https://www.nomadproject.io/docs/configuration/client#host_volume-stanza for more information on Nomad host volumes
https://www.nomadproject.io/docs/configuration#load-order-and-merging for more information on Nomad multiple configuration files