Installing Network Sensor
The network sensor is installed on the user's internal network or on an AWS instance to collect information and transmit it to the policy server. Depending on the network configuration, one or more logical or physical network sensors may need to be installed.
Hardware Preparation
You can install the network sensor on physical systems or virtual systems.
Physical Equipment
You can use general-purpose Intel servers such as HP or Dell machines, or Mini PCs for testing and small-scale deployments.
Virtual Machines
The network sensor can also be installed on virtual machines, supporting various hypervisors.
Network Connection Preparation
Genian ZTNA requires network connectivity with one or more static IP addresses.
The network sensor must monitor broadcast packets (e.g., ARP, DHCP, UPnP) on the network and connect to all segments (broadcast domains) you want to manage.
If there is a switch configured with VLANs, you can set up an 802.1Q trunk port to monitor multiple networks with a single physical interface.
In virtual environments, the VM (sensor) must be able to communicate directly with all segments it is monitoring and controlling.
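As a quick sanity check, you can confirm from the sensor host that broadcast traffic is actually visible on the monitoring interface. A minimal sketch using tcpdump, assuming eth0 is the interface attached to the target segment:
$ tcpdump -i eth0 -c 10 'arp or udp port 67 or udp port 68' # capture a few ARP/DHCP broadcasts; eth0 is an assumed interface name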
Note
If using a virtual machine, you must select the Bridge mode for the network interface type.
To collect wireless LAN information with the network sensor, install a compatible wireless network adapter. Refer to the related documentation for details.
Access Port
When monitoring a single network through a switch's access port, no additional switch configuration is required. If the system where the network sensor is installed has more than one NIC, multiple segments can be monitored via access ports.
Trunk Port
To monitor multiple VLANs on a single interface, you need to configure the switch port as a Trunk Port using the 802.1Q protocol. Below are examples of configuring a trunk port with the 802.1Q protocol on Cisco and HP switches.
Cisco Switch Configuration Example
Cisco(config)#interface gi1/0/48
Cisco(config-if)#switchport trunk encapsulation dot1q
Cisco(config-if)#switchport mode trunk
HP Switch Configuration Example (Create Port 48 as a Tagged Interface)
Procurve(config)#vlan 100
Procurve(vlan-100)#tagged 48
Procurve(vlan-100)#exit
Procurve(config)#vlan 200
Procurve(vlan-200)#tagged 48
Note
To configure VLAN interfaces on a trunk interface, refer to Multi-VLAN Configuration.
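For reference, on a plain Linux host, VLAN subinterfaces on a trunk-facing NIC can be created with iproute2. This is a generic sketch (eth0 and VLAN IDs 100/200 are assumptions), not the Genian-specific procedure described in Multi-VLAN Configuration:
$ ip link add link eth0 name eth0.100 type vlan id 100 # create a subinterface for VLAN 100 on the trunk NIC
$ ip link add link eth0 name eth0.200 type vlan id 200 # create a subinterface for VLAN 200
$ ip link set eth0.100 up # bring the VLAN subinterfaces up
$ ip link set eth0.200 up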
ZTNA Network Sensor Installation
The ZTNA Network Sensor can be installed in a standalone or integrated configuration.
- Integrated Configuration: ZTNA Policy Center and ZTNA Network Sensor are installed on the same Ubuntu OS device.
- Standalone Configuration: ZTNA Policy Center and ZTNA Network Sensor are installed on separate Ubuntu OS devices.
Integrated Configuration
Access the Ubuntu device where the ZTNA Policy Center is installed.
$ sudo su # Switch to the root account in Ubuntu.
$ cd /usr/geni/conf # Navigate to the conf directory to edit the genian.conf file.
$ vim genian.conf # Open the genian.conf file with a text editor for editing.
Modify the value of `DKNS_ENABLED` on line 5 from `no` to `yes`.
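If you prefer a non-interactive edit, the same change can be made with sed. This sketch assumes the file uses the exact key/value form DKNS_ENABLED=no; verify against your genian.conf before running it:
$ sed -i 's/^DKNS_ENABLED=no/DKNS_ENABLED=yes/' genian.conf # switch the sensor flag to yes (run from /usr/geni/conf)
$ grep DKNS_ENABLED genian.conf # confirm the line now reads DKNS_ENABLED=yes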
$ cd /usr/geni
$ ./compose.sh start # Use the compose command to install the Network Sensor.
$ ./compose.sh start dkns # Use the compose command to start the Network Sensor.
$ ./compose.sh ps # Verify that the Docker container is running correctly with the compose command.
Access the Web UI, navigate to [System] -> [System Management], check the unapproved sensor, and click [Select Action] -> [Approve Unapproved Sensor] to approve it.
- If prompted with the message "If you are sure to continue, Enter 'kernel update'", type kernel update to proceed.
Standalone Configuration (Installing Sensor in On-Premise Environment)
- First, install Ubuntu OS by referring to Installing a Universal OS.
- Configure token-based access to the policy server by referring to Configuring Token-Based Access to the Policy Server; if using a Sensor Installation Token, enter the token value in DKNS_SERVER_TOKEN.
$ sudo su # Switch to the root account on Ubuntu.
$ apt-get update # Perform a package update.
$ apt install curl # Install curl.
$ curl -s https://docs.genians.com/install/ztna-sensor.sh | sudo DKNS_SERVER_TOKEN= BRANCH= bash -s - [POLICY SERVER IP] # Install the ZTNA Network Sensor using the command.
$ cd /usr/geni # Navigate to /usr/geni.
$ ./compose.sh ps # Check if the Docker container is running properly using the compose command (ensure the State value is 'up').
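For illustration, the installer invocation above might look like the following once the placeholders are filled in; the token and IP address below are hypothetical values:
$ curl -s https://docs.genians.com/install/ztna-sensor.sh | sudo DKNS_SERVER_TOKEN=exampletoken123 BRANCH= bash -s - 192.168.10.10 # hypothetical token and policy server IP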
Access the Web UI, go to [System] -> [System Management], check the added unapproved sensor, and click [Select Action] -> [Approve Unapproved Sensor] to approve it.
Standalone Configuration (Cloud-Managed (AWS) Environment) - Manual Installation via CLI
- First, create the instance where the ZTNA Network Sensor will be installed by referring to Instance Creation.
- Configure the token-based policy server connection settings; if using the Sensor Installation Token, enter the token value in DKNS_SERVER_TOKEN.
- After creating the instance, connect to it using an SSH client.
$ sudo su # Switch to the root account.
$ curl -s https://docs.genians.com/install/ztna-sensor.sh | sudo DKNS_SERVER_TOKEN= BRANCH= bash -s - [POLICY SERVER IP] # Install the ZTNA Network Sensor using the command.
If a kernel downgrade is required, the installation proceeds after the downgrade. Once the downgrade is completed, the system will reboot.
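After the reboot, you can confirm which kernel is actually running before restarting the sensor:
$ uname -r # print the running kernel version to verify the downgrade took effect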
$ sudo su # Switch to the root account again.
$ cd /usr/geni # Navigate to the directory to use the compose script.
$ ./compose.sh restart dkns # Enter the command to restart the sensor.
$ ./compose.sh ps # Verify that the Docker Container is running correctly using the compose command (ensure the State value is 'up').
Access the Web UI, click on [System] -> [System Management], check the newly added unapproved sensor, and approve it by selecting [Select Action] -> [Approve Unapproved Sensor].
Standalone Configuration (Cloud-Managed (AWS) Environment Sensor Installation) - Automatic Installation via Web UI
1. Access the Web UI console. [ https:// (ZTNA Policy Server IP):8443/ ]
2. Click System -> Cloud Provider Management in the top menu.
3. Click Select Action -> Create. Configure the settings based on the table below.
| Setting | Value |
|---|---|
| Setting Name | Name of the Cloud provider |
| Cloud | AWS |
| Access Key | Issued Access key ID |
| Secret Key | Issued Secret Access key |
Note
The Access Key can be generated by referring to the AWS Access Key Generation Method.
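If the AWS CLI is available, a quick identity lookup will fail fast on bad credentials before you save them; the key values below are placeholders:
$ AWS_ACCESS_KEY_ID=<access-key-id> AWS_SECRET_ACCESS_KEY=<secret-access-key> aws sts get-caller-identity # returns the account and ARN if the keys are valid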
4. Click on System -> Sites in the left menu.
5. Click Select Action -> Create and configure it as shown in the table below:
| Setting | Value |
|---|---|
| Site Name | Provide the name for the site |
| Type | Hub/Branch |
| Infrastructure | Cloud |
| Cloud Provider | The Cloud Provider created in step 3 |
| Region | Region of the instance where the Policy Center is created |
| VPC ID | The VPC used when setting up the Policy Center |
- The site settings cannot be modified after creation.
6. Click on System -> System Management.
7. Click Select Action -> Add Cloud Sensor and configure the settings based on the table below:
| Setting | Value |
|---|---|
| Site Name | Specify the site name created in step 5 |
| AMI | Automatically set when the site name is configured |
| Instance Type | Choose the instance type (t2.medium or higher recommended) |
| Size | Specify the disk size of the instance (64GB or higher recommended) |
| Subnet ID | Automatically set when the site name is configured |
| Key pair | Set the Key pair for SSH connection to the sensor instance |
8. Click Check init. Once the Check init is complete, click Create.
9. Go to the EC2 dashboard and verify that the sensor instance has been created; a CLI check is sketched after this list.
10. Access the Web UI, click on [System] -> [System Management], check the newly added unapproved sensor, then click [Select Action] -> [Approve Unapproved Sensor] to approve it.
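As referenced in step 9, the instance can also be checked from the AWS CLI. This is a generic sketch and assumes the sensor instance carries a Name tag you know:
$ aws ec2 describe-instances --filters "Name=tag:Name,Values=<sensor-name>" --query 'Reservations[].Instances[].State.Name' # expect "running"; <sensor-name> is an assumed tag value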
Standalone Configuration (Cloud-Managed (Linode) Environment Sensor Installation) - Automatic Installation via Web UI
1. Access the Web UI console. [ https:// (ZTNA Policy Server IP):8443/ ]
2. In the top menu, click System -> Cloud Provider Management.
3. Click Select Action -> Create. Configure the settings based on the table below:
| Setting | Value |
|---|---|
| Setting Name | Name of the Cloud provider |
| Cloud | Linode |
| Token | The issued token |
Note
The credential information for Linode can be created by referring to the Managing Nodes in the Cloud documentation.
4. In the left menu, click System -> Sites.
5. Click Select Action -> Create. Configure the settings based on the table below:
| Setting | Value |
|---|---|
| Site Name | Assign a name for the site |
| Type | Hub/Branch |
| Infrastructure | Cloud |
| Cloud Provider | The Cloud Provider created in step 3 |
| Region | The region of the instance where the Policy Center is created |
| VPC ID | The VPC used during the Policy Center setup |
6. Click System -> System Management.
7. Click Select Action -> Add ZTNA Gateway. Configure the settings based on the table below:
| Setting | Value |
|---|---|
| Site Name | Specify the site name created in step 5 |
| Image | Choose Ubuntu 20.04 or Ubuntu 24.04 |
| Instance Type | Select the instance type (g6-standard-1 or higher recommended) |
| Size | Specify the disk size for the instance (512MB or higher recommended) |
| Root Pass | Enter the initial password for the user account |
| Key Pair | Configure the key pair for SSH connection to the sensor instance |
| Subnet ID | Automatically configured when the site name is set |
8. Click Check init. Once Check init is completed, click Create.
9. Access the Linode Console and verify that the instance and associated resources have been created; a CLI check is sketched after this list.
10. Access the Web UI, click [System] -> [System Management], check the newly added unapproved sensors, and then click [Select Action] -> [Approve Unapproved Sensor] to approve them.
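As referenced in step 9, if the Linode CLI is installed and configured with your token, the new instance can also be listed from the command line:
$ linode-cli linodes list # the new sensor instance should appear in this list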
Standalone Configuration (Cloud-Managed (OCI) Environment Sensor Installation) - Automatic Installation via Web UI
1. Access the Web UI console. [ https:// (ZTNA Policy Server IP):8443/ ]
2. In the top menu, click System -> Cloud Provider Management.
3. Click Select Action -> Create. Configure the settings based on the list below:
- Setting Name: Provide the name of the cloud provider.
- Cloud: Select OCI.
- Tenancy OCID: Enter the issued Tenancy OCID.
- User OCID: Enter the issued User OCID.
- Fingerprint: Enter the issued fingerprint.
- Private Key: Upload the issued Private Key.
- Region: Select the target region.
Note
The credential information for OCI can be created by referring to the Managing Nodes in the Cloud documentation.
4. In the left menu, click System -> Sites.
5. Click Select Action -> Create. Configure the settings based on the list below:
- Site Name: Provide a name for the site.
- Type: Select Hub/Branch.
- Infrastructure: Select Cloud.
- Cloud Provider: Select the Cloud Provider created in step 3.
- Region: Select the Region of the instance where Policy Center was created.
- VPC ID: Select the VPC used when building Policy Center.
6. Click System -> System Management.
7. Click Select Action -> Add ZTNA Gateway. Configure the settings based on the list below:
- Site Name: Specify the site name created in step 5.
- Domain: Select the availability domain where the instance is running.
- VM Size: Select the instance type (VM.Standard.E2.1.Micro or higher recommended).
- Image: Select Ubuntu 20.04 or Ubuntu 24.04.
- Subnet: Automatically set when the site name is configured.
- Key Pair: Upload the public key to be used for SSH connections to the sensor instance.
8. Click Check init. Once Check init is completed, click Create.
9. Access the OCI Console and verify that the instance and associated resources have been created; a CLI check is sketched after this list.
10. Access the Web UI, click [System] -> [System Management], check the newly added unapproved sensors, and then click [Select Action] -> [Approve Unapproved Sensor] to approve them.
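As referenced in step 9, the OCI CLI (if installed and configured) can confirm the instance as well; the compartment OCID below is a placeholder:
$ oci compute instance list --compartment-id <compartment-ocid> # lists instances in the compartment; <compartment-ocid> is a placeholder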
How to Check Docker Containers
After installing the ZTNA Network Sensor, verify that the containers are functioning properly using the compose.sh script.
$ cd /usr/geni # Change to the directory to use the compose.sh script.
$ ./compose.sh ps # Use this command to check the status of the containers.
Check that the State field shows 'Up'.
Refer to the table below to confirm that each container is operating correctly:
| Type | Container Name |
|---|---|
| ZTNA Network Sensor | geni_dkns_1 |
| ZTNA Policy Center | geni_nac_1 |
| DB Server | geni_dbserver_1 |
| Log Server | geni_logserver_1 |
| Log Collector | geni_filebeat_1 |
| Update Agent | geni_gnupdateinfo_1 |
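In addition to compose.sh, the standard Docker CLI can confirm the same container states. This sketch filters on the geni_ name prefix used in the table above:
$ docker ps --filter "name=geni_" --format "table {{.Names}}\t{{.Status}}" # each container listed in the table should report an Up status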