TECH

December 8, 2025

How to Create and Manage Translation Files

If you come from a web development background, seeing a .ts file extension might immediately make you think of TypeScript. However, in the world of C++ and the Qt Framework, .ts stands for Translation Source.

If you want your application to reach a global audience, you cannot hard-code your strings in just one language. You need Internationalization (i18n).

In this guide, we will walk you through the entire workflow of creating and managing Qt translation files, taking your app from a single language to a multilingual powerhouse.

Prepare Your Code (Marking Strings)

Before generating any files, you must tell Qt which strings in your application need to be translated. Qt doesn't guess; it looks for specific markers.

For C++ Files (.cpp, .h)

// Bad: Hard-coded

QString text = "Hello World";

// Good: Translatable

QString text = QObject::tr("Hello World");

For QML Files (.qml)

Use the qsTr() function.

Text {
    // Bad
    text: "Hello World"

    // Good
    text: qsTr("Hello World")
}

Configure the Project File

Next, you need to define where the translation files will be stored. This step differs slightly depending on your build system.

Using qmake (.pro)

Add the TRANSLATIONS variable to your project file. This tells Qt what target languages you plan to support (e.g., Vietnamese and Japanese).

# MyProject.pro

TRANSLATIONS += languages/app_vi.ts \
                languages/app_ja.ts

Using CMake (CMakeLists.txt)

If you are using Qt 6 and CMake, the setup is slightly more modern using qt_add_translations:

# CMakeLists.txt

find_package(Qt6 6.5 REQUIRED COMPONENTS Quick LinguistTools)

qt_add_translations(appMyProject
    TS_FILES
    languages/app_vi.ts
    languages/app_ja.ts
)

Generate the .ts Files (The lupdate Step) 

This is the core of our tutorial. You don't create .ts files manually; you generate them. The tool lupdate scans your C++ and QML source code, finds every string wrapped in tr() or qsTr(), and extracts them into an XML format.

Via Qt Creator (Only for qmake)

  1. Open your project in Qt Creator.
  2. Go to the menu bar: Tools > External > Linguist > Update Translations (lupdate).
  3. Qt Creator will scan your code and create the .ts files in your project directory.

    Via Command Line (Terminal)

    Navigate to your project folder and run:

    # For qmake users
    \path\to\Qt\6.8.3\msvc2022_64\bin\lupdate MyProject.pro

    # For CMake users, you usually build the 'update_translations' target

    rmdir /s /q build

    cmake -S . -B build -DCMAKE_PREFIX_PATH="\path\to\Qt\6.8.3\msvc2022_64"

    cmake --build build --target update_translations

    Translate with Qt Linguist

    Now that you have the .ts files, it’s time to translate.

    1. Open the file (e.g., app_vi.ts) using Qt Linguist (installed with Qt).

    2. On the left, you will see a list of strings found in your code.
    3. Select a string, type the translation in the bottom pane, and mark it as "Done" (click the ? icon to turn it into a green checkmark).



    4. Save the file.

      Compile to Binary (.qm Files)

      Your application does not read .ts files directly because they are text-based (XML) and slow to parse. You must compile them into compact binary files (.qm).

      Using qmake (.pro)

      In Qt Creator: Go to Tools > External > Linguist > Release Translations (lrelease).

      This will generate app_vi.qm and app_ja.qm. These are the files you will actually deploy with your app.

      Using CMake (CMakeLists.txt)

      Navigate to your project folder and run:

      # For CMake users

      cmake --build build --target release_translations

      Load the Translation in Your App

      Finally, you need to tell your application to load the generated .qm file when it starts.

      Add this logic to your code:

      QTranslator translator;
      // Load the compiled binary translation file
      // ideally from the resource system (:/)
      if (translator.load(":/app_vi.qm")) {
           app.installTranslator(&translator);
      }
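
      Rather than hard-coding one language, you can let QTranslator pick the .qm that matches the user's system locale. A minimal sketch, assuming the .qm files are embedded under a ":/i18n" resource prefix (adjust to where your project embeds them):

```cpp
#include <QGuiApplication>
#include <QLocale>
#include <QTranslator>

int main(int argc, char *argv[])
{
    QGuiApplication app(argc, argv);

    // Tries app_<locale>.qm (e.g. app_vi.qm on a Vietnamese system)
    // in the :/i18n resource directory; does nothing if no match is found.
    QTranslator translator;
    if (translator.load(QLocale(), "app", "_", ":/i18n"))
        app.installTranslator(&translator);

    // ... set up your UI here ...
    return app.exec();
}
```

      Install the translator before creating any UI so the initial strings are already translated.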

      Conclusion

      Internationalization (i18n) might seem like a daunting task when you are just starting out, but Qt provides one of the most robust workflows in the C++ ecosystem to handle it.

      By following this guide, you have moved away from hard-coding strings and adopted a professional workflow:

      1. Marking your code with tr().
      2. Automating extraction with lupdate.
      3. Compiling efficient binaries with lrelease.

        Ready to get started?

        Contact IVC for a free consultation and discover how we can help your business grow online.

        TECH

        December 8, 2025

        Guide to Creating Integration Tests for Terraform Code

        Welcome to this article! If you are working with Terraform to manage cloud infrastructure such as AWS, writing integration tests is an important step to verify that your code works correctly with the real provider (for example, by creating actual resources on the cloud). In this blog, I will guide you through creating integration tests for a simple Terraform source code: provisioning an EC2 instance via a module. We will use the Terraform Test feature (available from Terraform 1.6 onwards) to apply real code, validate outputs, and automatically destroy resources afterwards.

        Project Structure

        The source code we are working with follows a basic structure:

        • main.tf: The main file that calls the module and sets up the provider.
        • modules/ec2_instance/: The module that provisions the EC2 instance.
        • tests/integration.tftest.hcl: The integration test file.

        1. Set Up Basic Terraform Source Code

        First, create the project folder structure and files.

        1.1 Create main.tf

        This file configures the AWS provider and calls the EC2 module.

        terraform {
          required_providers {
            aws = {
              source  = "hashicorp/aws"
              version = "~> 5.0"
            }
          }
        }
        
        provider "aws" {
          region = "us-east-1"
        }
        
        module "web_server" {
          source = "./modules/ec2_instance"  # Adjust path if needed
        
          env_name      = "dev"
          instance_type = "t3.micro"
          ami_id        = "ami-0fa3fe0fa7920f68e"  # Replace with a valid AMI ID for your region
        }
        
        output "server_ip" {
          value = module.web_server.public_ip
        }

         

        1.2 Create the ec2_instance Module

        Inside modules/ec2_instance, add the following files:

        main.tf

        resource "aws_instance" "web" {
          ami           = var.ami_id
          instance_type = var.instance_type
        
          tags = {
            Name = "web-server-${var.env_name}"
          }
        }
        
        output "public_ip" {
          value = aws_instance.web.public_ip
        }
        
        output "instance_type" {
          value = aws_instance.web.instance_type
        }
        
        output "ami" {
          value = aws_instance.web.ami
        }
        
        output "instance_state" {
          value = aws_instance.web.instance_state
        }

         

        variables.tf

        variable "env_name" {
          description = "Environment name"
          type        = string
        }
        
        variable "instance_type" {
          description = "EC2 instance type"
          type        = string
        }
        
        variable "ami_id" {
          description = "AMI ID for the EC2 instance"
          type        = string
        }

         

        Note: We add outputs such as instance_type, ami, and instance_state to make them easier to validate in integration tests (the module's internal resources are not directly exposed, and instance_state is a value computed by AWS).

        2. Introduction to Terraform Test

        Terraform Test is an integrated framework for writing tests for Terraform code. For integration tests, it allows:

        • Real apply: Create actual resources on the cloud and validate them, with automatic destroy after the test finishes.
        • Run blocks: Execute commands such as apply or plan in a test environment.
        • Assertions: Validate conditions on outputs or resources.

        To run tests, you need Terraform >= 1.6. Run terraform test from the project root. Tests will automatically run all .tftest.hcl files inside the tests/ folder.

        3. Write Integration Test (Real Apply)

        Integration tests apply real code to AWS, create actual resources, validate them, and destroy them afterwards. This helps verify that the code works correctly with the real provider.

        Important notes:

        • You need valid AWS credentials (e.g., via environment variables such as AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY).
        • This test will create (and destroy) real resources, so it may incur small costs (a t3.micro may be covered by the AWS free tier).
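
        One common way to supply these credentials is exporting them before running the test. The values below are placeholders, not real keys:

```shell
# Placeholder credentials; replace with your own keys or use an AWS profile.
export AWS_ACCESS_KEY_ID="AKIAEXAMPLEKEY"
export AWS_SECRET_ACCESS_KEY="example-secret-key"
export AWS_DEFAULT_REGION="us-east-1"
```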

        Create a tests folder and add integration.tftest.hcl.

        run "integration_test_web_server" {
          command = apply
        
          assert {
            condition     = module.web_server.public_ip != null && module.web_server.public_ip != ""
            error_message = "Public IP should not be null or empty after apply."
          }
        
          assert {
            condition     = module.web_server.instance_type == "t3.micro"
            error_message = "Instance type does not match expected value."
          }
        
          assert {
            condition     = module.web_server.ami == "ami-0fa3fe0fa7920f68e"
            error_message = "AMI ID does not match expected value."
          }
        
          assert {
            condition     = module.web_server.instance_state == "running"
            error_message = "EC2 instance should be in running state."
          }
        }
        

         

        Explanation of the Integration Test File

        • run block: Executes apply with the real provider (no mock). Terraform will create an actual EC2 instance.
        • Assertions: Validate outputs (e.g., public_ip is not null, since it is now a real value from AWS). The assertion on instance_state verifies the instance is running (a value computed by AWS).
        • Automatic cleanup: After the test finishes (pass or fail), Terraform will destroy the resource to avoid leftovers.

        To run the integration test:

        terraform test -filter=tests/integration.tftest.hcl

        Or run all tests:

        terraform test

        4. Run Integration Test

        Run the command from the project root:

        taipham@Tais terraform2 % terraform test
        tests/integration.tftest.hcl... in progress
          run "integration_test_web_server"... pass
        tests/integration.tftest.hcl... tearing down
        tests/integration.tftest.hcl... pass
        tests/unit.tftest.hcl... in progress
          run "test_web_server_instance_type"... pass
          run "test_web_server_module_ami"... pass
          run "test_web_server_module_ami_ip"... pass
        tests/unit.tftest.hcl... tearing down
        tests/unit.tftest.hcl... pass
        
        Success! 4 passed, 0 failed.

         

        5. Best Practices for Integration Tests in Terraform

        • Limit usage: Use integration tests to verify real behavior, but limit them to avoid high costs (run in CI/CD with a dev account).
        • Meaningful assertions: Validate inputs, outputs, tags, and resource states (such as "running").
        • Integrate with CI/CD: Run terraform test in pipelines (e.g., GitHub Actions) for automation.
        • Advanced usage: Use expect_failures for negative tests or test multiple scenarios via variables.
        • Combine with unit tests: Add unit tests with mocks for faster checks; here we focus on integration.
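
        As a sketch of the expect_failures idea mentioned above: the run below assumes the root module exposes an instance_type variable with a validation block, which the main.tf in this article does not define, so treat it as illustrative only:

```hcl
# Hypothetical variable with a validation rule (not part of this article's code).
variable "instance_type" {
  type = string

  validation {
    condition     = can(regex("^t3\\.", var.instance_type))
    error_message = "Only t3 instance types are allowed."
  }
}

# Negative test: the plan is expected to fail on this variable's validation,
# and the run passes only because that failure is declared as expected.
run "rejects_invalid_instance_type" {
  command = plan

  variables {
    instance_type = "m5.large"
  }

  expect_failures = [
    var.instance_type,
  ]
}
```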

        Conclusion

        With integration tests, you can verify that your Terraform code works correctly in real environments without leaving leftover resources. In this example, we tested an EC2 instance with minimal cost.

        Whether you need scalable software solutions, expert IT outsourcing, or a long-term development partner, ISB Vietnam is here to deliver. Let’s build something great together—reach out to us today. Or click here to explore more ISB Vietnam's case studies.

        [References]

        https://developer.hashicorp.com/terraform

        TECH

        December 8, 2025

        Guide to Creating Unit Tests for Terraform Code

        If you are working with Terraform to manage cloud infrastructure such as AWS, writing unit tests is an important step to ensure your code works as expected without deploying real resources. In this blog, I will guide you through creating unit tests for a simple Terraform source code: provisioning an EC2 instance via a module. We will use the Terraform Test feature (available from Terraform 1.6 onwards) to mock providers and validate outputs.

        Project Structure

        The source code we are working with follows a basic structure:

        • main.tf: The main file that calls the module and sets up the provider.
        • modules/ec2_instance/: The module that provisions the EC2 instance.
        • tests/unit.tftest.hcl: The test file with mocks and assertions.

        1. Set Up Basic Terraform Source Code.

        First, create the project folder structure. 

        1.1 Create main.tf.

        This file configures the AWS provider and calls the EC2 module:

        terraform {
          required_providers {
            aws = {
              source  = "hashicorp/aws"
              version = "~> 5.0"
            }
          }
        }
        
        provider "aws" {
          region = "us-east-1"
        }
        
        module "web_server" {
          source = "./modules/ec2_instance"  # Adjust path if needed
        
          env_name      = "dev"
          instance_type = "t3.micro"
          ami_id        = "ami-0fa3fe0fa7920f68e"  # Replace with a valid AMI ID for your region
        }
        
        output "server_ip" {
          value = module.web_server.public_ip
        }

         

        1.2 Create the ec2_instance Module.

        Inside modules/ec2_instance, add the following files:

        main.tf

        resource "aws_instance" "web" {
          ami           = var.ami_id
          instance_type = var.instance_type
        
          tags = {
            Name = "web-server-${var.env_name}"
          }
        }
        
        output "public_ip" {
          value = aws_instance.web.public_ip
        }
        
        output "instance_type" {
          value = aws_instance.web.instance_type
        }
        
        output "ami" {
          value = aws_instance.web.ami
        }

         

        variables.tf

        
        variable "env_name" {
          description = "Environment name"
          type        = string
        }
        
        variable "instance_type" {
          description = "EC2 instance type"
          type        = string
        }
        
        variable "ami_id" {
          description = "AMI ID for the EC2 instance"
          type        = string
        }

         

        2. Introduction to Terraform Test.

        Terraform Test is an integrated framework for writing tests for Terraform code. It allows you to:

        • Mock providers: Simulate providers (like AWS) to avoid creating real resources, saving cost and time.
        • Run blocks: Execute commands such as apply or plan in a test environment.
        • Assertions: Validate conditions on outputs or resources.

        To run tests, you need Terraform >= 1.6. Run terraform test from the project root.

        3. Write Unit Tests

        Create a tests folder in the project root and add unit.tftest.hcl.

        mock_provider "aws" {
          alias = "mock"
        
          mock_resource "aws_instance" {
            defaults = {
              id         = "i-1234567890abcdef0"
              public_ip  = "192.0.2.1"
              private_ip = "10.0.0.1"
              arn        = "arn:aws:ec2:us-east-1:123456789012:instance/i-1234567890abcdef0"
            }
          }
        }
        
        run "test_web_server_instance_type" {
          command = apply
        
          providers = {
            aws = aws.mock
          }
        
          assert {
            condition     = module.web_server.instance_type == "t3.micro"
            error_message = "Instance type does not match expected value."
          }
        }
        
        run "test_web_server_module_ami" {
          command = apply
        
          providers = {
            aws = aws.mock
          }
        
          assert {
            condition     = module.web_server.ami == "ami-0fa3fe0fa7920f68e"
            error_message = "The AMI does not match the expected value."
          }
        }
        
        run "test_web_server_module_ami_ip" {
          command = apply
        
          providers = {
            aws = aws.mock
          }
        
          assert {
            condition     = output.server_ip == "192.0.2.1"
            error_message = "The public IP output does not match the expected mocked value."
          }
        }

         

        Explanation of the Test File

        • mock_provider: Simulates the AWS provider. We mock the aws_instance resource with default values (like public_ip) so Terraform does not call real AWS APIs.
        • run block: Executes apply with the mock provider. No variables are needed since they are hardcoded in main.tf.
        • assert: Validates outputs from the module and root configuration. If the condition fails, the test fails with the given error message.

        4. Run Unit Tests

        The result after running terraform test:

        taipham@Tais terraform2 % terraform test
        tests/unit.tftest.hcl... in progress
          run "test_web_server_instance_type"... pass
          run "test_web_server_module_ami"... pass
          run "test_web_server_module_ami_ip"... pass
        tests/unit.tftest.hcl... tearing down
        tests/unit.tftest.hcl... pass
        
        Success! 3 passed, 0 failed.

         

        5. Best Practices for Unit Tests in Terraform

        • Isolate tests: Test modules individually if possible (place test files inside the module folder).
        • Mock only what’s necessary: Keep mocks simple to avoid complexity.
        • Meaningful assertions: Validate inputs, outputs, and tags to ensure correctness.
        • Integrate with CI/CD: Run terraform test in pipelines (e.g., GitHub Actions) for automation.
        • Advanced usage: Use expect_failures for negative tests or test multiple scenarios with variables.

        Conclusion

        With unit tests, you can be confident that your Terraform code works correctly before applying it to real infrastructure. In this example, we tested an EC2 instance without incurring AWS costs.


        [References]

        https://developer.hashicorp.com/terraform

        TECH

        December 8, 2025

        Automation testing with Cursor AI

        In late October 2025, Cursor released a new feature called Browser. The browser is embedded directly within the editor and ships with powerful tools for component selection, full developer tools, and MCP controls for agents. The Agent in Cursor can use the browser to test websites, audit accessibility, convert designs into code, and more. Automated testing is one of these use cases, and it is the one we will discuss in this topic.

        1. Context

        I have a Sign In form and a Forgot Password form, and I want to create automation tests that cover the following:

        • Fill out forms with test data.
        • Click through workflows.
        • Test responsive designs.
        • Validate error messages.
        • Monitor the console for JavaScript errors.


        Previously, we were required to write test code using frameworks such as Selenium, which made developing automation tests significantly time-consuming. Now, with Cursor, we can approach automation testing in a much simpler way.

        2. Automation testing with Cursor AI

        Agent (Cursor AI) can execute comprehensive test suites and capture screenshots for visual regression testing.

        To create automation tests for the request above, I simply need to write a prompt like this:

        @browser Fill out forms with test data, click through workflows, test responsive designs, validate error messages, and monitor console for JavaScript errors

         

        [Image: cursor test]

        You will see Cursor's testing progress on the right side, and the test will run in the browser.

        [Image: test process]

        And the testing report looks like this:

        [Image: test report]

        3. Security

        The browser runs as a secure web view and is controlled by an MCP server running as an extension. Multiple layers of protection safeguard you against unauthorized access and malicious activity. Cursor's Browser integrations have also been audited by numerous external security experts (see the Cursor documentation for details).

        4. Conclusion

        Although using Cursor AI for automation testing takes less time than writing test code, we still need to consider the cost of each AI test run (including re-runs, screen count, models, etc.).

         


         

        References

        https://cursor.com/docs/agent/browser#automated-testing

        TECH

        December 3, 2025

        Real-world Terraform Project Structure

        I. Introduction.

        Separating directories for different environments (Dev, Staging, Production) is a mandatory standard in real-world projects to ensure security and manageability.

        The best approach is to use a modular model. We will refactor the previous EC2 creation code into a shared module (template), which the Dev and Prod environments will then call using different parameters.

        II. Terraform Project Structure.

        We will create a Terraform project with the following requirements:

        • Separate configurations for dev and prod environments.

        • Define shared variables for code reusability.

        • Store the State File in AWS S3.

        Below is the new directory structure and the implementation steps:
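
        The structure can be sketched as follows (the files inside each folder are inferred from the sections below):

```
terraform-project/
├── setup-backend/
│   └── main.tf
├── modules/
│   └── ec2_instance/
│       ├── main.tf
│       └── variables.tf
└── environments/
    ├── dev/
    │   └── main.tf
    └── prod/
        └── main.tf
```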

        The function of each folder is as follows:

        • setup-backend: Configuration for the Terraform backend.

        • modules: Stores reusable code modules.

        • environments: Contains environment-specific configurations.

        File: setup-backend/main.tf

        The function of this code is to bootstrap the foundational infrastructure required for a Terraform Remote Backend. This allows the State file to be stored securely on the Cloud instead of a local machine, which is essential for team collaboration.

        # Configure the AWS Provider and set the region where resources will be created.
        provider "aws" {
          region = "us-east-1"
        }
        
        # Create an S3 bucket to store the 'terraform.tfstate' file.
        resource "aws_s3_bucket" "terraform_state" {
          bucket = "terraform-state-project-9999"
          lifecycle {
            prevent_destroy = true 
          }
        }
        
        # Enable versioning on the S3 bucket.
        # This is crucial for state recovery. It allows you to revert to an older 
        # state file if the current one gets corrupted or accidentally overwritten.
        resource "aws_s3_bucket_versioning" "enabled" {
          bucket = aws_s3_bucket.terraform_state.id
          versioning_configuration {
            status = "Enabled"
          }
        }
        
        # Create a DynamoDB table to handle state locking.
        # This prevents race conditions (e.g., two developers running 'apply' simultaneously).
        resource "aws_dynamodb_table" "terraform_locks" {
          name         = "terraform-locks"
          billing_mode = "PAY_PER_REQUEST"
          hash_key     = "LockID"
        
          attribute {
            name = "LockID"
            type = "S"
          }
        }
        
        # Output the name of the S3 bucket for easy reference later.
        output "bucket_name" {
          value = aws_s3_bucket.terraform_state.bucket
        }
        
        # Output the name of the DynamoDB table for easy reference later.
        output "dynamodb_table_name" {
          value = aws_dynamodb_table.terraform_locks.name
        }

         

        File: modules/ec2_instance/main.tf

        This code snippet creates a basic Web Server infrastructure on AWS. It consists of two main components: a "Virtual Firewall" (Security Group) and a "Virtual Server" (EC2 Instance) protected by that firewall.

        resource "aws_security_group" "sg" {
          name        = "${var.env_name}-sg"
          description = "Security Group for ${var.env_name}"
        
          ingress {
           ...
          }
          
          egress {
           ...
          }
        }
        
        resource "aws_instance" "app_server" {
          ami             = var.ami_id
          instance_type   = var.instance_type
          # Note: 'security_groups' matches groups by name and only works in the
          # default VPC; inside a custom VPC, use 'vpc_security_group_ids'.
          security_groups = [aws_security_group.sg.name]
        
          tags = {
            Name        = "${var.env_name}-Web-Server"
            Environment = var.env_name
          }
        }
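
        The module above references var.env_name, var.ami_id, and var.instance_type, and the environment configurations read module.web_server.public_ip, so the module also needs variable declarations and an output (not shown in this article). A sketch consistent with those usages:

```hcl
# modules/ec2_instance/variables.tf
variable "env_name" {
  description = "Environment name"
  type        = string
}

variable "instance_type" {
  description = "EC2 instance type"
  type        = string
}

variable "ami_id" {
  description = "AMI ID for the EC2 instance"
  type        = string
}

# modules/ec2_instance/outputs.tf
output "public_ip" {
  value = aws_instance.app_server.public_ip
}
```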

         

        File: environments/dev/main.tf

        This code represents a complete Terraform configuration (Root Module) for the Development environment. It orchestrates various infrastructure components: from state file storage (Backend) to resource instantiation (Module), and finally displaying the results.

        # TERRAFORM CONFIGURATION
        terraform {
          # Define the providers required by this configuration.
          required_providers {
            aws = {
              source  = "hashicorp/aws"
              version = "~> 5.0"
            }
          }
          # Configure Terraform to store the state file in an S3 bucket instead of locally.
          backend "s3" {
            bucket         = "terraform-state-project-9999" 
            key            = "dev/terraform.tfstate" 
            region         = "us-east-1"
            dynamodb_table = "terraform-locks" 
            encrypt        = true
          }
        }
        
        # Configure the AWS Provider to deploy resources into the US East 1 region.
        provider "aws" {
          region = "us-east-1"
        }
        
        # Instantiate the 'web_server' module using code defined in a local directory.
        # This promotes code reusability and cleaner project structure.
        module "web_server" {
          # The relative path to the module source code.
          source = "../../modules/ec2_instance"
          # Pass input variables to the module to customize its behavior.
          env_name      = "dev"
          instance_type = "t3.micro"
          ami_id        = "ami-0fa3fe0fa7920f68e"
        }
        
        # Retrieve the public IP address from the module's outputs and display it.
        output "server_ip" {
          value = module.web_server.public_ip
        }

         

        File: environments/prod/main.tf

        This code represents a complete Terraform configuration (Root Module) for the Production environment. It orchestrates various infrastructure components: from state file storage (Backend) to resource instantiation (Module), and finally displaying the results.

        # TERRAFORM CONFIGURATION
        terraform {
          # Define the providers required by this configuration.
          required_providers {
            aws = {
              source  = "hashicorp/aws"
              version = "~> 5.0"
            }
          }
          # Configure Terraform to store the state file in an S3 bucket instead of locally.
          backend "s3" {
            bucket         = "terraform-state-project-9999" 
            key            = "prod/terraform.tfstate" 
            region         = "us-east-1"
            dynamodb_table = "terraform-locks" 
            encrypt        = true
          }
        }
        # Configure the AWS Provider to deploy resources into the US East 1 region.
        provider "aws" {
          region = "us-east-1"
        }
        
        # Instantiate the 'web_server' module using code defined in a local directory.
        # This promotes code reusability and cleaner project structure.
        module "web_server" {
          # The relative path to the module source code.
          source = "../../modules/ec2_instance"
          # Pass input variables to the module to customize its behavior.
          env_name      = "prod"
          instance_type = "t3.micro"
          ami_id        = "ami-0fa3fe0fa7920f68e"
        }
        # Retrieve the public IP address from the module's outputs and display it.
        output "server_ip" {
          value = module.web_server.public_ip
        }
        

         

        Full source code: https://github.com/ivc-phampbt/terraform-project

        III. Run the project to provision resources on the AWS Cloud.

        • Proceed to provision the Terraform backend resources.
        taipham@Tais setup-backend % pwd
        /Users/taipham/Desktop/projects/terraform/setup-backend
        taipham@Tais setup-backend % terraform init
        taipham@Tais setup-backend % terraform apply
        

         

        • Initializing the Development Environment (Proceed similarly for Production)
        taipham@Tais dev % pwd  
        /Users/taipham/Desktop/projects/terraform/environments/dev
        taipham@Tais dev % terraform init
        taipham@Tais dev % terraform apply
        

         

        • Result on AWS: Two EC2 instances were created for the production and development environments

        • Check the S3 bucket for the stored state files: dev/terraform.tfstate for Development and prod/terraform.tfstate for Production.

        Conclusion

        The project successfully transitioned to a professional IaC architecture by implementing the Module pattern and deploying a secure Remote Backend on AWS.

        Key Achievements:

        • High Reusability: Infrastructure logic was encapsulated into Terraform Modules, ensuring consistency between Dev and Prod environments.

        • Safety & Collaboration: Implemented a Remote Backend using S3 (for state storage) and DynamoDB (for state locking), ensuring secure team collaboration.

        • Separation: The project structure clearly isolates Dev/Prod environments, confirming that each state file is independently managed and verified.


        [References]

        https://developer.hashicorp.com/terraform

        TECH

        December 3, 2025

        What is Terraform?

        I. What is Terraform?

        Terraform is an Infrastructure as Code (IaC) tool by HashiCorp that allows you to manage infrastructure (such as virtual machines, networks, databases, etc.) by writing code instead of performing manual operations. You write configuration files—typically in HashiCorp Configuration Language (HCL)—and run Terraform commands to create, modify, or destroy resources across various cloud platforms (like AWS, Azure, GCP).

        II. Basic Terraform Workflow.

        The core Terraform workflow typically consists of 4 main steps:

        1. Write Configuration.

        • You create files with the .tf extension (e.g., main.tf, variables.tf).
        • In these files, you declare Providers (service providers, e.g., aws, azurerm) and Resources (infrastructure objects, e.g., EC2 instances, VPC networks).

2. Initialization (terraform init).

        • This command is run the first time within a Terraform project directory.
        • It downloads the necessary Providers so that Terraform can communicate with your cloud services.

3. Planning (terraform plan).

        • This command reads the configuration files (.tf) and compares the desired state with the actual state of the current infrastructure (which is stored in the State File – terraform.tfstate).
        • It displays a detailed execution plan showing exactly what will be added, changed, or destroyed.

4. Applying (terraform apply).

        • After you review and approve the plan, this command executes the changes on the cloud provider's actual infrastructure.
        • It updates the State File to reflect the new state of the infrastructure.

        III. Basic Terraform Commands.

Command | Purpose
terraform init | Initialize the working directory and download Providers.
terraform validate | Check the syntax of configuration files.
terraform plan | Show the plan of changes (what will happen).
terraform apply | Execute the plan; create/update resources.
terraform destroy | Destroy all resources managed by Terraform.
terraform fmt | Reformat configuration files to HCL standards.

         

        IV. State File.

         

        The State File is a critically important component of Terraform.

        • It records the current state of the infrastructure that Terraform has created and is managing.
        • It enables Terraform to determine the difference between your configuration (.tf files) and the actual state of resources in the cloud.
        • In a team environment, this file is typically stored remotely (e.g., in an S3 Bucket or Terraform Cloud) to avoid conflicts and ensure consistency.

        V. Simple Code Example: Creating an EC2 Instance on AWS.

        File: main.tf

        provider "aws" {
         region = "us-east-1"
        }
        
        resource "aws_instance" "hello" {
         ami           = "ami-0fa3fe0fa7920f68e"
         instance_type = "t3.micro"
         tags = {
           Name = "terraform1"
         }
        }

         

        - Run the terraform init command to initialize the working directory and download providers.

        taipham@Tais terraform % terraform init
        Initializing the backend...
        Initializing provider plugins...
        - Finding latest version of hashicorp/aws...
        - Installing hashicorp/aws v6.23.0...
        - Installed hashicorp/aws v6.23.0 (signed by HashiCorp)
        Terraform has created a lock file .terraform.lock.hcl to record the provider
        selections it made above. Include this file in your version control repository
        so that Terraform can guarantee to make the same selections by default when
        you run "terraform init" in the future.
        Terraform has been successfully initialized!
        You may now begin working with Terraform. Try running "terraform plan" to see
        any changes that are required for your infrastructure. All Terraform commands
        should now work.
        If you ever set or change modules or backend configuration for Terraform,
        rerun this command to reinitialize your working directory. If you forget, other
        commands will detect it and remind you to do so if necessary.

         

        - Run the terraform apply -auto-approve command to provision an EC2 instance named 'terraform1'.

        taipham@Tais terraform % terraform apply -auto-approve
        Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following
        symbols:
          + create
        Terraform will perform the following actions:
          # aws_instance.hello will be created
          + resource "aws_instance" "hello" {
              + ami                                  = "ami-0fa3fe0fa7920f68e"
              + arn                                  = (known after apply)
              + associate_public_ip_address          = (known after apply)
              + availability_zone                    = (known after apply)
              + disable_api_stop                     = (known after apply)
              + disable_api_termination              = (known after apply)
              + ebs_optimized                        = (known after apply)
              + enable_primary_ipv6                  = (known after apply)
              + force_destroy                        = false
              + get_password_data                    = false
              + host_id                              = (known after apply)
              + host_resource_group_arn              = (known after apply)
              + iam_instance_profile                 = (known after apply)
              + id                                   = (known after apply)
              + instance_initiated_shutdown_behavior = (known after apply)
              + instance_lifecycle                   = (known after apply)
              + instance_state                       = (known after apply)
              + instance_type                        = "t3.micro"
              + ipv6_address_count                   = (known after apply)
              + ipv6_addresses                       = (known after apply)
              + key_name                             = (known after apply)
              + monitoring                           = (known after apply)
              + outpost_arn                          = (known after apply)
              + password_data                        = (known after apply)
              + placement_group                      = (known after apply)
              + placement_group_id                   = (known after apply)
              + placement_partition_number           = (known after apply)
              + primary_network_interface_id         = (known after apply)
              + private_dns                          = (known after apply)
              + private_ip                           = (known after apply)
              + public_dns                           = (known after apply)
              + public_ip                            = (known after apply)
              + region                               = "us-east-1"
              + secondary_private_ips                = (known after apply)
              + security_groups                      = (known after apply)
              + source_dest_check                    = true
              + spot_instance_request_id             = (known after apply)
              + subnet_id                            = (known after apply)
              + tags                                 = {
                  + "Name" = "terraform1"
                }
              + tags_all                             = {
                  + "Name" = "terraform1"
                }
              + tenancy                              = (known after apply)
              + user_data_base64                     = (known after apply)
              + user_data_replace_on_change          = false
              + vpc_security_group_ids               = (known after apply)
              + capacity_reservation_specification (known after apply)
              + cpu_options (known after apply)
              + ebs_block_device (known after apply)
              + enclave_options (known after apply)
              + ephemeral_block_device (known after apply)
              + instance_market_options (known after apply)
              + maintenance_options (known after apply)
              + metadata_options (known after apply)
              + network_interface (known after apply)
              + primary_network_interface (known after apply)
              + private_dns_name_options (known after apply)
              + root_block_device (known after apply)
            }
        Plan: 1 to add, 0 to change, 0 to destroy.
        aws_instance.hello: Creating...
        aws_instance.hello: Still creating... [00m10s elapsed]
        aws_instance.hello: Creation complete after 18s [id=i-09233298b048820fa]
        Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
        taipham@Tais terraform % 

         

        - Verify on AWS that an instance named 'terraform1' has been successfully launched.


         

        - Delete the resources using the terraform destroy command.

        taipham@Tais-MacBook-Pro terraform % terraform destroy
        aws_instance.hello: Refreshing state... [id=i-09233298b048820fa]
        Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following
        symbols:
          - destroy
        Terraform will perform the following actions:
          # aws_instance.hello will be destroyed
          - resource "aws_instance" "hello" {
              - ami                                  = "ami-0fa3fe0fa7920f68e" -> null
              - arn                                  = "arn:aws:ec2:us-east-1:075134876480:instance/i-09233298b048820fa" -> null
              - associate_public_ip_address          = true -> null
              - availability_zone                    = "us-east-1f" -> null
              - disable_api_stop                     = false -> null
              - disable_api_termination              = false -> null
              - ebs_optimized                        = false -> null
              - force_destroy                        = false -> null
              - get_password_data                    = false -> null
              - hibernation                          = false -> null
              - id                                   = "i-09233298b048820fa" -> null
              - instance_initiated_shutdown_behavior = "stop" -> null
              - instance_state                       = "running" -> null
              - instance_type                        = "t3.micro" -> null
              - ipv6_address_count                   = 0 -> null
              - ipv6_addresses                       = [] -> null
              - monitoring                           = false -> null
              - placement_partition_number           = 0 -> null
              - primary_network_interface_id         = "eni-0a9eef084942a77ad" -> null
              - private_dns                          = "ip-172-31-64-53.ec2.internal" -> null
              - private_ip                           = "172.31.64.53" -> null
              - public_dns                           = "ec2-13-220-15-145.compute-1.amazonaws.com" -> null
              - public_ip                            = "13.220.15.145" -> null
              - region                               = "us-east-1" -> null
              - secondary_private_ips                = [] -> null
              - security_groups                      = [
                  - "default",
                ] -> null
              - source_dest_check                    = true -> null
              - subnet_id                            = "subnet-0cc7a1fa7009a4c48" -> null
              - tags                                 = {
                  - "Name" = "terraform1"
                } -> null
              - tags_all                             = {
                  - "Name" = "terraform1"
                } -> null
              - tenancy                              = "default" -> null
              - user_data_replace_on_change          = false -> null
              - vpc_security_group_ids               = [
                  - "sg-067709313e763db77",
                ] -> null
                # (9 unchanged attributes hidden)
              - capacity_reservation_specification {
                  - capacity_reservation_preference = "open" -> null
                }
              - cpu_options {
                  - core_count       = 1 -> null
                  - threads_per_core = 2 -> null
                    # (1 unchanged attribute hidden)
                }
              - credit_specification {
                  - cpu_credits = "unlimited" -> null
                }
              - enclave_options {
                  - enabled = false -> null
                }
              - maintenance_options {
                  - auto_recovery = "default" -> null
                }
              - metadata_options {
                  - http_endpoint               = "enabled" -> null
                  - http_protocol_ipv6          = "disabled" -> null
                  - http_put_response_hop_limit = 2 -> null
                  - http_tokens                 = "required" -> null
                  - instance_metadata_tags      = "disabled" -> null
                }
              - primary_network_interface {
                  - delete_on_termination = true -> null
                  - network_interface_id  = "eni-0a9eef084942a77ad" -> null
                }
              - private_dns_name_options {
                  - enable_resource_name_dns_a_record    = false -> null
                  - enable_resource_name_dns_aaaa_record = false -> null
                  - hostname_type                        = "ip-name" -> null
                }
              - root_block_device {
                  - delete_on_termination = true -> null
                  - device_name           = "/dev/xvda" -> null
                  - encrypted             = false -> null
                  - iops                  = 3000 -> null
                  - tags                  = {} -> null
                  - tags_all              = {} -> null
                  - throughput            = 125 -> null
                  - volume_id             = "vol-0cf12ff2a72cff8c0" -> null
                  - volume_size           = 8 -> null
                  - volume_type           = "gp3" -> null
                    # (1 unchanged attribute hidden)
                }
            }
        Plan: 0 to add, 0 to change, 1 to destroy.
        Do you really want to destroy all resources?
          Terraform will destroy all your managed infrastructure, as shown above.
          There is no undo. Only 'yes' will be accepted to confirm.
          Enter a value: yes
        aws_instance.hello: Destroying... [id=i-09233298b048820fa]
        aws_instance.hello: Still destroying... [id=i-09233298b048820fa, 00m10s elapsed]
        aws_instance.hello: Still destroying... [id=i-09233298b048820fa, 00m20s elapsed]
        aws_instance.hello: Still destroying... [id=i-09233298b048820fa, 00m30s elapsed]
        aws_instance.hello: Destruction complete after 32s
        Destroy complete! Resources: 1 destroyed.

         

        - The instance named 'terraform1' has been terminated.

        Conclusion

        In summary, Terraform is a powerful Infrastructure as Code (IaC) tool by HashiCorp that enables users to manage multi-cloud infrastructure (such as AWS, Azure, and GCP) through declarative HCL code rather than manual operations. Its core workflow revolves around four main steps: Writing Configuration, Initialization (terraform init), Planning (terraform plan), and Applying (terraform apply).

        A critical component of this process is the State File, which records the current status of the infrastructure to track differences between the code and the actual environment, ensuring consistency especially in team settings. As demonstrated in the practical AWS example, Terraform allows for the complete lifecycle management of resources—from provisioning an EC2 instance with terraform apply to removing it with terraform destroy—ensuring automation and precision.


        [References]

        https://developer.hashicorp.com/terraform

        TECH

        December 3, 2025

        Introducing Vite.js: The Fast Frontend Build Tool

        Building modern web apps can be slow and frustrating if your tools are not efficient. Vite.js is a modern frontend build tool that makes development faster and smoother, helping developers focus on writing code instead of waiting for builds.

        What is Vite.js?

        Vite.js is a next-generation frontend build tool created by Evan You, the developer behind Vue.js. Unlike traditional bundlers, Vite serves source files on-demand using native ES modules, which means:

        • Almost instant server start
        • Fast hot module replacement (HMR) during development
        • Optimized production builds using Rollup

        Vite works with any frontend framework, including Vue, React, Svelte, or even vanilla JavaScript.

        Key Benefits

        1. Super fast development server – no long waits.

        2. Instant hot module replacement – changes appear immediately in the browser.

        3. Optimized production build – small and efficient bundles.

        4. Framework agnostic – works with Vue, React, Svelte, or vanilla JS.

        Getting Started (Example with Vue 3)

        Here’s a quick example of using Vite with Vue 3:

        npm create vite@latest my-vue-app
        cd my-vue-app
        npm install
        npm run dev

        And a simple Vue component:

        <template>
           <h1>{{ message }}</h1>
        </template>

        <script setup>
        import { ref } from 'vue'
        const message = ref('Hello from Vite + Vue 3!')
        </script>

        With HMR, any changes to this component are instantly reflected in the browser.

        Why Choose Vite?

        Traditional bundlers like Webpack can become slow as projects grow. Vite solves this by pre-bundling dependencies and using native ESM, making it ideal for modern frontend development.
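As a concrete sketch, a minimal vite.config.js for the Vue setup above might look like this (assuming the @vitejs/plugin-vue package, which the Vue template installs by default):

```javascript
// vite.config.js — minimal sketch; assumes @vitejs/plugin-vue is installed
import { defineConfig } from 'vite'
import vue from '@vitejs/plugin-vue'

export default defineConfig({
  plugins: [vue()],
})
```

The Vue template scaffolded by `npm create vite@latest` generates an equivalent file for you, so in practice you only touch it when adding plugins or build options.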

        If you want fast iteration, easy setup, and optimized builds, Vite is a great choice for any frontend project.

        Conclusion

        Vite.js simplifies frontend development with blazing-fast startup, instant HMR, and optimized production builds. Its speed and efficiency make it a powerful tool for modern web development.

        If you're seeking a reliable, long-term partner who values collaboration and shared growth, feel free to reach out to us here: Contact ISB Vietnam

        TECH

        December 3, 2025

        SPRING BOOT SECURITY: SECURING WEB APPLICATIONS

        In modern web application development, security is essential. Spring Boot integrates seamlessly with Spring Security, providing mechanisms for authentication, authorization, and endpoint protection out-of-the-box. This section explains Spring Boot Security, how to configure it, practical examples, and its benefits.

        I. What is Spring Boot Security?

        Spring Boot Security is a powerful framework that helps you:

        • Authenticate users (Authentication)
        • Control access to resources (Authorization)
        • Protect your application against common attacks such as CSRF, XSS, and session fixation
        • Support OAuth2, JWT, Basic Auth, and Form Login

        By adding the spring-boot-starter-security dependency, Spring Boot Security can secure your application with minimal configuration.

        II. How Does Spring Boot Security Work?

        Spring Boot Security works using Filter Chains and the SecurityContext:

        • Filter Chain: Intercepts all incoming requests and applies security rules.
        • AuthenticationManager: Validates user credentials.
        • Authorization: Determines if the authenticated user has permission to access a resource.
        • PasswordEncoder: Hashes passwords for secure storage.

By default, once spring-boot-starter-security is on the classpath, all endpoints require login with the default username user and a password that is generated and printed to the console at startup.
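If you prefer fixed credentials over the generated password while experimenting, Spring Boot reads them from application.properties (the values below are placeholders):

```properties
# application.properties — override the auto-generated credentials (placeholder values)
spring.security.user.name=demo
spring.security.user.password=demo-password
```

This is only suitable for local testing; real applications should define a UserDetailsService or an external identity provider, as shown later in this article.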

        III. How to Use Spring Boot Security

        1. Add Dependency

        Maven:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-security</artifactId>
</dependency>

        Gradle:

implementation 'org.springframework.boot:spring-boot-starter-security'

         

        2. Configure Security

        You can customize security using SecurityFilterChain (Spring Boot 2.7+ / 3.x):

@Configuration
@EnableWebSecurity
public class SecurityConfig {

    @Bean
    public SecurityFilterChain filterChain(HttpSecurity http) throws Exception {
        http
            .csrf().disable()
            .authorizeHttpRequests(auth -> auth
                .requestMatchers("/admin/**").hasRole("ADMIN")
                .requestMatchers("/user/**").hasAnyRole("USER", "ADMIN")
                .anyRequest().permitAll())
            .formLogin()
            .and()
            .httpBasic();
        return http.build();
    }

    @Bean
    public PasswordEncoder passwordEncoder() {
        return new BCryptPasswordEncoder();
    }
}

         

        • /admin/** is accessible only by admins
        • /user/** is accessible by both users and admins
        • All other endpoints are public
        • Supports form login and HTTP Basic authentication

3. In-Memory Authentication Example

@Bean
public UserDetailsService users() {
    UserDetails user = User.builder()
            .username("user")
            .password(passwordEncoder().encode("password"))
            .roles("USER")
            .build();
    UserDetails admin = User.builder()
            .username("admin")
            .password(passwordEncoder().encode("admin123"))
            .roles("ADMIN")
            .build();
    return new InMemoryUserDetailsManager(user, admin);
}

         

        This is a quick way to test security without a database.

        IV. Example of Spring Boot Security in Action

        Protecting a REST API:

@RestController
@RequestMapping("/admin")
public class AdminController {

    @GetMapping("/dashboard")
    public String dashboard() {
        return "Admin Dashboard";
    }
}

         

        When accessing /admin/dashboard:

        • If not logged in → redirected to login page
• If logged in as a user without the ADMIN role → 403 Forbidden
        • If logged in as admin → access granted

        V. Benefits of Spring Boot Security

        • Secure by default: Adding the dependency provides login and basic security.
        • Flexible authentication and authorization: Role-based, permission-based, JWT, OAuth2 support.
        • Protection against common attacks: CSRF, XSS, session fixation.
        • Highly customizable: Form login, REST API security, method-level security (@PreAuthorize).
        • Easy integration with DB or OAuth2 providers: JDBC, LDAP, Keycloak, Google, Facebook.
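Method-level security, mentioned in the benefits above, can be sketched as follows. ReportService and generateAdminReport are hypothetical names for illustration, and @PreAuthorize requires @EnableMethodSecurity on a configuration class (or @EnableGlobalMethodSecurity(prePostEnabled = true) in older Spring Security versions):

```java
// Sketch only: assumes a Spring context with @EnableMethodSecurity enabled.
import org.springframework.security.access.prepost.PreAuthorize;
import org.springframework.stereotype.Service;

@Service
public class ReportService {

    // Only callers holding ROLE_ADMIN may invoke this method; other
    // authenticated callers get an AccessDeniedException (403 on the web).
    @PreAuthorize("hasRole('ADMIN')")
    public String generateAdminReport() {
        return "Admin Report";
    }
}
```

This complements URL-based rules: even if a controller forgets a request matcher, the service layer still enforces the role check.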

        VI. Conclusion

        Spring Boot Security allows you to secure web applications easily, with minimal configuration and robust features. It not only protects your endpoints but also scales for complex authentication and authorization needs, letting developers focus more on business logic rather than security boilerplate.


        [References]

        https://docs.spring.io/spring-boot/docs/current/reference/html/security.html

        https://docs.spring.io/spring-security/reference/index.html

        https://www.baeldung.com/spring-boot-security

        https://spring.io/guides/gs/securing-web/

        https://www.baeldung.com/spring-security-jwt

        TECH

        December 2, 2025

        Build an App from SharePoint with Power Apps – Part 2

        Welcome back to our series “From Zero to Digital Hero with Power Apps.”
        In the previous post, you learned what Power Apps is and why it’s a game-changer for modern workplaces.
        Now, it’s time to turn theory into action — we’ll build your very first app, using SharePoint as the data source.

Read Part 1 here: https://isb-vietnam.net/blog/tech/what-is-power-apps-starting-your-digital-journey-from-zero-part-1/

        1. Prepare Your Data in SharePoint

        Before creating the app, we need a list that stores all our information.
        Think of SharePoint lists like a smart Excel table — accessible online, collaborative, and ready to connect with Power Apps.
        Example: Employee Requests list

        How to create it:

        • Go to your SharePoint site (e.g., https://yourcompany.sharepoint.com/sites/team)

        • Click New → List → Blank list

        • Name it EmployeeRequests

        • Add columns:

          • Title – Request title (default column)

          • EmployeeName – Single line of text

          • Department – Choice: HR, IT, Finance, Sales

          • RequestDate – Date and Time

          • Status – Choice: Pending, Approved, Rejected

        *Avoid spaces in column names (e.g., use EmployeeName instead of Employee Name) to prevent syntax issues in Power Apps.

        2. Create the App Directly from SharePoint

        Now comes the fun part — generating the app automatically!

        1. Open your EmployeeRequests list in SharePoint.

        2. On the top menu, select Integrate → Power Apps → Create an app.

        3. Give your app a name, for example: Employee Request App.

        4. Wait a few moments — Power Apps will build your app with three default screens:

          • Browse Screen – View all requests

          • Detail Screen – View request details

          • Edit Screen – Create or modify a request

        Behind the scenes: Power Apps automatically reads your list structure and builds a connected interface in seconds — something that used to take hours of coding!

        3. Customize Your App

        Once the app opens in Power Apps Studio, it’s time to make it yours.

        • Change the title:
          Select the top label → Rename it to Employee Requests Management

        • Adjust the layout:
          Choose the gallery → Layout → Title and Subtitle

        • Show the right fields:

          • Title = ThisItem.EmployeeName

          • Subtitle = ThisItem.Status

        *Keep your design simple. End users prefer fewer clicks and clear labels — not fancy graphics.

        4. Add More Data Connections (Optional)

        Want to expand your app’s power?
        Connect additional data sources like Microsoft Teams, Outlook, or another SharePoint list (e.g., Departments) for lookups.

        How to do it:

        Go to Data → Add data → SharePoint → Select your site → Choose the list.

        5. Test and Share Your App

        Click Play ▶️ to test your app.

        Try these actions:

        • Create a new request

        • Update the status (Pending → Approved)

        • Delete a record

        When ready:

        1. Go to File → Save → Publish

        2. Return to SharePoint

        3. You’ll find your app under Power Apps → Apps

Share your success: Click Share to invite colleagues, or embed the app directly in your SharePoint page or Teams channel for daily use.
         

         

        Conclusion – You’ve Built Your First Power App!

        Congratulations — you’ve just created a working app straight from a SharePoint list without writing a single line of code!


        TECH

        December 2, 2025

        The Ultimate Guide to Detecting TURN Server Usage in WebRTC


        In WebRTC, connecting two peers involves finding the best path using Interactive Connectivity Establishment (ICE). This path can be direct (peer-to-peer) or relayed via a TURN server. Knowing which path your connection uses is important for monitoring performance and managing costs.

        Your connection type—direct or relayed—is determined by the selected ICE candidate.

        The Three Types of ICE Candidates

        ICE candidates are the network addresses your browser discovers to reach a remote peer:

Candidate Type | Description | Connection Path
host | Direct local IP (LAN) | Direct (Local Network)
srflx | Public IP discovered via STUN | Direct (Internet P2P)
relay | Routed through a TURN server | Relayed (TURN Server)

Tip: If the selected candidate type is relay, your connection is definitely using a TURN server.

        Step 1: Listen for ICE Candidates

        You can track all discovered candidates by listening for the icecandidate event on your RTCPeerConnection:

        peerConnection.addEventListener("icecandidate", event => {
               if (event.candidate) {
                     console.log("ICE candidate:", event.candidate.candidate);
                   // Look for "typ relay" to detect TURN usage
               }
        });
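The "typ relay" check from the comment above can be factored into a small helper. candidateType is a hypothetical name introduced here, and the sample candidate string is illustrative:

```javascript
// Extract the candidate type ("host", "srflx", "relay", ...) from a raw
// ICE candidate string, which contains "... typ <type> ..." per the SDP grammar.
function candidateType(candidateString) {
  const match = /\btyp\s+(\S+)/.exec(candidateString);
  return match ? match[1] : null;
}

// Example with a relay-style candidate line (addresses are illustrative):
const sample =
  "candidate:1 1 udp 41885439 203.0.113.7 3478 typ relay raddr 0.0.0.0 rport 0";
console.log(candidateType(sample)); // "relay"
```

Logging every candidate this way during development quickly shows whether your STUN/TURN configuration is producing the candidate types you expect.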

        Step 2: Check the Selected Candidate Pair Using getStats()

        The most reliable method is using the getStats() API. This reports the candidate pair that has been selected and successfully connected.

async function checkTurnUsage(peerConnection) {
       const stats = await peerConnection.getStats();

       stats.forEach(report => {
             // `report.selected` is non-standard (Firefox); in other browsers the
             // active pair is the succeeded pair with `nominated` set.
             if (report.type === "candidate-pair" && report.state === "succeeded" &&
                 (report.selected ?? report.nominated)) {
                   const local = stats.get(report.localCandidateId);
                   const remote = stats.get(report.remoteCandidateId);

                   if (local?.candidateType === "relay" || remote?.candidateType === "relay") {
                         console.log("Connection is using TURN (relay).");
                   } else {
                         console.log("Connection is direct (host/srflx).");
                   }
             }
       });
}

        Step 3: Continuously Monitor Path Changes

        WebRTC connections can switch ICE candidates if network conditions change. To monitor dynamically, listen for relevant events:

// When the selected candidate pair changes. RTCPeerConnection itself does not
// fire a pair-change event; where supported, listen on the underlying ICE transport.
const iceTransport = peerConnection.getSenders()[0]?.transport?.iceTransport;
iceTransport?.addEventListener("selectedcandidatepairchange", () => {
       checkTurnUsage(peerConnection);
});

        // When the connection becomes stable
        peerConnection.addEventListener("iceconnectionstatechange", () => {
               if (peerConnection.iceConnectionState === "connected" ||
                    peerConnection.iceConnectionState === "completed") {
                     checkTurnUsage(peerConnection);
               }
        });

        Summary Checklist

Action | Purpose | Key Indicator
Check ICE candidate type | Identify potential paths | host → local / srflx → direct P2P / relay → TURN
Use getStats() | Confirm the selected pair | candidateType: "relay"
Monitor events | Track dynamic changes | selectedcandidatepairchange (where supported) or iceconnectionstatechange

         

        Conclusion

        Detecting TURN server usage is crucial for optimizing WebRTC performance and controlling costs. By understanding host, srflx, and relay candidates, using getStats() to verify the selected pair, and monitoring events for changes, developers can ensure reliable, real-time connectivity. This approach helps deliver smooth, high-quality WebRTC experiences while keeping infrastructure usage efficient.

        Ready to get started?

        Contact IVC for a free consultation and discover how we can help your business grow online.
