
What We Think

Blog

Keep up with the latest in technological advancements and business strategies, with thought leadership articles contributed by our staff.
TECH

December 1, 2025

Static Analysis in CakePHP with PHP CodeSniffer

When working on a large CakePHP project with multiple developers, code style can easily become inconsistent. As a result, maintaining a clean and consistent codebase can be challenging. Fortunately, PHP CodeSniffer (PHPCS) is a simple yet powerful static analysis tool that checks whether your code follows a chosen coding standard.

1. Key Features

  • Detects coding standard violations such as wrong indentation, missing spaces, or incorrect naming.
  • Supports popular standards like PSR-2, PSR-12, and CakePHP style.
  • Generates detailed reports about which files and lines need attention.
  • Lightweight and fast — scans text files without executing code.
  • Easily integrates into CI/CD pipelines for automated style checks before merge.
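As an illustration of the CI integration point, a minimal GitHub Actions job could fail a pull request whenever violations are found (the workflow name and setup steps here are assumptions — adapt them to your own pipeline):

```yaml
# Illustrative workflow sketch: phpcs exits non-zero on violations,
# which fails the job and blocks the merge.
name: code-style
on: [pull_request]
jobs:
  phpcs:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: composer install --no-progress --prefer-dist
      - run: vendor/bin/phpcs --standard=CakePHP src/
```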

2. Benefits of Using PHPCS

  • Ensures consistent code style across the team.
  • Detects formatting issues early, before they reach production.
  • Makes code reviews faster by focusing reviewers on logic instead of style.
  • Works well with legacy code — issues can be fixed gradually.
  • Encourages good coding discipline and long-term maintainability.

3. How to Use

Step 1: Install PHPCS

composer require --dev "squizlabs/php_codesniffer=*"

Step 2: Run the Check

vendor/bin/phpcs --standard=CakePHP src/
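Optionally, the standard and target paths can be committed to a phpcs.xml ruleset at the project root, so the tool can be run as just vendor/bin/phpcs with no arguments. This sketch assumes the CakePHP standard is installed (the cakephp/cakephp-codesniffer package); the paths are illustrative:

```xml
<?xml version="1.0"?>
<ruleset name="app-code-style">
    <!-- Apply the CakePHP coding standard -->
    <rule ref="CakePHP"/>
    <!-- Paths checked when no arguments are given -->
    <file>src</file>
    <file>tests</file>
    <!-- Never scan third-party code -->
    <exclude-pattern>*/vendor/*</exclude-pattern>
</ruleset>
```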

Step 3: Output Results to a File

vendor/bin/phpcs --standard=CakePHP src/ > phpcs-report.txt

  • Optional: Use a bash script for convenience:
    #!/bin/bash
    echo "Running PHP CodeSniffer..."
    vendor/bin/phpcs --standard=CakePHP src/ > phpcs-report.txt
    echo "Done! Report saved to phpcs-report.txt"

4. Review the Output

The generated phpcs-report.txt will look like:
--------------------------------------------------------------------------------
FOUND 3 ERRORS AFFECTING 3 LINES
--------------------------------------------------------------------------------
15 | ERROR | Expected 1 space after FUNCTION keyword; 0 found
20 | ERROR | Line exceeds 120 characters; contains 134 characters
45 | ERROR | Missing doc comment for function getUserList()
--------------------------------------------------------------------------------

5. Conclusion

PHP CodeSniffer is a lightweight and effective static analysis tool for PHP projects, including CakePHP.
It helps you detect coding standard violations early, improves team consistency, and keeps your codebase clean and maintainable.

Even though PHPCS itself only reports problems (its companion tool phpcbf can fix many of them automatically), it is invaluable for:

  • Keeping code style consistent across the team
  • Reducing noise during code reviews
  • Maintaining high-quality, readable, and maintainable CakePHP projects

Whether you need scalable software solutions, expert IT outsourcing, or a long-term development partner, ISB Vietnam is here to deliver. Let’s build something great together—reach out to us today. Or click here to explore more of ISB Vietnam’s case studies.

[References]

PHP CodeSniffer (GitHub): https://github.com/squizlabs/PHP_CodeSniffer

CakePHP Coding Standards: https://book.cakephp.org/4/en/development/coding-standards.html

PSR-12 Coding Standard: https://www.php-fig.org/psr/psr-12/

https://surl.li/pmofsk (Image)

TECH

November 28, 2025

Composition API & Options API in Vue 3. Which should we choose?

In Vue 3, there are two main approaches for creating and managing components: Composition API and Options API. Each approach has its own advantages and disadvantages, and choosing between them depends on project requirements and the development team's programming style.

Options API

The Options API is the traditional approach in Vue. It focuses on defining the parts of a component through objects such as data, methods, computed, watch, and props. This approach was introduced in Vue 2.x and is still widely used in Vue 3 for maintaining compatibility and simplicity.

Features of the Options API

- Simple to manage: The parts of the component are organized into separate options (data, methods, computed, etc.), making it easy to read and understand, especially for newcomers to Vue.

- Clear and structured: The component structure is very clear, and it’s easy to separate and maintain in smaller projects.

- Compatibility with Vue 2.x: The Options API was the main way to define components in Vue 2.x and is still fully supported in Vue 3. This makes it easier for developers transitioning from Vue 2 to Vue 3.

Example with Options API:

<template>
  <div>
    <h1>{{ message }}</h1>
    <button @click="updateMessage">Update Message</button>
  </div>
</template>

<script>
export default {
  data() {
    return {
      message: "Hello from Options API"
    };
  },
  methods: {
    updateMessage() {
      this.message = "Message Updated!";
    }
  }
};
</script>

Composition API

The Composition API was introduced in Vue 3, allowing developers to break down component logic in a more flexible, function-based manner instead of using objects like in the Options API. This approach helps improve code reuse, flexibility, and maintainability, especially in large applications.

Features of the Composition API

- Easier code reuse: Logic in components can be reused easily through functions, making the codebase cleaner and easier to maintain.

- Flexibility: Composition API allows you to organize your code based on related functionality, rather than being confined to predefined options like data, methods, computed, etc.

- Ideal for large projects: Especially in complex components and managing multiple states, Composition API provides a more efficient approach.

Example with Composition API:

<template>
  <div>
    <h1>{{ message }}</h1>
    <button @click="updateMessage">Update Message</button>
  </div>
</template>

<script>
import { ref } from 'vue';

export default {
  setup() {
    const message = ref("Hello from Composition API");

    const updateMessage = () => {
      message.value = "Message Updated!";
    };

    return {
      message,
      updateMessage
    };
  }
};
</script>
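The Composition API's reuse story comes from plain functions: related state and behavior live in one function that any number of components can call. Stripped of Vue specifics, the pattern can be sketched in framework-free JavaScript (createCounter is a hypothetical name, not a Vue API):

```javascript
// Framework-free sketch of the composable pattern: related state and
// behavior are bundled in one reusable function instead of being
// scattered across data/methods options.
function createCounter(initial = 0) {
  let count = initial;
  return {
    get count() { return count; },
    increment() { count += 1; },
    reset() { count = initial; },
  };
}

// Two independent "components" reuse the same logic with separate state.
const a = createCounter();
const b = createCounter(10);
a.increment();
a.increment();
b.increment();
console.log(a.count, b.count); // 2 11
```

In real Vue code the local variable would be a ref so the template re-renders, but the reuse mechanism — calling a shared function — is exactly the same.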

Comparison Between Composition API and Options API

Feature              | Options API                                             | Composition API
---------------------|---------------------------------------------------------|-----------------------------------------------------------
Approach             | Uses objects like data, methods, computed, etc.         | Breaks logic into separate functions in setup().
Logic Reusability    | Harder to reuse logic across components.                | Easier to reuse logic across components through functions.
Complexity           | Simple and easy to apply in small projects.             | Ideal for complex projects and better for large codebases.
Code Readability     | Easy to read and understand for beginners.              | Can be harder to understand, especially in large components.
State Management     | Easy state management with data and methods.            | More flexible state management with ref and reactive.
Upgrade              | Vue 2.x uses Options API; migration is straightforward. | A Vue 3 enhancement, encouraged in Vue 3.
Lifecycle Management | Uses lifecycle hooks like mounted, created, etc.        | Uses lifecycle hooks inside setup() with a more flexible syntax.

Which should we use?

The Options API should be used for:

- Small Projects: When you have a small or simple application, the Options API is easy to use and doesn’t require much organization.

- For beginners with Vue: Those new to Vue will find the Options API easier to grasp due to its clear structure.

- Vue 2.x Compatibility: Since the Options API was the standard in Vue 2.x, it’s the best choice when migrating a Vue 2 project to Vue 3, as it’s fully supported in Vue 3.

The Composition API should be used for:

- Large and Complex Projects: The Composition API is great for large-scale applications where you need to manage complex components and states.

- Logic Reusability: When you need to reuse logic across different components, the Composition API offers a more efficient way to share code.

- TypeScript Support: The Composition API aligns better with TypeScript’s type inference and flexibility than the Options API.

Conclusion

- Options API: easy to use and best suited for smaller applications or when you’re just starting with Vue. It provides a clear and structured way to organize code and is compatible with Vue 2.x, making it ideal for projects migrating from Vue 2 to Vue 3.

- Composition API: offers more flexibility, better code reuse, and scalability, making it ideal for larger applications or when working with complex components. It also works very well with TypeScript, making it the preferred choice for projects that require strong typing and better code organization.

Although Vue 3 encourages using the Composition API, you can still mix both approaches in a project, depending on the specific situation and needs of your components.

References

https://vueschool.io/articles/vuejs-tutorials/options-api-vs-composition-api/

https://blog.logrocket.com/comparing-vue-3-options-api-composition-api/

Image source: https://www.freepik.com/free-photo/php-programming-html-coding-cyberspace-concept_17105500.htm

Ready to get started?

Contact IVC for a free consultation and discover how we can help your business grow online.

Contact IVC for a Free Consultation
TECH

November 28, 2025

From AR to XR: A Developer-Friendly Tour (with a Qt Quick 3D XR Example)


Extended Reality (XR) has quietly gone from science fiction to “just another platform” we target alongside web and mobile. If you’re already building with Qt and curious how your skills fit into AR/VR/XR, this post walks through:

  • What AR and XR actually mean (and where they came from)
  • How AR and XR differ
  • Today’s mainstream XR devices
  • Common domains where XR is used
  • Where XR is going next
  • Programming languages & engines behind XR apps
  • A small Qt Quick 3D XR sample, with explanation of each QML type

AR & XR: Definitions and a Short History

What is Augmented Reality (AR)?

Augmented Reality overlays digital content onto the physical world, in real time, and in the correct 3D position. Classic examples are:

  • Visual instructions overlaid on machinery
  • Filters and effects in mobile apps
  • Navigation arrows painted onto streets or dashboards

A commonly used definition describes AR as systems that (1) combine real and virtual content, (2) run in real time, and (3) register virtual objects in 3D with the real world.

Key historical milestones:

  • 1968 – Ivan Sutherland’s head-mounted display (often cited as the first AR/VR HMD).
  • 1990 – Tom Caudell at Boeing coins the term “augmented reality”.
  • 2000s – HUDs (head-up displays), industrial AR, early marker-based mobile AR.
  • 2010s – ARKit (Apple) and ARCore (Google) take AR to mainstream phones.
  • 2020s – Mixed reality (MR) headsets with color passthrough blur AR/VR (Meta Quest 3, Apple Vision Pro).

What is Extended Reality (XR)?

Extended Reality (XR) is an umbrella term that covers VR, AR and MR, plus anything in between. More formally, XR refers to technologies that combine or interpolate between purely virtual and purely real environments, often using a “digital twin” of the physical world.

Think of XR as the whole spectrum:

  • VR – fully virtual world
  • MR – real world plus interactive virtual objects anchored in space
  • AR – lightweight overlays, often through phones or glasses

XR lets us talk about all of these together without obsessing over the exact label.

AR vs XR: What’s Actually Different?

Short answer: AR is one point on the XR spectrum.

  • Scope
    • AR is a specific technique: augmenting reality with digital overlays.
    • XR is the category that includes AR, VR, and MR—and sometimes even “beyond human senses” visualizations (e.g., seeing radio waves or air quality as graphics).
  • Devices
    • AR often runs on phones/tablets (ARKit/ARCore) or see-through glasses.
    • XR includes fully immersive headsets (VR), mixed-reality HMDs with passthrough cameras, and more experimental smart glasses.
  • Interaction
    • AR apps may only track a surface or image target.
    • XR apps typically track head pose, hands, controllers, depth, and the room itself, supporting room-scale experiences and precise spatial interaction.

So when you build a Quest 3 or Vision Pro app with both passthrough and fully immersive modes, you’re squarely in XR. When you ship an iOS app that puts a virtual sofa on a real floor via ARKit, that’s “just” AR.

Today’s XR Device Landscape

The hardware scene changes fast, but as of now, several product families dominate:

Meta Quest line

Meta’s Quest devices (Quest 3, Quest 3S, and special bundles like the Xbox Edition) are the most widely used consumer XR headsets, offering standalone VR with color passthrough MR. The Quest 3 pairs a Snapdragon XR2 Gen 2 with high-resolution displays and color passthrough to mix virtual objects with the real world.

Apple Vision Pro

Apple Vision Pro is positioned as a “spatial computer” rather than a VR headset. It uses high-resolution micro-OLED displays, precise eye/hand tracking, and an external “EyeSight” display to show the user’s eyes to people nearby. It runs visionOS, built on iPadOS frameworks, and heavily uses ARKit and spatial UI concepts.

Samsung Galaxy XR & Android XR Ecosystem

Samsung and Google have introduced the Galaxy XR headset, powered by Android XR and Snapdragon XR2+ Gen 2, targeting both entertainment and enterprise. It supports passthrough AR, PC VR streaming, and AI-enhanced experiences via Google Gemini.

PC VR & Others

  • Valve’s long-running Index headset is being sunset in favor of a new device called Steam Frame, with higher resolution and standalone capability.
  • HTC Vive, Pico headsets, and enterprise-focused devices (HoloLens, Varjo) cover specific niches like simulation and industrial training.

Where XR Is Used Today

XR is no longer just for games. Some of the most active domains include:

  • Gaming & Entertainment – Immersive games, spatial cinema, location-based experiences, and festivals like Venice Immersive that blend film and XR storytelling.
  • Training & Simulation – Flight simulators, manufacturing procedures, emergency response training. AR is used to overlay procedures on equipment; XR puts trainees in lifelike scenarios.
  • Healthcare – Surgical planning, medical training, anatomy visualization, rehab exercises, and AR overlays during surgery.
  • Architecture, Construction & Real Estate – Walk through buildings before they exist, overlay BIM models on construction sites, or show clients “digital twins” of spaces.
  • Remote Collaboration & Productivity – Spatial whiteboards, multi-screen virtual desktops (e.g., Windows 11 remote desktop in Quest 3), and 3D data exploration.

If you’re already building remote monitoring, control panels, or dashboards on PC and XR, you’re essentially working in this “spatial productivity” space.

The Future of XR Applications

XR’s near-future trajectory looks something like this:

  • More Mixed Reality, Less “Blind VR”
    Color passthrough and room understanding (depth, spatial mapping) make headsets usable as AR devices at home and at work. Quest 3, Vision Pro, and Galaxy XR are all designed for everyday passthrough use.
  • Slimmer Hardware & Smart Glasses
    Research devices and early smart-glasses prototypes (including Android XR-powered glasses) hint at lighter, glasses-like form factors for notifications, translation, and contextual help.
  • AI-Powered Spatial Experiences
    On-device AI (Gemini, Apple’s on-device models, etc.) will turn XR into an always-on assistant that understands your environment: recognizing objects, transcribing conversations, summarizing documents pinned in space, and more.
  • Deeper Vertical Integration
    Expect more specialized XR apps: surgical guidance, industrial digital twins, spatial cinema, and educational content with strong domain knowledge, not just generic demos.

The devices are finally good enough that the bottleneck is shifting from hardware to content and UX—which is where frameworks like Qt, Unity, and Unreal come in.

Programming Languages & Engines for XR

XR apps usually sit on top of an engine or framework. Under the hood, several languages dominate:

  • C#
    • Primary language for Unity, historically the most popular engine for VR/AR games and experiences.
    • Widely used for Quest, PC VR, mobile AR (via AR Foundation), and many indie XR projects.
  • C++
    • Core language of Unreal Engine and many in-house engines.
    • Used when you need maximum performance or deep engine customization.
    • Also the foundation for many XR runtimes (OpenXR implementations, device SDKs).
  • Swift / Objective-C
    • For iOS, iPadOS, and visionOS apps using ARKit and RealityKit / Reality Composer Pro.
    • Swift + SwiftUI / RealityKit is the primary stack for Apple Vision Pro.
  • Java / Kotlin
    • For Android-level XR / ARCore integrations, especially when you need tight control over camera, sensors, or Android services.
  • JavaScript / TypeScript
    • For WebXR in browsers and frameworks like three.js, Babylon.js, and A-Frame.
    • Great for lightweight experiences or quick prototypes.
  • C++/QML (Qt)
    • With Qt Quick 3D XR, you can write cross-platform XR apps in QML with C++ backends, reusing your existing Qt skills.
  • Python, Lua, etc.
    • Common in tooling, content generation, prototyping, and scripting inside some engines.

If Qt is already part of your stack, C++/QML is a natural fit for XR dashboards, remote monitoring tools, and 3D control panels.

A Minimal Qt Quick 3D XR Example

Let’s finish with a simple, self-contained Qt Quick 3D XR scene that you can adapt.

Goal: Show how to:

  • Use XrView as the root (instead of Window + View3D)
  • Define a floor-based reference space
  • Place a cube for a table in front of the user
  • Visualize the right controller: a cylinder “ray stick” and a handle that follow the user’s hand

This follows the structure recommended in Qt’s own XR examples but is simplified for clarity.

main.qml

// main.qml

// Minimal Qt Quick 3D XR scene

import QtQuick
import QtQuick.Controls

import QtQuick3D
import QtQuick3D.Helpers
import QtQuick3D.Xr

XrView {
        id: xrView

        xrOrigin: theOrigin

        environment: SceneEnvironment {
              id: sceneEnvironment
              lightProbe: Texture {
                    textureData: ProceduralSkyTextureData {
                    }
              }
              antialiasingMode: SceneEnvironment.MSAA
              antialiasingQuality: SceneEnvironment.High
              backgroundMode: SceneEnvironment.Color
              clearColor: "skyblue"
              probeHorizon: 0.5
        }

        DirectionalLight {
              eulerRotation.x: -30
              eulerRotation.y: -70
        }

        XrOrigin {
              id: theOrigin
              z: 100

              XrController {
                    id: rightController
                    controller: XrController.ControllerRight
                    poseSpace: XrController.AimPose

                    Node {
                          id: rayStick
                          property real length: 50

                          z: -length/2
                          Model {
                                eulerRotation.x: 90
                                scale: Qt.vector3d(0.02, rayStick.length/100, 0.02)
                                source: "#Cylinder"
                                materials: PrincipledMaterial { baseColor: "green"}
                                opacity: 0.5
                          }
                    }

                    Node {
                          id: rayHandle
                          z: 5
                          Model {
                                eulerRotation.x: 90
                                scale: Qt.vector3d(0.05, 0.10, 0.05)
                                source: "#Cylinder"
                                materials: PrincipledMaterial {
                                      baseColor: "black"
                                      roughness: 0.2
                                }
                          }
                    }
              }
        }

        Model {
              id: floor
              source: "#Rectangle"
              eulerRotation.x: -90
              scale: Qt.vector3d(5,5,5)
              materials: [ PrincipledMaterial {
                          baseColor: "green"
                    }
              ]
        }

        Model {
              id: table
              property real height: 70
              position: Qt.vector3d(0, height / 2, 0)
              source: "#Cube"
              scale: Qt.vector3d(3, height / 100, 1)
              materials: PrincipledMaterial {
                    baseColor: "#554433" //"brown" color
              }
        }
}

Running the sample renders a green floor, a brown table in front of the user, and a translucent green ray attached to the right controller.

What each piece does

  • XrView
    • Replaces View3D as the entry point for XR scenes.
    • Handles connection to the XR runtime (OpenXR, visionOS, etc.), head tracking, and multi-view rendering.
  • SceneEnvironment
    • Controls background, lighting model, tonemapping, etc.
    • A procedural sky (ProceduralSkyTextureData) is used as a light probe, so the lighting looks more natural.
    • MSAA antialiasing improves edge quality.
    • Background is a solid sky blue color.
    • probeHorizon adjusts how the sky lighting fades near the horizon.
  • XrOrigin
    • Defines the origin of tracked space; controllers and hands are positioned relative to this.
    • In typical room-scale setups, the origin is near the center of the play area at floor height.
    • Setting z: 100 means: the player’s origin is 100 units “forward” (along +Z) relative to your global coordinates (or vice versa, depending on how you think about your scene).
  • DirectionalLight
    • Simple “sunlight”. The eulerRotation angles position the light above and in front of the user.
  • Model for floor & table
    • #Rectangle, #Cube, and #Cylinder are built-in primitive meshes in Qt Quick 3D.
    • We scale and rotate a rectangle to act as the floor, and scale a cube into a table placed in front of the user.
  • XrController: representing the right-hand controller & the ray
    • Represents a tracked controller or hand pose in 3D.
    • controller: XrController.ControllerRight selects the right-hand device; poseSpace: XrController.AimPose tracks the “aim” ray used for pointing.
    • Because XrController inherits from Node, the child Model (ray stick as Cylinder) automatically follows the controller’s position and orientation, acting as a visual marker for your hand/controller.
      • rayStick is a helper Node that draws a long, thin cylinder to visualize the pointing ray.
      • length controls how long the ray should be in front of the controller.
      • The Model:
        • Uses the built-in #Cylinder mesh.
        • Is rotated 90° in X so it extends along local Z.
        • Is scaled to be very thin and long; length sets its size.
        • Semi-transparent green → a typical laser pointer look.
      • z: -length/2 shifts the cylinder so its base sits near the controller and extends forward.
  • Floor and table: static world geometry
      • These are not children of XrOrigin, so they’re placed in world coordinates.
      • The floor uses the built-in #Rectangle mesh, rotated -90° in X so it lies flat in the XZ plane (Y up), scaled to make a larger surface (5×5).
      • The table uses the built-in #Cube mesh, scaled into a tabletop slab and centered at half its height above the floor.
      • Simple materials: green for the floor, brown for the table.

From here, it’s a small step to real applications: attaching QML panels, ray-picking boards, and synchronizing transforms across multiple devices.

Conclusion: XR Is Just Another Platform You Can Own

AR and XR can sound buzzword-heavy, but at this point they’re really just another runtime environment for the same core skills you already use: 3D thinking, good UX, and solid engineering.

We saw how:

• AR sits in the real world, overlaying digital content on reality.
• XR is the bigger spectrum that includes AR, VR, and mixed reality in between.
• Today’s devices (Quest, Vision Pro, Galaxy XR, PC VR, etc.) are powerful enough that the hardware is no longer the main bottleneck—content and UX are.
• XR is already transforming domains like training, healthcare, architecture, remote collaboration, and productivity, not just games.

On the tooling side, engines like Unity and Unreal dominate classical game-style XR, but they’re not the only option. If you come from the Qt world, Qt Quick 3D XR lets you reuse your C++/QML skills to build spatial apps: control panels, dashboards, visualizations, multi-screen workspaces, and “serious” tools that live in 3D space. The small sample we walked through—XrView, XrOrigin, controllers, simple models—is already enough to:

• Render 3D content in a headset
• Track the user’s head and hands/controllers
• Start experimenting with interaction, locomotion, and UI panels in 3D

The big shift isn’t so much technical as mental: instead of “windows and tabs”, you’re placing objects, information, and tools in a room around the user. Once you accept that, all your existing experience with state management, networking, UI architecture, and performance suddenly becomes extremely valuable in XR.

If you’re already comfortable with Qt and 3D, you’re closer to XR than you might think. Start with a tiny scene, add one or two interactive elements, and iterate. The step from “Qt desktop app” to “Qt XR app” is no longer a leap—it’s just your next branch.


References:

https://en.wikipedia.org/wiki/Augmented_reality

https://www.g2.com/articles/history-of-augmented-reality

https://en.wikipedia.org/wiki/Extended_reality

https://vr-compare.com/headset/metaquest3

https://www.apple.com/newsroom/2023/06/introducing-apple-vision-pro/

https://www.wired.com/story/samsung-galaxy-xr-gemini-android-xr-mixed-reality-headset

https://www.theguardian.com/film/2025/aug/27/venice-film-festival-extended-reality-cinema-vr

https://www.index.dev/blog/top-programming-languages-ar-vr-game-development

https://felgo.com/doc/qt/qt-quick-3d-xr/

https://doc.qt.io/qt-6/qtquick3d-xr-simple-example.html

Image source:

https://www.ourfriday.co.uk/image/cache/catalog/Oculus/oculus-3-7-800x800w.jpg

https://media.wired.com/photos/647e2a2040f1b0ff578445a2/3:2/w_1920,c_limit/Apple-Vision-Pro-Gear.jpg

https://www.ourfriday.co.uk/image/cache/catalog/Oculus/oculus-3-3-800x800.jpg

https://e3.365dm.com/23/06/1600x900/skynews-apple-headset_6179275.jpg?20230605202646

TECH

November 28, 2025

On C++26's Reflection

Introduction

The C++26 standard is adding compile-time reflection to the language. This new feature enables C++ programs to inspect types, data members, and other program entities during compilation. As a result, developers can write more generic and less repetitive code for tasks such as serialization, validation, and code generation.

This article provides an overview of C++26 reflection, as currently supported in Clang’s experimental branches, and presents a practical example: serializing templated structures to JSON with minimal boilerplate.

What is Reflection?

Reflection refers to a language feature that allows a program to inspect or manipulate its own structure—such as types, data members, functions, or other entities—during program execution or compilation. This capability is widely available in languages like Java and C#, where programs can query and interact with type information at runtime (runtime reflection).

Historically, standard C++ has not provided built-in reflection. Developers have often relied on macros, manual coding, or third-party libraries to work around this limitation. As a result, tasks like serialization, validation, and automatic code generation have typically required repetitive boilerplate or external tools.

C++26 introduces compile-time reflection, which provides access to type and member information while the code is being compiled, with no runtime overhead. This approach enables the generation of highly generic and maintainable code for a wide range of metaprogramming scenarios.

The feature was introduced in paper P2996R13 and was voted into C++26 on June 25, 2025.

Using C++26 Reflection with Clang

At the time of writing, C++26 reflection is available in experimental branches of Clang, for example, Bloomberg's clang-p2996.

The core syntax involves:

• Including <experimental/meta>.
• Using the ^^ operator to reflect on a type.
• Enumerating members with utilities like nonstatic_data_members_of.
• "Splicing" members into code with obj.[:m:] syntax.

Example 1: Enumerating member names and values with reflection

Suppose we have the following two structures and wish to print their members’ names and values:

struct Point { int x = 1; int y = 2; };
struct Person { std::string name = "Bob"; int age = 30; };

Without reflection, one would probably have to write:

#include <iostream>
#include <string>

struct Point { int x = 1; int y = 2; };
struct Person { std::string name = "Bob"; int age = 30; };

int main() {
    Point pt;
    Person person;

    std::cout << "x: " << pt.x << std::endl;
    std::cout << "y: " << pt.y << std::endl;

    std::cout << "name: " << person.name << std::endl;
    std::cout << "age: " << person.age << std::endl;
}

With reflection, it is possible to write a generic print_members() that works for any struct - no manual edits are needed if you add, remove, or change fields.

#include <experimental/meta>
#include <iostream>
#include <string>

template <typename T>
void print_members(const T& obj) {
    constexpr auto ctx = std::meta::access_context::current();
    template for (constexpr auto member :
        std::define_static_array(nonstatic_data_members_of(^^T, ctx))) {
        std::cout << identifier_of(member) << ": " << obj.[:member:] << std::endl;
    }
}

struct Point { int x = 1; int y = 2; };
struct Person { std::string name = "Bob"; int age = 30; };

int main() {
    Point pt;
    Person person;

    print_members(pt);
    print_members(person);
}

The above code yields:

x: 1
y: 2
name: Bob
age: 30

        Example 2: JSON Serialization of Structures

        Below is a single-file example using Clang’s <experimental/meta> extension for C++26 reflection. The code provides a function to serialize any struct (with appropriate members) to a JSON string.

        The function is then called on several different structures, including the two structure types (Point and Person) in the previous section and an additional User struct with two public and one private field

        #include <experimental/meta>
        #include <iostream>
        #include <string>
        #include <type_traits>
        
        template <typename T>
        std::string generate_json_str(const T& obj) {
            std::string json = "{";
            constexpr auto ctx = std::meta::access_context::current();
            bool first = true;
        
            template for (constexpr auto m :
                std::define_static_array(nonstatic_data_members_of(^^T, ctx))) {
                if (!first) json += ", ";
                first = false;
                json += "\"";
                json += identifier_of(m);
                json += "\": ";
                // Add quotes for string members
                if constexpr (std::is_same_v<decltype(obj.[:m:]), std::string>) {
                    json += "\"";
                    json += obj.[:m:];
                    json += "\"";
                } else {
                    json += std::to_string(obj.[:m:]);
                }
            }
            json += "}";
            return json;
        }
        
        struct Point { int x = 1; int y = 2; };
        struct Person { std::string name = "Bob"; int age = 30; };
        
        struct User {
            int id;
            std::string name;
        private:
            double balance;
        public:
            User(int i, std::string n, double b)
                : id(i), name(std::move(n)), balance(b) {}
        };
        
        int main() {
            Point point;
            Person person;
            User user{1, "Alice", 123.45};
        
            std::cout << "JSON of point: " << generate_json_str(point) << std::endl;
            std::cout << "JSON of person: " << generate_json_str(person) << std::endl;
            std::cout << "JSON of user: " << generate_json_str(user) << std::endl;
        }
        

        Outputs:

        JSON of point: {"x": 1, "y": 2}
        JSON of person: {"name": "Bob", "age": 30}
        JSON of user: {"id": 1, "name": "Alice"}

        Note that the private balance field of User is absent from the output: nonstatic_data_members_of(^^T, ctx) returns only the members that are accessible from the caller's access context.
        

        Outro

        This article has demonstrated only a small subset of the potential applications for C++26’s reflection facilities. The key takeaway is that compile-time reflection enables the creation of efficient and reusable code, with strong type safety enforced at compile time. Although some of the new syntax may appear complex at first glance, its use quickly becomes practical with familiarity.

        As compiler and library support matures, compile-time reflection is likely to simplify and streamline many codebases and tooling workflows in the C++ ecosystem.

        Ready to get started?

        Contact IVC for a free consultation and discover how we can help your business grow online.

        Contact IVC for a Free Consultation
        TECH

        November 28, 2025

        AngularJS & Angular: How to Run Them Together

        I. Background – Why Run Two Versions at Once

        AngularJS (1.x) was once a very popular front-end framework, and many applications built with it still run smoothly today.
        As technology evolves, teams want to move to modern Angular (2+) for its TypeScript support, cleaner architecture, better tools, and long-term maintenance.
        However, rewriting a large AngularJS project from scratch can be time-consuming and risky.
        That’s why many developers choose to run AngularJS and Angular together in a hybrid setup — this approach saves time and costs while still ensuring an effective migration process and keeping the system running normally.

        II. The Official Tool – @angular/upgrade

        To make AngularJS and Angular work together, the Angular team released an official package called @angular/upgrade.
        It acts as a bridge between the two frameworks, allowing them to share the same DOM, services, and data.

        You can install it easily:

          npm install @angular/upgrade

        With this tool, you can:

        • Start (bootstrap) both frameworks at the same time.
        • Use Angular components inside AngularJS (downgrade).
        • Use AngularJS services inside Angular (upgrade).
        • Let both frameworks communicate smoothly in one app.

        This is an official and stable migration solution, fully supported by the Angular team — not a workaround or a temporary solution.

        III. Step-by-Step Implementation


        Step 1: Bootstrap Both Frameworks

        In your main entry file, initialize Angular and AngularJS to run together:
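        A typical hybrid bootstrap, sketched here following the @angular/upgrade/static API (the AngularJS module name "legacyApp" is an illustrative placeholder, not from this article), looks like this:

```typescript
// app.module.ts
import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { UpgradeModule } from '@angular/upgrade/static';

@NgModule({
  imports: [BrowserModule, UpgradeModule],
})
export class AppModule {
  constructor(private upgrade: UpgradeModule) {}

  // Instead of letting Angular bootstrap itself, hand control to
  // UpgradeModule so it can bootstrap the AngularJS app on the same DOM.
  ngDoBootstrap() {
    this.upgrade.bootstrap(document.body, ['legacyApp'], { strictDi: true });
  }
}
```

        main.ts then bootstraps AppModule with platformBrowserDynamic().bootstrapModule(AppModule) as usual; this is framework wiring, so it only runs inside an Angular project.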

        TECH

        November 28, 2025

        Spring Boot Auto Configuration: Simplifying Configuration for Developers

        In the world of modern web application development, configuration can be a tedious and time-consuming task, especially when it comes to setting up various services, databases, and libraries. One of the standout features of Spring Boot is its auto configuration, which significantly simplifies this process. Let’s dive into what auto configuration is and how it can improve your development workflow.

         

        I. What is Auto Configuration in Spring Boot?

        Auto Configuration is one of the most powerful features of Spring Boot. It’s designed to automatically configure application components based on the libraries that are present in the classpath. Spring Boot’s auto-configuration mechanism attempts to guess and set up the most sensible configuration based on the environment and the dependencies that you are using in your application.

        For example, if you are using Spring Data JPA in your application, Spring Boot will automatically configure the EntityManagerFactory, a datasource, and other required beans based on the properties you define in your application.properties file. This significantly reduces the amount of manual configuration and setup.

         

        II. How Does Auto Configuration Work?

        Spring Boot uses the @EnableAutoConfiguration annotation (included by default via @SpringBootApplication) to enable this feature. It tells Spring Boot to locate the auto-configuration classes available on the classpath and apply the ones whose conditions are met.

        Here’s how it works:

        • Conditions for Auto Configuration: Auto Configuration works based on conditions defined in Spring Boot. For instance, if the application has a particular library in the classpath, it triggers the corresponding auto configuration. If the library isn’t found, that specific auto configuration is skipped.
        • Spring Boot Starter Projects: These starters (e.g., spring-boot-starter-web, spring-boot-starter-data-jpa) automatically bring in the right dependencies and configurations for your application, reducing the need to manually configure each component.

         

        III. How to Use Auto Configuration

        You don’t have to do anything special to use auto configuration. It is enabled by default in any Spring Boot application. All you need to do is add the right dependencies to your pom.xml (if using Maven) or build.gradle (if using Gradle). For example:

        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-data-jpa</artifactId>
        </dependency>

        Once you add the dependency, Spring Boot will automatically configure the necessary beans, and you can start using them right away without having to manually configure a DataSource, EntityManagerFactory, or TransactionManager.

         

        IV. Example of Auto Configuration in Action

        Let’s look at an example of auto-configuration when working with Spring Data JPA:

         1. Add Dependency: In your pom.xml file, add the following Spring Boot starter dependency for JPA.

        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-data-jpa</artifactId>
        </dependency>

         

         2. Configure Application Properties: Set up your database connection in application.properties.

        spring.datasource.url=jdbc:mysql://localhost:3306/mydb
        spring.datasource.username=root
        spring.datasource.password=root
        spring.jpa.hibernate.ddl-auto=update

         

        3. Use Auto-configured Beans: Spring Boot will automatically configure the DataSource, EntityManagerFactory, and other necessary JPA components, and you can start creating your repositories and entities without needing additional configuration.

        @Entity
        public class User {
            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            private Long id;

            private String name;
        }

        public interface UserRepository extends JpaRepository<User, Long> {
        }

         

        V. Benefits of Auto Configuration

        • Reduced Boilerplate Code: Auto configuration eliminates the need for repetitive setup code for components such as databases, message brokers, and web services. You can focus on writing business logic instead of managing configurations.
        • Faster Setup: With auto configuration, your Spring Boot application is ready to run with minimal configuration effort. You don’t need to spend time wiring up individual components and dependencies.
        • Adaptable Configuration: If you ever need to modify the auto-configuration, you can override the defaults with your own configurations or even disable specific configurations if needed.
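        If a default ever gets in the way, one common escape hatch is the exclude attribute of @SpringBootApplication. The sketch below (the class name MyApplication is illustrative) turns off only the DataSource auto-configuration while keeping everything else:

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.autoconfigure.jdbc.DataSourceAutoConfiguration;

// Opt out of one specific auto-configuration while keeping all others.
@SpringBootApplication(exclude = { DataSourceAutoConfiguration.class })
public class MyApplication {
    public static void main(String[] args) {
        SpringApplication.run(MyApplication.class, args);
    }
}
```

        The same effect can also be achieved declaratively with the spring.autoconfigure.exclude property in application.properties. (Configuration fragment; it requires Spring Boot on the classpath to run.)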

         

        VI. Conclusion

        Auto Configuration is one of the reasons why Spring Boot has become such a popular framework for Java developers. By automating the setup and configuration of various components, Spring Boot makes it easier to build and maintain complex applications. Its intelligent defaults allow you to spend more time developing your application and less time on configuration.

         

        Whether you need scalable software solutions, expert IT outsourcing, or a long-term development partner, ISB Vietnam is here to deliver.

        Let's build something great together. Reach out to us today, or explore more of ISB Vietnam's case studies.


         

        TECH

        November 28, 2025

        Optimizing C++ with Move Semantics

        In the world of C++ programming, performance is always a key factor. The language gives us fine-grained control over how data is allocated, copied, and freed. However, before C++11, handling large data structures often led to a serious issue: unnecessary copying. This inefficiency was one of the driving forces behind the introduction of Move Semantics.

        The Problem Before Move Semantics

        Imagine you have a std::vector<int> containing millions of elements. If you write a function that returns this vector, a pre-C++11 compiler would in general create a full copy of it on return (unless return-value optimization happened to apply), because the only available mechanism was the copy constructor.

        This led to:

        • Huge performance costs (copying each element one by one).

        • Significant memory overhead (temporarily storing two copies).

        Example:

        For large data, this copy is not only wasteful but completely unnecessary since v is a temporary object that will be destroyed right after the function ends.

        What is Move Semantics?

        Move semantics is a mechanism introduced in C++11 that allows transferring resources instead of copying them. Instead of duplicating memory or file handles, the program simply transfers ownership of those resources from one object to another.

        In short:

        • Copy: makes a deep copy of data → slower.

        • Move: transfers ownership of data → much faster.

        Example:

        Here, b “steals” the memory buffer from a. After the move, a becomes empty but remains valid. No expensive copy takes place.

        How It Works

        Move semantics relies on the move constructor and move assignment operator, which are declared as:

        The T&& type is an rvalue reference, which binds to temporary objects. This allows the compiler to safely transfer resources instead of duplicating them.

        Example:

        Output:

        Time (with copy): 0.0082704 seconds

        Time (with move): 0.0000031 seconds

        Real Benefits

        1. Performance boost: no deep copy of large data.

        2. STL integration: all standard containers (std::vector, std::string, std::map, etc.) support move semantics.

        3. Essential for smart pointers: std::unique_ptr relies on move semantics to transfer ownership safely.

        Benchmarking often shows:

        • Copying may take 0.1 seconds for large objects.

        • Moving takes only tens of microseconds (0.00003 seconds).

        That’s a performance difference of several orders of magnitude.

        When to Use Move

        • When dealing with temporary objects.

        • When you need to transfer ownership rather than keep multiple copies.

        • When optimizing code that manages large data structures.

        Keep in mind: move semantics doesn’t replace copying — it gives you an additional, more efficient option.

        Conclusion

        Move semantics is one of the biggest advancements introduced in C++11. It makes C++ more modern and efficient without losing the low-level control that developers value. By mastering this feature, you can write code that is faster, leaner, and safer.

        If you were ever worried about returning large objects by value, worry no more. Since C++11, compilers prefer moving over copying, and you can explicitly use std::move to enforce it when needed.


        TECH

        November 28, 2025

        How to Use Asynchronous Methods in a Spring Boot Application

        In Spring Boot applications, asynchronous processing helps increase system performance and responsiveness, especially when performing time-consuming tasks such as sending emails, logging, or executing external APIs.

        Instead of waiting for a task to complete, we can let it run in parallel in the background, allowing the main thread to continue processing other requests.
        Spring Boot supports this very simply through the @Async annotation.

         

        I. How to enable asynchronous in Spring Boot

        To use asynchronous methods, you first need to enable the feature in your project with the @EnableAsync annotation:

            import org.springframework.context.annotation.Configuration;
            import org.springframework.scheduling.annotation.EnableAsync;

            /**
             * The class AsyncConfig.
             */
            @Configuration
            @EnableAsync
            public class AsyncConfig {
            }

        This annotation tells Spring Boot that you want to use an asynchronous mechanism in your application.

        II. How to use the @Async

        Assume that you have a service that sends emails.
        If you send it synchronously, the request will have to wait for the email to be sent before responding — this is inefficient.
        We can improve this by adding @Async:

            @Service
            public class EmailService {
                @Async
                public void sendEmail(String recipient) {
                    System.out.println("Sending email to: " + recipient);
                    try {
                        Thread.sleep(5000); // Simulate a long-running task
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                    System.out.println("Finished sending email to: " + recipient);
                }
            }

        The controller can execute this service without waiting.

            @RestController
            @RequestMapping("/email")
            public class EmailController {
                private final EmailService emailService;

                public EmailController(EmailService emailService) {
                    this.emailService = emailService;
                }

                @PostMapping("/send")
                public String send(@RequestParam String recipient) {
                    emailService.sendEmail(recipient);
                    return "Request processed!";
                }
            }

        When the user calls the "/email/send" API, the response is returned immediately while the email sending continues on a background thread.

         

        III. Asynchronous processing with return values

        If you need to get a result back from an asynchronous method, you can return a CompletableFuture:

            @Async
            public CompletableFuture<String> fetchUserData() throws InterruptedException {
                Thread.sleep(2000);
                return CompletableFuture.completedFuture("User data");
            }

        You can call .get() on the future, or process it further with .thenApply(), .thenAccept(), and so on, depending on your needs.
        To merge multiple asynchronous tasks, use CompletableFuture.allOf(). Assume you have two tasks that must run in parallel, with execution continuing only once both have completed:

            @Async
            public CompletableFuture<String> getUserInfo() throws InterruptedException {
                Thread.sleep(2000);
                return CompletableFuture.completedFuture("User Info");
            }

            @Async
            public CompletableFuture<String> getOrderHistory() throws InterruptedException {
                Thread.sleep(3000);
                return CompletableFuture.completedFuture("Order History");
            }

            public String mergeAsyncTasks() throws Exception {
                CompletableFuture<String> userInfo = getUserInfo();
                CompletableFuture<String> orderHistory = getOrderHistory();

                CompletableFuture.allOf(userInfo, orderHistory).join();

                return userInfo.get() + " | " + orderHistory.get();
            }

        As you can see, when mergeAsyncTasks() executes, both methods run at the same time, so the total processing time equals the duration of the longest task rather than the sum of all steps.
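        The same merge pattern can be exercised with plain java.util.concurrent, without Spring; in this sketch the class name, task delays, and supplyAsync usage are our illustration:

```java
import java.util.concurrent.CompletableFuture;

public class MergeDemo {
    static void pause(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }

    static CompletableFuture<String> getUserInfo() {
        return CompletableFuture.supplyAsync(() -> { pause(200); return "User Info"; });
    }

    static CompletableFuture<String> getOrderHistory() {
        return CompletableFuture.supplyAsync(() -> { pause(300); return "Order History"; });
    }

    public static void main(String[] args) {
        long start = System.nanoTime();
        CompletableFuture<String> userInfo = getUserInfo();
        CompletableFuture<String> orderHistory = getOrderHistory();

        // Wait until both tasks finish, then merge their results.
        CompletableFuture.allOf(userInfo, orderHistory).join();
        String merged = userInfo.join() + " | " + orderHistory.join();

        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println(merged);
        System.out.println("took ~" + elapsedMs + " ms (close to the longest task, not the sum)");
    }
}
```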

        IV. Important Note

        • @Async only takes effect when the method is called from another bean; a self-invocation within the same class bypasses Spring's proxy, so the call runs synchronously.
        • You can customize a ThreadPoolTaskExecutor to limit the number of threads, set timeouts, or bound the queue size.
        • @Async is best reserved for tasks that do not require an immediate result (sending emails, writing logs, etc.).
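        As a sketch of that executor customization (the bean name and pool sizes below are illustrative choices, not prescribed values), a dedicated executor can be declared like this:

```java
import java.util.concurrent.Executor;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.EnableAsync;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

@Configuration
@EnableAsync
public class AsyncExecutorConfig {

    // Methods annotated with @Async("taskExecutor") run on this bounded pool.
    @Bean(name = "taskExecutor")
    public Executor taskExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(4);     // threads kept alive when idle
        executor.setMaxPoolSize(8);      // upper bound under load
        executor.setQueueCapacity(100);  // tasks queued before growing the pool
        executor.setThreadNamePrefix("async-");
        executor.initialize();
        return executor;
    }
}
```

        This is a Spring configuration fragment; it only runs inside a Spring Boot application.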

        V. Summary

        @Async is a powerful and easy-to-use tool in Spring Boot to handle asynchrony.
        By simply adding @EnableAsync and setting @Async on the method that needs to run in the background, you can make your application more responsive, take advantage of multi-threading, and improve the user experience.



        TECH

        November 28, 2025

        The Engine of Modern DevOps: Jenkins and CI/CD

        If you are looking to modernize your workflow, automate your testing, or simply stop manually dragging files to a server, this guide is for you.

        What is CI/CD?

        Before diving into the tool, let's define the methodology.

        • Continuous Integration (CI): The practice of merging code changes into a central repository frequently (often multiple times a day). Each merge triggers an automated build and test sequence to detect bugs early.
        • Continuous Deployment/Delivery (CD): The extension of CI where code changes are automatically deployed to a testing and/or production environment after passing the build stage.

        Think of CI/CD as an assembly line. Instead of building a car by hand in a garage, you have a conveyor belt of robots (automation) that assemble, paint, and test the car. Jenkins is the software that controls those robots.

        Why Jenkins?

        Jenkins is an open-source automation server that enables developers around the world to reliably build, test, and deploy their software.

        • Extensibility: With over 1,800 plugins, Jenkins can integrate with almost any tool (Git, Docker, Kubernetes, Slack, Jira).
        • Pipeline as Code: You can define your entire build process in a text file (Jenkinsfile) stored alongside your code.
        • Community: As one of the oldest and most mature tools in the DevOps space, the community support and documentation are massive.

        How to Apply Jenkins to Your Project (Step-by-Step)

        Applying Jenkins to a project might seem daunting, but modern Jenkins (using "Declarative Pipelines") makes it straightforward. Here is the roadmap:

        1. The Setup

        First, you need a running instance of Jenkins. You can install it on a Linux server, run it locally via a .war file, or, most commonly, run it inside a Docker container:

        docker run -p 8080:8080 -p 50000:50000 jenkins/jenkins:lts

        2. Pipeline as Code: Jenkinsfile

        The most robust way to apply Jenkins to your project is by creating a Jenkinsfile in the root of your Git repository. This file tells Jenkins exactly what to do.

        Here is a simple example of a declarative Pipeline for a Node.js application:

        pipeline {
            agent any
            stages {
                stage('Build') {
                    steps {
                        echo 'Installing dependencies...'
                        sh 'npm install'
                    }
                }
                stage('Test') {
                    steps {
                        echo 'Running UT...'
                        sh 'npm test'
                    }
                }
                stage('Deploy') {
                    steps {
                        echo 'Deploying to server...'
                        // Add deployment scripts here
                    }
                }
            }
            post {
                always {
                    echo 'Finished!'
                }
                failure {
                    echo 'Something went wrong!'
                }
            }
        }

         

        3. Connect Jenkins to Git

        1. Go to your Jenkins Dashboard and click "New Item".
        2. Enter a project name and select "Multibranch Pipeline" (this is best practice as it can build different branches automatically).
        3. Under "Branch Sources," add your Git repository URL (GitHub, GitLab, Bitbucket).
        4. Save the project.

        Jenkins will now scan your repository. When it finds the Jenkinsfile, it will automatically trigger the build steps you defined (Build, Test, Deploy).

        4. Iterate and Optimize

        Once your pipeline is running, you can add complexity as needed:

        • Artifacts: Archive your build outputs (like .jar or .exe files) so they can be downloaded later.
        • Notifications: Use plugins to send a message to Slack or Microsoft Teams if a build fails.
        • Docker Integration: Build your app inside a Docker container to ensure a clean environment every time.

        Conclusion

        Jenkins does the heavy lifting so you can focus on writing code. By automating the boring parts of software delivery—testing, building, and deploying—you reduce human error and ship features faster. Start with this guide, get your tests running automatically, and you will wonder how you ever managed without it.


         

        TECH

        November 28, 2025

        Debugging Linux Programs with strace


        When a program on Linux is misbehaving, such as crashing, hanging, or failing silently, you may reach for gdb, logs, or guesswork. There is, however, another powerful and arguably underused tool: strace.

        strace intercepts and logs system calls made by a process, letting you peek into how it interacts with the OS. It’s simple, fast, and requires no code changes.




        What Is strace?

        strace is a program on Linux that lets you determine what a program is doing without a debugger or source code.

        Specifically, strace shows you what a process is asking the Linux kernel to do; for example, this includes file operations (such as open, read, and write), network I/O (including connect, sendto, and recvfrom), process management (like fork, execve, and waitpid), and so on.


        Why does it matter?

        strace is worth exploring because it has proven invaluable for debugging programs. Being able to see how a program interacts with the kernel can give you a basic understanding of what is going on. Below are some of the representative use cases of strace:

        • Find the config file a program tries to load
        • Find files the program depends on, such as dynamically linked libraries, root certificates, and data sources
        • Determine what happens while a program hangs
        • Do coarse profiling of syscall activity
        • Uncover hidden permission errors
        • Inspect command arguments passed to other programs

        I hope this has sparked your interest in exploring this tool further. Up next, we'll dive into installation, basic usage and common patterns.


        Installation

        sudo apt install strace     # Debian/Ubuntu
        sudo yum install strace     # CentOS/RHEL
        

        Basic Usage

        # Trace a new program
        strace ./your_program
        
        # Attach to a running process
        strace -p <pid>
        
        # Redirect output to a file
        strace -o trace.log ./your_program
        

        Common Debugging Patterns

        1. Find the config file a program tries to load

        strace -e trace=openat ./your_program
        

        Purpose: Traces file opening syscalls using openat(), helping identify missing or mislocated configuration files.

        What to look for: Look for ENOENT errors indicating missing files.

        Example output:

        openat(AT_FDCWD, "/etc/your_program/config.yaml", O_RDONLY) = -1 ENOENT (No such file or directory)
        

        Explanation: The program attempted to open a config file that does not exist.


        2. Find dynamically loaded libraries, root certificates, or data files

        strace -e trace=file ./your_program
        

        Purpose: Captures file-related syscalls (open, stat, etc.), revealing all resources accessed.

        What to look for: Locate paths to important libraries, certs, or data files.

        Example output:

        openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libssl.so.1.1", O_RDONLY) = 3
        openat(AT_FDCWD, "/etc/ssl/certs/ca-certificates.crt", O_RDONLY) = 4
        

        Explanation: These lines show successful loading of required shared libraries and certificate files.


        3. Diagnose why a program hangs (e.g., blocking on I/O)

        strace -tt -T -p <PID>
        

        Purpose: Attaches to a running process and shows syscall durations and timestamps.

        What to look for: Find syscalls with unusually long durations.

        Example output:

        12:00:01.123456 read(4, ..., 4096) <120.000123>
        

        Explanation: A read() call took 120 seconds—likely waiting for input or a blocked pipe.


        4. Detect hidden permission errors

        strace -e trace=openat ./your_program
        

        Purpose: Reveals hidden EACCES errors that applications may suppress.

        What to look for: Failed attempts with EACCES indicating permission denied.

        Example output:

        openat(AT_FDCWD, "/var/log/secure.log", O_RDONLY) = -1 EACCES (Permission denied)
        

        Explanation: The program tried to read a log file it doesn't have permission to access.


        5. Capturing inter-process communication or exec chains

        strace -f -e execve ./script.sh
        

        Purpose: Traces all execve calls and follows child processes using -f.

        What to look for: Command-line arguments, incorrect binary paths.

        Example output:

        execve("/usr/bin/python3", ["python3", "-c", "print('Hello')"], ...) = 0
        

        Explanation: A Python subprocess was launched with an inline script.


        6. Profiling high-level syscall activity

        strace -c ./your_program
        

        Purpose: Displays syscall usage summary with counts and total time.

        What to look for: Time-consuming or frequent syscalls that affect performance.

        Example output:

        % time     seconds  usecs/call     calls    syscall
        ------     -------  -----------    -----    --------
         40.12    0.120000         120       100    read
         30.25    0.090000          90        50    write
        

        Explanation: Most time was spent on read() and write() syscalls.


        7. Uncovering undocumented file access

        strace -e trace=file ./your_program
        

        Purpose: Detects unexpected or undocumented files accessed during execution.

        What to look for: Configs, caches, or plugins accessed without explicit documentation.

        Example output:

        openat(AT_FDCWD, "/usr/share/your_program/theme.conf", O_RDONLY) = 3
        

        Explanation: The program accessed a theming configuration file not mentioned in docs.


        8. Investigating network behavior

        strace -e trace=network ./your_program
        

        Purpose: Monitors networking syscalls like connect(), sendto(), and recvfrom().

        What to look for: Refused connections, DNS resolution failures, unreachable hosts.

        Example output:

        connect(3, {sa_family=AF_INET, sin_port=htons(443), sin_addr=inet_addr("93.184.216.34")}, 16) = -1 ECONNREFUSED (Connection refused)
        

        Explanation: The program attempted to connect to a server but the connection was refused.


        9. Monitoring signals or crashes

        strace -e trace=signal ./your_program
        

        Purpose: Shows received signals, useful for detecting terminations and faults.

        What to look for: Signals like SIGSEGV, SIGBUS, SIGKILL indicating a crash or forced exit.

        Example output:

        --- SIGSEGV {si_signo=SIGSEGV, si_code=SEGV_MAPERR, si_addr=0x0} ---
        

        Explanation: A segmentation fault occurred due to a null pointer access.


        10. Audit child process creation (fork, clone, exec)

        strace -f -e trace=clone,execve ./parent_program
        

        Purpose: Traces child creation and execution using clone() and execve().

        What to look for: Verify subprocess execution flow and command arguments.

        Example output:

        clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, ...) = 1234
        execve("/bin/ls", ["ls", "-la"], ...) = 0
        

        Explanation: The parent process spawned a child, which then executed ls -la.
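        With -f the log can get large, so filtering for execve gives a quick summary of every program the process tree ran. A sketch, assuming the trace was saved with `-o trace.log` (an illustrative name); note that with -f each line is prefixed with the PID:

```shell
# Count how often each program was executed in a saved trace
# (e.g. strace -f -e trace=execve -o trace.log ./parent_program).
grep 'execve("' trace.log \
  | sed 's/.*execve("\([^"]*\)".*/\1/' \
  | sort | uniq -c | sort -rn
```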


        Tip: Clean Output

        strace can be noisy. Use filters:

        strace -e trace=read,write,open,close ./prog
        

        Or output to file and search:

        strace -o log.txt ./prog
        grep ENOENT log.txt
        

        Sample Output: Running strace on curl

        To illustrate how strace works in practice, here’s a truncated real-world output of running strace on the curl command:

        strace curl https://example.com
        

        Essential system calls output:

        execve("/usr/bin/curl", ["curl", "https://example.com"], ...) = 0
        ...
        openat(AT_FDCWD, "/etc/ssl/certs/ca-certificates.crt", O_RDONLY) = 3
        socket(AF_INET, SOCK_STREAM, IPPROTO_IP) = 4
        connect(4, {sa_family=AF_INET, sin_port=htons(443), sin_addr=inet_addr("93.184.216.34")}, 16) = 0
        write(4, "GET / HTTP/1.1\r\nHost: example.com\r\n...", 96) = 96
        read(4, "HTTP/1.1 200 OK\r\n...", 4096) = 320
        ...
        close(4) = 0
        

        What this tells us:

        • execve: The initial execution of the curl binary.
        • openat: Loads root certificates to verify the HTTPS connection.
        • socket and connect: Establishes a TCP connection to the server.
        • write/read: Sends HTTP request and reads response.
        • close: Cleans up the socket connection.

        This example demonstrates how you can use strace to observe low-level behavior of everyday tools and debug or analyze them more deeply.
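        strace -c would produce this kind of tally directly, but the same frequency count can be derived from a saved raw trace, which is handy when you already have a log on disk. A sketch, assuming the trace above was saved as curl.log (an illustrative name):

```shell
# Tally syscall frequency from a saved raw trace
# (e.g. strace -o curl.log curl https://example.com).
awk -F'(' '/^[a-z_]+\(/ { print $1 }' curl.log \
  | sort | uniq -c | sort -rn | head
```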


        Recap

        Use Case          | Key Flag     | What to Look For            | Common Errors
        ------------------|--------------|-----------------------------|---------------
        Missing config    | -e openat    | ENOENT on config paths      | ENOENT
        Missing libraries | -e file      | failed open for .so, certs  | ENOENT, EACCES
        Hang detection    | -tt -T       | long syscall durations      |
        Silent failure    | -e openat    | EACCES, hidden failures     | EACCES
        Debug scripts     | -f -e execve | wrong args, missing paths   |
        Profile behavior  | -c           | dominant syscall timings    |
        Trace file access | -e file      | undocumented config/data    |
        Network issues    | -e network   | failed connect, DNS errors  | ECONNREFUSED
        Crashes/signals   | -e signal    | SIGSEGV, SIGBUS             |

        Complementary Debugging Tools

        While strace gives insight into system calls, combining it with other tools offers a fuller picture:

        lsof: List Open Files

        lsof -p <pid>
        

        See which files, sockets, and devices a process is using. Great for checking if the process is stuck on a file or network call.
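        If lsof is not installed, the /proc filesystem on Linux exposes the same raw data: every entry under /proc/<pid>/fd is a symlink to the file, pipe, or socket behind that descriptor. For example, for the current shell:

```shell
# Each entry in /proc/<pid>/fd is a symlink to an open file,
# pipe, or socket; $$ is the current shell's PID.
ls -l /proc/$$/fd
```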

        ps + top + htop: Process Status

        • ps aux shows current process states.
        • top and htop give a live view into CPU, memory, and I/O usage.
        • htop allows filtering and killing processes interactively.

        gdb: Interactive Debugging

        Use gdb if you need to inspect memory, variables, and stack traces.

        gdb ./app
        (gdb) run
        (gdb) bt  # backtrace on crash
        

        perf: Performance Analysis

        Find hotspots and performance bottlenecks.

        perf top
        perf record ./your_program
        perf report
        

        dmesg and journalctl: Kernel Logs

        Check for kernel-level errors:

        dmesg | tail
        journalctl -xe
        

        These can reveal permission denials, segmentation faults, or system-wide resource issues.


        Summary

        Problem          | What to Look For With strace
        -----------------|-----------------------------------
        Crash/Error      | Last syscalls, missing files
        Hangs/Timeout    | Long gaps between calls
        Wrong paths      | open, access, stat on wrong files
        Permission issue | EACCES in open or access calls

        strace is an indispensable tool for Linux developers. It’s not a full debugger, but often it’s the only tool you need. Combine it with tools like lsof, gdb, htop, and perf for deeper diagnosis.


        Next time your program fails silently, try this:

        strace ./your_program
        
