
What We Think

Blog

Keep up with the latest in technological advancements and business strategies, with thought leadership articles contributed by our staff.
TECH

November 28, 2025

AngularJS & Angular: How to Run Them Together

I. Background – Why Run Two Versions at Once

AngularJS (1.x) was once a very popular front-end framework, and many applications built with it still run smoothly today.
As technology evolves, teams want to move to modern Angular (2+) for its TypeScript support, cleaner architecture, better tools, and long-term maintenance.
However, rewriting a large AngularJS project from scratch can be time-consuming and risky.
That’s why many developers choose to run AngularJS and Angular together in a hybrid setup — this approach saves time and cost, allowing an incremental migration while the system keeps running normally.

II. The Official Tool – @angular/upgrade

To make AngularJS and Angular work together, the Angular team released an official package called @angular/upgrade.
It acts as a bridge between the two frameworks, allowing them to share the same DOM, services, and data.

You can install it easily:

  npm install @angular/upgrade

(The static helpers used in hybrid apps are imported from @angular/upgrade/static, an entry point inside this package rather than a separate install.)

With this tool, you can:

  • Start (bootstrap) both frameworks at the same time.
  • Use Angular components inside AngularJS (downgrade; see the sketch below).
  • Use AngularJS services inside Angular (upgrade).
  • Let both frameworks communicate smoothly in one app.

This is an official and stable migration solution, fully supported by the Angular team — not a workaround or a temporary solution.
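
For example, the "downgrade" direction from the list above can be sketched as follows; the AngularJS module name myLegacyApp and the component itself are illustrative placeholders:

    import { Component } from '@angular/core';
    import { downgradeComponent } from '@angular/upgrade/static';
    import * as angular from 'angular';

    @Component({
      selector: 'hero-detail',
      template: '<h2>Hero detail works!</h2>',
    })
    export class HeroDetailComponent {}

    // Register the Angular component as an AngularJS directive,
    // so legacy templates can use <hero-detail></hero-detail>.
    angular
      .module('myLegacyApp')
      .directive('heroDetail', downgradeComponent({ component: HeroDetailComponent }) as angular.IDirectiveFactory);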

III. Step-by-Step Implementation


Step 1: Bootstrap Both Frameworks

In your main entry file, initialize Angular and AngularJS to run together:
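
A minimal bootstrap sketch using UpgradeModule from @angular/upgrade/static; the AngularJS module name myLegacyApp is an assumption standing in for your existing app:

    import { NgModule } from '@angular/core';
    import { BrowserModule } from '@angular/platform-browser';
    import { platformBrowserDynamic } from '@angular/platform-browser-dynamic';
    import { UpgradeModule } from '@angular/upgrade/static';

    @NgModule({
      imports: [BrowserModule, UpgradeModule],
    })
    export class AppModule {
      constructor(private upgrade: UpgradeModule) {}

      // No bootstrap array: we bootstrap manually so that the AngularJS
      // app starts inside the Angular zone.
      ngDoBootstrap() {
        this.upgrade.bootstrap(document.body, ['myLegacyApp'], { strictDi: true });
      }
    }

    platformBrowserDynamic().bootstrapModule(AppModule);

Angular starts first, then boots AngularJS on the same DOM, which is what lets the two frameworks share components and services.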

TECH

November 28, 2025

Spring Boot Auto Configuration: Simplifying Configuration for Developers

In the world of modern web application development, configuration can be a tedious and time-consuming task, especially when it comes to setting up various services, databases, and libraries. One of the standout features of Spring Boot is its auto configuration, which significantly simplifies this process. Let’s dive into what auto configuration is and how it can improve your development workflow.

 

I. What is Auto Configuration in Spring Boot?

Auto Configuration is one of the most powerful features of Spring Boot. It’s designed to automatically configure application components based on the libraries that are present in the classpath. Spring Boot’s auto-configuration mechanism attempts to guess and set up the most sensible configuration based on the environment and the dependencies that you are using in your application.

For example, if you are using Spring Data JPA in your application, Spring Boot will automatically configure the EntityManagerFactory, a datasource, and other required beans based on the properties you define in your application.properties file. This significantly reduces the amount of manual configuration and setup.

 

II. How Does Auto Configuration Work?

Spring Boot uses the @EnableAutoConfiguration annotation (which is included by default in @SpringBootApplication) to enable this feature. This annotation tells Spring Boot to look for @Configuration classes in the classpath and apply their configuration.
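
For reference, a typical Spring Boot entry point already has this in place; the class name below is just a placeholder:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

// @SpringBootApplication bundles @Configuration, @EnableAutoConfiguration,
// and @ComponentScan, so auto configuration is active with no extra setup.
@SpringBootApplication
public class DemoApplication {
    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }
}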

Here’s how it works:

  • Conditions for Auto Configuration: Auto configuration works based on conditions defined in Spring Boot. For instance, if the application has a particular library on the classpath, the corresponding auto configuration is triggered; if the library isn’t found, that specific auto configuration is skipped (see the sketch after this list).
  • Spring Boot Starter Projects: These starters (e.g., spring-boot-starter-web, spring-boot-starter-data-jpa) automatically bring in the right dependencies and configurations for your application, reducing the need to manually configure each component.
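
To make the conditions concrete, here is a sketch of how an auto-configuration class can guard its beans. The class name and the in-memory database URL are illustrative, not taken from a real starter:

import javax.sql.DataSource;
import org.springframework.boot.autoconfigure.condition.ConditionalOnClass;
import org.springframework.boot.autoconfigure.condition.ConditionalOnMissingBean;
import org.springframework.boot.jdbc.DataSourceBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
@ConditionalOnClass(DataSource.class) // applies only if DataSource is on the classpath
public class ExampleDataSourceAutoConfiguration {

    @Bean
    @ConditionalOnMissingBean // backs off if the app defines its own DataSource
    public DataSource dataSource() {
        return DataSourceBuilder.create()
                .url("jdbc:h2:mem:demo") // illustrative in-memory database
                .build();
    }
}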

 

III. How to Use Auto Configuration

You don’t have to do anything special to use auto configuration. It is enabled by default in any Spring Boot application. All you need to do is add the right dependencies to your pom.xml (if using Maven) or build.gradle (if using Gradle). For example:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>

Once you add the dependency, Spring Boot will automatically configure the necessary beans, and you can start using them right away without having to manually configure a DataSource, EntityManagerFactory, or TransactionManager.

 

IV. Example of Auto Configuration in Action

Let’s look at an example of auto-configuration when working with Spring Data JPA:

 1. Add Dependency: In your pom.xml file, add the following Spring Boot starter dependency for JPA.

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>

 

 2. Configure Application Properties: Set up your database connection in application.properties.

spring.datasource.url=jdbc:mysql://localhost:3306/mydb
spring.datasource.username=root
spring.datasource.password=root
spring.jpa.hibernate.ddl-auto=update

 

3. Use Auto-configured Beans: Spring Boot will automatically configure the DataSource, EntityManagerFactory, and other necessary JPA components, and you can start creating your repositories and entities without needing additional configuration.

@Entity
public class User {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    private String name;
}

public interface UserRepository extends JpaRepository<User, Long> {
}
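
The repository can then be injected anywhere with no further JPA wiring; the service below is an illustrative sketch, not part of the original example:

import java.util.List;
import org.springframework.stereotype.Service;

@Service
public class UserService {

    private final UserRepository repository;

    public UserService(UserRepository repository) {
        this.repository = repository; // backed by the auto-configured DataSource
    }

    public List<User> findAllUsers() {
        return repository.findAll();
    }
}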

 

V. Benefits of Auto Configuration

  • Reduced Boilerplate Code: Auto configuration eliminates the need for repetitive setup code for components such as databases, message brokers, and web services. You can focus on writing business logic instead of managing configurations.
  • Faster Setup: With auto configuration, your Spring Boot application is ready to run with minimal configuration effort. You don’t need to spend time wiring up individual components and dependencies.
  • Adaptable Configuration: If you ever need to modify the auto configuration, you can override the defaults with your own configurations or even disable specific configurations, as shown below.
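
For instance, assuming you want to manage the DataSource yourself, a single auto configuration can be excluded while all others stay active:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.autoconfigure.jdbc.DataSourceAutoConfiguration;

// Opt out of one specific auto configuration while keeping the rest.
@SpringBootApplication(exclude = DataSourceAutoConfiguration.class)
public class DemoApplication {
    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }
}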

 

VI. Conclusion

Auto Configuration is one of the reasons why Spring Boot has become such a popular framework for Java developers. By automating the setup and configuration of various components, Spring Boot makes it easier to build and maintain complex applications. Its intelligent defaults allow you to spend more time developing your application and less time on configuration.

 

Whether you need scalable software solutions, expert IT outsourcing, or a long-term development partner, ISB Vietnam is here to deliver.

Let's build something great together. Reach out to us today, or click here to explore more of ISB Vietnam's case studies.

TECH

November 28, 2025

Optimizing C++ with Move Semantics

In the world of C++ programming, performance is always a key factor. The language gives us fine-grained control over how data is allocated, copied, and freed. However, before C++11, handling large data structures often led to a serious issue: unnecessary copying. This inefficiency was one of the driving forces behind the introduction of Move Semantics.

The Problem Before Move Semantics

Imagine you have a std::vector<int> containing millions of elements. If you write a function that returns this vector, compilers before C++11 would generally create a full copy of the vector upon return, because the copy constructor was the only mechanism the language guaranteed.

This led to:

  • Huge performance costs (copying each element one by one).

  • Significant memory overhead (temporarily storing two copies).

Example:
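
As a minimal sketch, consider a function that builds and returns a large vector v:

    #include <vector>

    std::vector<int> makeData() {
        std::vector<int> v(1000000, 42); // one million elements
        return v; // pre-C++11: a full copy via the copy constructor
    }

    int main() {
        std::vector<int> data = makeData();
    }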

For large data, this copy is not only wasteful but completely unnecessary since v is a temporary object that will be destroyed right after the function ends.

What is Move Semantics?

Move semantics is a mechanism introduced in C++11 that allows transferring resources instead of copying them. Instead of duplicating memory or file handles, the program simply transfers ownership of those resources from one object to another.

In short:

  • Copy: makes a deep copy of data → slower.

  • Move: transfers ownership of data → much faster.

Example:
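
A minimal sketch with std::string, where b takes over a's buffer via std::move:

    #include <string>
    #include <utility>

    int main() {
        std::string a = "a very long string that owns a heap-allocated buffer";
        std::string b = std::move(a); // b steals a's buffer; no copy is made
        // a is now empty but still valid; b owns the data
    }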

Here, b “steals” the memory buffer from a. After the move, a becomes empty but remains valid. No expensive copy takes place.

How It Works

Move semantics relies on the move constructor and move assignment operator, which are declared as:
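
    // Canonical signatures for a type T:
    class T {
    public:
        T(T&& other) noexcept;            // move constructor
        T& operator=(T&& other) noexcept; // move assignment operator
    };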

The T&& type is an rvalue reference, which binds to temporary objects. This allows the compiler to safely transfer resources instead of duplicating them.

Example:
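
A benchmark along these lines produces the output shown below; the vector size is an assumption, and the exact numbers vary by machine and compiler:

    #include <chrono>
    #include <iostream>
    #include <utility>
    #include <vector>

    int main() {
        std::vector<int> source(10000000, 7);

        auto t0 = std::chrono::steady_clock::now();
        std::vector<int> copied = source;           // deep copy of every element
        auto t1 = std::chrono::steady_clock::now();
        std::vector<int> moved = std::move(source); // just a pointer handoff
        auto t2 = std::chrono::steady_clock::now();

        std::chrono::duration<double> copyTime = t1 - t0;
        std::chrono::duration<double> moveTime = t2 - t1;
        std::cout << "Time (with copy): " << copyTime.count() << " seconds\n";
        std::cout << "Time (with move): " << moveTime.count() << " seconds\n";
    }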

Output:

Time (with copy): 0.0082704 seconds

Time (with move): 0.0000031 seconds

Real Benefits

  1. Performance boost: no deep copy of large data.

  2. STL integration: all standard containers (std::vector, std::string, std::map, etc.) support move semantics.

  3. Essential for smart pointers: std::unique_ptr relies on move semantics to transfer ownership safely.

Benchmarking often shows:

  • Copying may take 0.1 seconds for large objects.

  • Moving takes only tens of microseconds (0.00003 seconds).

That’s a performance difference of several orders of magnitude.

When to Use Move

  • When dealing with temporary objects.

  • When you need to transfer ownership rather than keep multiple copies.

  • When optimizing code that manages large data structures.

Keep in mind: move semantics doesn’t replace copying — it gives you an additional, more efficient option.

Conclusion

Move semantics is one of the biggest advancements introduced in C++11. It makes C++ more modern and efficient without losing the low-level control that developers value. By mastering this feature, you can write code that is faster, leaner, and safer.

If you were ever worried about returning large objects by value, worry no more. Since C++11, compilers prefer moving over copying, and you can explicitly use std::move to enforce it when needed.

Whether you need scalable software solutions, expert IT outsourcing, or a long-term development partner, ISB Vietnam is here to deliver. Let’s build something great together—reach out to us today. Or click here to explore more ISB Vietnam's case studies.

TECH

November 28, 2025

How to Use Asynchronous Methods in a Spring Boot Application

In Spring Boot applications, asynchronous processing helps increase system performance and responsiveness, especially when performing time-consuming tasks such as sending emails, logging, or executing external APIs.

Instead of waiting for a task to complete, we can let it run in parallel in the background, allowing the main thread to continue processing other requests.
Spring Boot supports this very simply through the @Async annotation.

 

I. How to Enable Asynchronous Processing in Spring Boot

To use it, first enable the asynchronous feature in your project with the @EnableAsync annotation:

    import org.springframework.context.annotation.Configuration;
    import org.springframework.scheduling.annotation.EnableAsync;

    /**
     * The class AsyncConfig.
     */
    @Configuration
    @EnableAsync
    public class AsyncConfig {
    }

This annotation tells Spring Boot that you want to use an asynchronous mechanism in your application.

II. How to Use @Async

Assume that you have a service that sends emails.
If you send it synchronously, the request will have to wait for the email to be sent before responding — this is inefficient.
We can improve this by adding @Async:

    @Service
    public class EmailService {
        @Async
        public void sendEmail(String recipient) {
            System.out.println("Sending email to: " + recipient);
            try {
                Thread.sleep(5000); // Simulate a task that takes a long time
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            System.out.println("Finished sending email to: " + recipient);
        }
    }

The controller can execute this service without waiting.

    @RestController
    @RequestMapping("/email")
    public class EmailController {
        private final EmailService emailService;

        public EmailController(EmailService emailService) {
            this.emailService = emailService;
        }

        @PostMapping("/send")
        public String send(@RequestParam String recipient) {
            emailService.sendEmail(recipient);
            return "Request processed!";
        }
    }

When the user calls the /email/send endpoint, the response is returned immediately while the email sending continues on a background thread.

 

III. Asynchronous processing with return values

If you need to get a result back from an asynchronous method, you can use CompletableFuture:

    @Async
    public CompletableFuture<String> fetchUserData() throws InterruptedException {
        Thread.sleep(2000);
        return CompletableFuture.completedFuture("User data");
    }

You can call .get() on the future, or chain further processing with .thenApply() or .thenAccept(), depending on your needs.
To combine multiple asynchronous tasks, suppose you have two tasks that should run in parallel, with execution continuing only when both have completed; in that case, use CompletableFuture.allOf():

    @Async
    public CompletableFuture<String> getUserInfo() throws InterruptedException {
        Thread.sleep(2000);
        return CompletableFuture.completedFuture("User Info");
    }

    @Async
    public CompletableFuture<String> getOrderHistory() throws InterruptedException {
        Thread.sleep(3000);
        return CompletableFuture.completedFuture("Order History");
    }

    public String mergeAsyncTasks() throws Exception {
        CompletableFuture<String> userInfo = getUserInfo();
        CompletableFuture<String> orderHistory = getOrderHistory();

        CompletableFuture.allOf(userInfo, orderHistory).join();

        return userInfo.get() + " | " + orderHistory.get();
    }

As you can see, when mergeAsyncTasks() runs, both methods execute at the same time.
The total processing time equals the longest task, instead of the sum of each step.

IV. Important Note

  • @Async only works when the method is called from another bean; a call from within the same class bypasses the Spring proxy, so the method runs synchronously.
  • You can customize a ThreadPoolTaskExecutor to limit the number of threads, set timeouts, or bound the queue size (see the sketch below).
  • @Async should only be used for tasks that do not require an immediate result (sending emails, writing logs, etc.).
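
A minimal sketch of such a customization; the pool sizes and bean name here are illustrative choices, not defaults required by Spring:

    import java.util.concurrent.Executor;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.scheduling.annotation.EnableAsync;
    import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

    @Configuration
    @EnableAsync
    public class AsyncExecutorConfig {

        @Bean(name = "taskExecutor")
        public Executor taskExecutor() {
            ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
            executor.setCorePoolSize(4);     // threads kept alive at all times
            executor.setMaxPoolSize(8);      // upper bound under load
            executor.setQueueCapacity(100);  // tasks allowed to wait for a free thread
            executor.setThreadNamePrefix("async-");
            executor.initialize();
            return executor;
        }
    }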

V. Summary

@Async is a powerful and easy-to-use tool in Spring Boot to handle asynchrony.
By simply adding @EnableAsync and setting @Async on the method that needs to run in the background, you can make your application more responsive, take advantage of multi-threading, and improve the user experience.

Whether you need scalable software solutions, expert IT outsourcing, or a long-term development partner, ISB Vietnam is here to deliver. Let’s build something great together—reach out to us today. Or click here to explore more ISB Vietnam's case studies.


[References]

  1. https://spring.io/guides/gs/async-method
  2. https://www.baeldung.com/spring-async
  3. https://medium.com/@bubu.tripathy/a-beginners-guide-to-async-processing-in-a-spring-boot-application-a4c785a992f2
  4. https://www.packtpub.com/en-us/learning/how-to-tutorials/solving-problems-with-spring-boot (Image source)
View More
TECH

November 28, 2025

The Engine of Modern DevOps: Jenkins and CI/CD

If you are looking to modernize your workflow, automate your testing, or simply stop manually dragging files to a server, this guide is for you.

What is CI/CD?

Before diving into the tool, let's define the methodology.

  • Continuous Integration (CI): The practice of merging code changes into a central repository frequently (often multiple times a day). Each merge triggers an automated build and test sequence to detect bugs early.
  • Continuous Deployment/Delivery (CD): The extension of CI where code changes are automatically deployed to a testing and/or production environment after passing the build stage.

Think of CI/CD as an assembly line. Instead of building a car by hand in a garage, you have a conveyor belt of robots (automation) that assemble, paint, and test the car. Jenkins is the software that controls those robots.

Why Jenkins?

Jenkins is an open-source automation server that enables developers around the world to reliably build, test, and deploy their software.

  • Extensibility: With over 1,800 plugins, Jenkins can integrate with almost any tool (Git, Docker, Kubernetes, Slack, Jira).
  • Pipeline as Code: You can define your entire build process in a text file (Jenkinsfile) stored alongside your code.
  • Community: As one of the oldest and most mature tools in the DevOps space, the community support and documentation are massive.

How to Apply Jenkins to Your Project (Step-by-Step)

Applying Jenkins to a project might seem daunting, but modern Jenkins (using "Declarative Pipelines") makes it straightforward. Here is the roadmap:

1. The Setup

First, you need a running instance of Jenkins. You can install it on a Linux server, run it locally via a .war file, or, most commonly, run it inside a Docker container:

docker run -p 8080:8080 -p 50000:50000 jenkins/jenkins:lts

2. Pipeline as Code: Jenkinsfile

The most robust way to apply Jenkins to your project is by creating a Jenkinsfile in the root of your Git repository. This file tells Jenkins exactly what to do.

Here is a simple example of a declarative Pipeline for a Node.js application:

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo 'Installing dependencies...'
                sh 'npm install'
            }
        }
        stage('Test') {
            steps {
                echo 'Running UT...'
                sh 'npm test'
            }
        }
        stage('Deploy') {
            steps {
                echo 'Deploying to server...'
                // Add deployment scripts here
            }
        }
    }
    post {
        always {
            echo 'Finished!'
        }
        failure {
            echo 'Something went wrong!'
        }
    }
}

 

3. Connect Jenkins to Git

  1. Go to your Jenkins Dashboard and click "New Item".
  2. Enter a project name and select "Multibranch Pipeline" (this is best practice as it can build different branches automatically).
  3. Under "Branch Sources," add your Git repository URL (GitHub, GitLab, Bitbucket).
  4. Save the project.

Jenkins will now scan your repository. When it finds the Jenkinsfile, it will automatically trigger the build steps you defined (Build, Test, Deploy).

4. Iterate and Optimize

Once your pipeline is running, you can add complexity as needed:

  • Artifacts: Archive your build outputs (like .jar or .exe files) so they can be downloaded later.
  • Notifications: Use plugins to send a message to Slack or Microsoft Teams if a build fails.
  • Docker Integration: Build your app inside a Docker container to ensure a clean environment every time.

Conclusion

Jenkins does the heavy lifting so you can focus on writing code. By automating the boring parts of software delivery—testing, building, and deploying—you reduce human error and ship features faster. Start with this guide, get your tests running automatically, and you will wonder how you ever managed without it.

TECH

November 28, 2025

Debugging Linux Programs with strace

When a program on Linux is misbehaving, such as crashing, hanging, or failing silently, you may reach for gdb, logs, or guesswork. There is, however, another powerful and arguably underused tool: strace.

strace intercepts and logs system calls made by a process, letting you peek into how it interacts with the OS. It’s simple, fast, and requires no code changes.



What Is strace?

strace is a program on Linux that lets you determine what a program is doing without a debugger or source code.

Specifically, strace shows you what a process is asking the Linux kernel to do; for example, this includes file operations (such as open, read, and write), network I/O (including connect, sendto, and recvfrom), process management (like fork, execve, and waitpid), and so on.


Why does it matter?

strace is worth exploring because it has proven invaluable for debugging programs. Being able to see how a program interacts with the kernel can give you a basic understanding of what is going on. Below are some of the representative use cases of strace:

  • Find the config file a program loads
  • Find the files a program depends on, such as dynamically linked libraries, root certificates, and data sources
  • Determine what happens during a program hang
  • Do coarse profiling of syscall activity
  • Uncover hidden permission errors
  • See the command arguments passed to other programs

I hope this has sparked your interest in exploring this tool further. Up next, we'll dive into installation, basic usage and common patterns.


Installation

sudo apt install strace     # Debian/Ubuntu
sudo yum install strace     # CentOS/RHEL

Basic Usage

# Trace a new program
strace ./your_program

# Attach to a running process
strace -p <pid>

# Redirect output to a file
strace -o trace.log ./your_program

Common Debugging Patterns

1. Find the config file a program tries to load

strace -e trace=openat ./your_program

Purpose: Traces file opening syscalls using openat(), helping identify missing or mislocated configuration files.

What to look for: ENOENT errors indicating missing files.

Example output:

openat(AT_FDCWD, "/etc/your_program/config.yaml", O_RDONLY) = -1 ENOENT (No such file or directory)

Explanation: The program attempted to open a config file that does not exist.


2. Find dynamically loaded libraries, root certificates, or data files

strace -e trace=file ./your_program

Purpose: Captures file-related syscalls (open, stat, etc.), revealing all resources accessed.

What to look for: Locate paths to important libraries, certs, or data files.

Example output:

openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libssl.so.1.1", O_RDONLY) = 3
openat(AT_FDCWD, "/etc/ssl/certs/ca-certificates.crt", O_RDONLY) = 4

Explanation: These lines show successful loading of required shared libraries and certificate files.


3. Diagnose why a program hangs (e.g., blocking on I/O)

strace -tt -T -p <PID>

Purpose: Attaches to a running process and shows syscall durations and timestamps.

What to look for: Find syscalls with unusually long durations.

Example output:

12:00:01.123456 read(4, ..., 4096) <120.000123>

Explanation: A read() call took 120 seconds—likely waiting for input or a blocked pipe.


4. Detect hidden permission errors

strace -e trace=openat ./your_program

Purpose: Reveals hidden EACCES errors that applications may suppress.

What to look for: Failed attempts with EACCES indicating permission denied.

Example output:

openat(AT_FDCWD, "/var/log/secure.log", O_RDONLY) = -1 EACCES (Permission denied)

Explanation: The program tried to read a log file it doesn't have permission to access.


5. Capturing inter-process communication or exec chains

strace -f -e execve ./script.sh

Purpose: Traces all execve calls and follows child processes using -f.

What to look for: Command-line arguments, incorrect binary paths.

Example output:

execve("/usr/bin/python3", ["python3", "-c", "print('Hello')"], ...) = 0

Explanation: A Python subprocess was launched with an inline script.


6. Profiling high-level syscall activity

strace -c ./your_program

Purpose: Displays syscall usage summary with counts and total time.

What to look for: Time-consuming or frequent syscalls that affect performance.

Example output:

% time     seconds  usecs/call     calls    syscall
------     -------  -----------    -----    --------
 40.12    0.120000         120       100    read
 30.25    0.090000          90        50    write

Explanation: Most time was spent on read() and write() syscalls.


7. Uncovering undocumented file access

strace -e trace=file ./your_program

Purpose: Detects unexpected or undocumented files accessed during execution.

What to look for: Configs, caches, or plugins accessed without explicit documentation.

Example output:

openat(AT_FDCWD, "/usr/share/your_program/theme.conf", O_RDONLY) = 3

Explanation: The program accessed a theming configuration file not mentioned in docs.


8. Investigating network behavior

strace -e trace=network ./your_program

Purpose: Monitors networking syscalls like connect(), sendto(), and recvfrom().

What to look for: Refused connections, DNS resolution failures, unreachable hosts.

Example output:

connect(3, {sa_family=AF_INET, sin_port=htons(443), sin_addr=inet_addr("93.184.216.34")}, 16) = -1 ECONNREFUSED (Connection refused)

Explanation: The program attempted to connect to a server but the connection was refused.


9. Monitoring signals or crashes

strace -e trace=signal ./your_program

Purpose: Shows received signals, useful for detecting terminations and faults.

What to look for: Signals like SIGSEGV, SIGBUS, SIGKILL indicating a crash or forced exit.

Example output:

--- SIGSEGV {si_signo=SIGSEGV, si_code=SEGV_MAPERR, si_addr=0x0} ---

Explanation: A segmentation fault occurred due to a null pointer access.


10. Audit child process creation (fork, clone, exec)

strace -f -e trace=clone,execve ./parent_program

Purpose: Traces child creation and execution using clone() and execve().

What to look for: Verify subprocess execution flow and command arguments.

Example output:

clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, ...) = 1234
execve("/bin/ls", ["ls", "-la"], ...) = 0

Explanation: The parent process spawned a child which executed the ls -la command.


Tip: Clean Output

strace can be noisy. Use filters:

strace -e trace=read,write,open,close ./prog

Or output to file and search:

strace -o log.txt ./prog
grep ENOENT log.txt

Sample Output: Running strace on curl

To illustrate how strace works in practice, here’s a truncated real-world output of running strace on the curl command:

strace curl https://example.com

Essential system calls output:

execve("/usr/bin/curl", ["curl", "https://example.com"], ...) = 0
...
openat(AT_FDCWD, "/etc/ssl/certs/ca-certificates.crt", O_RDONLY) = 3
socket(AF_INET, SOCK_STREAM, IPPROTO_IP) = 4
connect(4, {sa_family=AF_INET, sin_port=htons(443), sin_addr=inet_addr("93.184.216.34")}, 16) = 0
write(4, "GET / HTTP/1.1\r\nHost: example.com\r\n...", 96) = 96
read(4, "HTTP/1.1 200 OK\r\n...", 4096) = 320
...
close(4) = 0

What this tells us:

  • execve: The initial execution of the curl binary.
  • openat: Loads root certificates to verify the HTTPS connection.
  • socket and connect: Establishes a TCP connection to the server.
  • write/read: Sends HTTP request and reads response.
  • close: Cleans up the socket connection.

This example demonstrates how you can use strace to observe low-level behavior of everyday tools and debug or analyze them more deeply.


Recap

Use Case           Key Flag      What to Look For             Common Errors
Missing config     -e openat     ENOENT on config paths       ENOENT
Missing libraries  -e file       failed open for .so, certs   ENOENT, EACCES
Hang detection     -tt -T        long syscall durations
Silent failure     -e openat     EACCES, hidden failures      EACCES
Debug scripts      -f -e execve  wrong args, missing paths
Profile behavior   -c            dominant syscall timings
Trace file access  -e file       undocumented config/data
Network issues     -e network    failed connect, DNS          ECONNREFUSED
Crashes/signals    -e signal     SIGSEGV, SIGBUS

Complementary Debugging Tools

While strace gives insight into system calls, combining it with other tools offers a fuller picture:

lsof: List Open Files

lsof -p <pid>

See which files, sockets, and devices a process is using. Great for checking if the process is stuck on a file or network call.

ps + top + htop: Process Status

  • ps aux shows current process states.
  • top and htop give a live view into CPU, memory, and I/O usage.
  • htop allows filtering and killing processes interactively.

gdb: Interactive Debugging

Use gdb if you need to inspect memory, variables, and stack traces.

gdb ./app
(gdb) run
(gdb) bt  # backtrace on crash

perf: Performance Analysis

Find hotspots and performance bottlenecks.

perf top
perf record ./your_program
perf report

dmesg and journalctl: Kernel Logs

Check for kernel-level errors:

dmesg | tail
journalctl -xe

These can reveal permission denials, segmentation faults, or system-wide resource issues.


Summary

Problem           What to Look For with strace
Crash/Error       Last syscalls, missing files
Hangs/Timeout     Long gaps between calls
Wrong paths       open, access, stat on wrong files
Permission issue  EACCES in open or access calls

strace is an indispensable tool for Linux developers. It’s not a full debugger, but often it’s the only tool you need. Combine it with tools like lsof, gdb, htop, and perf for deeper diagnosis.


Next time your program fails silently, try this:

strace ./your_program

TECH

November 28, 2025

10 JavaScript Tricks to Level Up Your Code!

Looking to level up your JavaScript skills? Whether you're a beginner just starting out or a seasoned pro with years of experience, there's always room for improvement. JavaScript is a dynamic, ever-evolving language, and mastering it means constantly discovering new techniques, shortcuts, and best practices. In this post, we'll walk through 10 powerful JavaScript tricks that will sharpen your problem-solving skills, improve your code's efficiency, and make your journey as a developer smoother. Ready to learn something new? Let's dive in!

I. Object and Array Destructuring

Destructuring in JavaScript is a convenient way to extract values from arrays or objects and assign them to variables in a more readable and concise manner. It helps you avoid repetitive code when accessing values, making your code cleaner and more efficient.
 
  • Array Destructuring: Array destructuring allows you to unpack values from an array and assign them to variables.
Here's an example:
    const numbers = [10, 20, 30, 40];

    // Without destructuring:
    // const first = numbers[0];
    // const second = numbers[1];

    // With destructuring:
    const [first, second] = numbers;

    console.log(first);  // 10
    console.log(second); // 20
 
  • Object Destructuring: With object destructuring, you can extract specific properties from an object and assign them to variables.
Here's an example:
    const person = { name: "Alice", age: 25 };

    // Without destructuring:
    // const name = person.name;
    // const age = person.age;

    // With destructuring:
    const { name, age } = person;

    console.log(name); // Alice
    console.log(age);  // 25

II. Spread operator

The spread operator (...) in JavaScript is a powerful and versatile feature that allows you to "spread" the elements of an array or object into individual elements or properties. It can be used in multiple scenarios to copy, combine, or manipulate arrays and objects more efficiently. The spread operator is particularly useful for working with iterable data structures like arrays and objects.
 
  • Array Spread: The spread operator can be used to copy elements from one array to another or to combine multiple arrays.
Copying an array:
    const arr1 = [1, 2, 3];
    const arr2 = [...arr1];  // Spread arr1 into arr2

    console.log(arr2);  // [1, 2, 3]
 
Combining arrays:
    const arr1 = [1, 2, 3];
    const arr2 = [4, 5, 6];
    const combined = [...arr1, ...arr2];
    console.log(combined);  // [1, 2, 3, 4, 5, 6]
 
Adding new elements to an array:
    const arr1 = [1, 2, 3];
    const newArr = [...arr1, 4, 5];
    console.log(newArr);  // [1, 2, 3, 4, 5]
 
  • Object Spread: The spread operator can also be used to copy the properties of one object to another or to merge multiple objects.
Copying an object:
    const person = { name: "Alice", age: 25 };
    const personCopy = { ...person };

    console.log(personCopy);  // { name: "Alice", age: 25 }
 
Merging objects:
    const person = { name: "Alice", age: 25 };
    const job = { title: "Developer", company: "XYZ Corp" };
    const merged = { ...person, ...job };

    console.log(merged);  
    //Output: { name: "Alice", age: 25, title: "Developer", company: "XYZ Corp" }
 
Overriding object properties:
    const person = { name: "Alice", age: 25 };
    const updatedPerson = { ...person, age: 26 };

    console.log(updatedPerson);  // { name: "Alice", age: 26 }
 
Using the Spread Operator with Function Arguments:
You can also use the spread operator to pass an array of elements as individual arguments to a function.
    // Create a function that multiplies three numbers
    function multiply(a, b, c) {
        return a * b * c;
    }

    // Without the spread operator
    multiply(1, 2, 3); // 6

    // With the spread operator
    const numbers = [1, 2, 3];
    multiply(...numbers); // 6

III. Rest parameter

Rest Parameters (...) in JavaScript allow you to collect multiple function arguments into a single array, enabling more flexible and dynamic function definitions. Unlike the spread operator, which expands arrays or objects into individual elements, rest parameters "collect" individual arguments into an array.
    function restTest(...args) {
        console.log(args);
    }

    restTest(1, 2, 3, 4, 5, 6); // [1, 2, 3, 4, 5, 6]

    // Rest Parameters with Other Parameters
    function displayInfo(name, age, ...hobbies) {
        console.log(`Name: ${name}, Age: ${age}`);
        console.log(`Hobbies: ${hobbies.join(", ")}`);
    }

    displayInfo("Alice", 30, "Reading", "Swimming", "Hiking");
    // Output:
    // Name: Alice, Age: 30
    // Hobbies: Reading, Swimming, Hiking

IV. Template Literals (``)

Template Literals, introduced in ES6, are a modern way to create strings. They allow embedding expressions inside strings using backticks (`), support multi-line strings without special characters, and make string concatenation much cleaner.
    const name = "Bob";
    const greeting = `Hello, ${name}!
    Welcome to the next line.`;
    console.log(greeting);
    // Output:
    // Hello, Bob!
    // Welcome to the next line.

V. Arrow Functions (=>)

Arrow functions offer a more concise syntax for writing function expressions. Crucially, they do not bind their own this, arguments, super, or new.target; their this value is lexically bound (it's the this value of the enclosing execution context).
    // Traditional function
    const add = function(a, b) {
        return a + b;
    };

    // Arrow function (single expression, implicit return)
    const addArrow = (a, b) => a + b;

    // Arrow function (multiple statements, explicit return)
    const multiply = (a, b) => {
        const result = a * b;
        return result;
    };

VI. Ternary Operator (?:)

The ternary operator is a shorthand for simple if...else statements. It's ideal for conditional assignments or returning values based on a condition.
    const age = 20;
    const status = age >= 18 ? "Adult" : "Minor";
    console.log(status); // "Adult"

VII. Nullish Coalescing Operator (??)

The Nullish Coalescing Operator (??) in JavaScript returns the right-hand value only if the left-hand value is null or undefined.
    // Syntax: result is a, unless a is null or undefined, in which case it is b.
    let result = a ?? b;

    // Example
    let username = null;
    let defaultName = "Guest";

    let nameToDisplay = username ?? defaultName;
    console.log(nameToDisplay); // "Guest"

VIII. Optional Chaining (?.)

The Optional Chaining Operator (?.) in JavaScript lets you safely access deeply nested properties without having to check each level manually. If the value before ?. is null or undefined, the expression short-circuits and returns undefined instead of throwing an error.
    // Example
    const user = {
        profile: {
            name: "Alice",
        },
    };
   
    console.log(user.profile?.name);     // "Alice"
    console.log(user.settings?.theme);   // undefined (no error)


    // Optional chaining also works with method calls:
    const greeter = {
        greet() {
            return "Hello!";
        }
    };

    console.log(greeter.greet?.());      // "Hello!"
    console.log(greeter.sayBye?.());     // undefined (no error)

IX. Array Methods (map, filter, reduce, etc.)

Mastering built-in array methods is crucial for functional programming patterns and writing cleaner code compared to traditional for loops for many tasks.
//.map() – Transforms each element in an array
    const nums = [1, 2, 3];
    const squares = nums.map(n => n * n); // [1, 4, 9]
//.filter() – Filters elements based on a condition
    const nums = [1, 2, 3, 4];
    const evens = nums.filter(num => num % 2 === 0); // [2, 4]

//.reduce() - Aggregates array elements into a single value
    const nums = [1, 2, 3, 4];
    const sum = nums.reduce((acc, cur) => acc + cur, 0); // 10

//.find() – Finds the first element that matches a condition
    const users = [{ id: 1 }, { id: 2 }];
    const user = users.find(u => u.id === 2); // { id: 2 }

//.some() – Returns true if any element matches
    const hasNegative = [1, -1, 3].some(n => n < 0); // true

//.every() – Returns true if all elements match
    const allPositive = [1, 2, 3].every(n => n > 0); // true

//.forEach() – Executes a function for each item (no return)
    [1, 2, 3].forEach(n => console.log(n * 2)); // 2, 4, 6

//.includes() – Checks for existence
    [1, 2, 3].includes(2); // true
Why use functional array methods?

  • Cleaner syntax with less boilerplate
  • Immutable patterns (avoiding side effects)
  • Better readability and expressiveness
  • Easier to compose operations
 

X. Using Set for Unique Values

The Set object allows you to store unique values of any type. It's a super-fast way to get a list of unique items from an array.
    const nums = [1, 2, 2, 3];
    const unique = [...new Set(nums)]; // [1, 2, 3]
 

XI. Conclusion

Hopefully, the 10 JavaScript tricks shared above will provide useful tips for your daily work. Applying them flexibly will lead to cleaner, more maintainable, and faster code. Don't hesitate to experiment with and incorporate these techniques into your future projects to truly feel the difference they can make.
Whether you need scalable software solutions, expert IT outsourcing, or a long-term development partner, ISB Vietnam is here to deliver. Let's build something great together. Reach out to us today, or click here to explore more of ISB Vietnam's case studies.
TECH

November 28, 2025

Managing Secrets in Spring Boot: A Guide to Secret Manager Integration

Have you ever accidentally committed an application.properties file containing a database password, API key, or other sensitive information to Git? This is one of the most common and dangerous security risks developers face.

"Hardcoding" (writing these sensitive values directly) in your code or configuration files is a bad practice. Fortunately, Spring Boot provides an excellent mechanism to integrate with Secret Manager systems, helping you store and retrieve this information securely.

What is a Secret Manager?

A Secret Manager is a specialized service (usually provided by cloud platforms) designed to store, manage, and control access to sensitive information like passwords, API keys, certificates, and other secrets used by applications. Instead of hardcoding these values in your code or configuration files, Secret Manager allows you to keep them in a secure, centralized place.

We will start with AWS Secrets Manager, using it to store the username and password for the database.

Step 1: Create a Secret on AWS

The best way to store multiple related values (like a database username and password) is to use the JSON format.

  • Access the AWS Management Console > Secrets Manager.
  • Click "Store a new secret".
  • In the "Key/value pairs" section, enter your values as JSON. For example, to store database credentials, you can configure it as follows:

JSON

{
    "spring_datasource_username": "db_admin_user",
    "spring_datasource_password": "MyS3cur3Password_From_AWS!"
}

Give the secret a name (e.g., production/database/credentials).
Tip: Use a hierarchical prefix (like production/ or my-app/) for easy management and permissions.

  • Click Next > Next and finally "Store".

Step 2: Configure Permissions (IAM)

Your Spring Boot application needs permission to read this secret.

On production (EC2, ECS, EKS): create an IAM Role (e.g., MyWebAppRole), assign it to your service (for example, to the EC2 instance), and attach a policy that grants the Role permission to read the secret:

JSON

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "secretsmanager:GetSecretValue",
            "Resource": "arn:aws:secretsmanager:<region>:<account-id>:secret:<secret-name-prefix>*"
        }
    ]
}

Step 3: Update pom.xml (Spring Boot 3.x)

Add the Spring Cloud AWS Secrets Manager starter to your dependencies (its version is typically managed by the Spring Cloud AWS BOM in <dependencyManagement>):

XML

<dependency>
    <groupId>io.awspring.cloud</groupId>
    <artifactId>spring-cloud-aws-starter-secrets-manager</artifactId>
</dependency>

Step 4: Configure Spring Boot to "Import" Secrets

With spring.config.import, the JSON keys stored in the secret become ordinary configuration properties that you can reference through ${...} placeholders. (This example assumes a secret named demo containing keys such as engine, host, port, dbname, username, and password; the prefix parameter groups them under the db prefix used below.)

Properties

spring.config.import=aws-secretsmanager:demo?prefix=db
spring.datasource.writer.url=jdbc:${db.engine}://${db.host}:${db.port}/${db.dbname}
spring.datasource.writer.username=${db.username}
spring.datasource.writer.password=${db.password}
spring.datasource.writer.driver-class-name=com.mysql.cj.jdbc.Driver

 

Conclusion

You have successfully configured your Spring Boot application to dynamically read secrets from AWS Secrets Manager. This method is not only more secure but also much more flexible. When you need to rotate a password, you just need to update the value in AWS Secrets Manager; your application will automatically pick up the new value on its next startup without needing to rebuild or redeploy the code.

Whether you need scalable software solutions, expert IT outsourcing, or a long-term development partner, ISB Vietnam is here to deliver. Let’s build something great together—reach out to us today. Or click here to explore more ISB Vietnam's case studies.

TECH

November 28, 2025

AOP Deep Dive: Separating Concerns

As Object-Oriented Programming (OOP) developers, we are trained to build systems by encapsulating data and behavior into Objects. But there's a problem OOP doesn't solve cleanly.

Imagine you have 100 different service methods. In almost every single one, you find yourself writing the same boilerplate code:

  1. Log "Starting method..."
  2. Check user permissions (Security Check)
  3. Begin a Database Transaction
  4. try...catch block
  5. ...Execute the core business logic...
  6. finally block to Commit or Rollback the Transaction
  7. Log "Finished method..."

Your real business logic (step 5) is just a few lines, but it's "drowned" in a sea of repetitive boilerplate. This repetitive code is known as Cross-Cutting Concerns.

This is precisely the problem Aspect-Oriented Programming (AOP) was designed to solve.

What is AOP?

AOP is a programming paradigm that allows us to separate (or modularize) cross-cutting concerns from our main business logic.

Think of it this way:

  • OOP helps you slice your system vertically into classes and modules (like UserService, OrderService).
  • AOP helps you slice your system horizontally by grouping a single concern (like Logging) that cuts across all your modules.

By doing this, your UserService can focus only on user logic. It doesn't need to "know" that it's being logged, timed, or wrapped in a transaction.

The Core Terminology (The "Jargon" You Must Know)

To understand AOP, you must be fluent in its five core terms:

1. Aspect

  • What it is: The module or class that encapsulates a cross-cutting concern.
  • Example: A LoggingAspect, a TransactionManagementAspect, or a SecurityAspect.

2. Join Point

  • What it is: A specific point during the execution of your program where you could intervene.
  • Example: Before a method is called, after a method returns, or when an exception is thrown.

3. Pointcut

  • What it is: An expression or rule that selects one or more Join Points you want to target.
  • Example: "All public methods in the UserService class," or "All methods annotated with @Transactional."

4. Advice

  • What it is: This is the actual logic (the code) you want to execute at the Join Points selected by your Pointcut.

Common types of Advice:

  • @Before: Runs before the Join Point. (e.g., Log "Starting...").
  • @AfterReturning: Runs after the Join Point executes and returns successfully. (e.g., Log the result).
  • @AfterThrowing: Runs if the Join Point throws an exception. (e.g., Log the error, send an alert).
  • @After: Runs after the Join Point finishes, regardless of whether it was successful or threw an error (like a finally block).
  • @Around: The most powerful. It wraps the Join Point. You can run code before, decide whether to execute the Join Point at all, run code after, and even modify the return value. This is used for transactions, caching, and timing.

5. Weaving

  • What it is: The process where the AOP framework applies (or "weaves") the Advice to the targeted Pointcuts.

Simple Summary: You define an Advice (the 'what'—the logic). You use a Pointcut (the 'where'—the rule). You wrap them in an Aspect (the 'container'). The AOP framework performs Weaving (the 'how'—the application) to make it all work.

Practical Example: From "Messy" to "Clean"

Let's see how AOP cleans up our code.

1. Before AOP (The "Polluted" Code)

// Business logic is heavily "polluted" by logging and transaction code
public class OrderService {

    public void createOrder(Order order) {
        DatabaseConnection conn = null;
        Logger.log("Starting to create order for user: " + order.getUserId());
        try {
            conn = Database.getConnection();
            conn.beginTransaction(); // <-- Cross-Cutting Concern 1

            // ... Core business logic ...
            // (Check inventory, charge credit card, create order record...)
            // ...

            conn.commit(); // <-- Cross-Cutting Concern 1
            Logger.log("Order created successfully."); // <-- Cross-Cutting Concern 2
        } catch (Exception e) {
            if (conn != null) {
                conn.rollback(); // <-- Cross-Cutting Concern 1
            }
            Logger.error("Failed to create order: " + e.getMessage()); // <-- Cross-Cutting Concern 2
            throw new OrderCreationException("Error...", e);
        } finally {
            if (conn != null) {
                conn.close();
            }
        }
    }
}

The core logic (the // ... part) is buried.

2. After AOP (The "Clean" Code)

(This example uses syntax from Spring AOP / AspectJ, one of the most popular AOP implementations).

Step 1: The Business Logic (CLEAN):

// Just the business logic. Clean, readable, and easy to unit test.
public class OrderService {

    @Transactional // Declares "I need a transaction"
    @Loggable      // Declares "I need to be logged"
    public void createOrder(Order order) {
        // ... Core business logic ...
        // (Check inventory, charge credit card, create order record...)
        // ...

        // No try-catch-finally, no logging. It's gone.
    }
}

Step 2: Define the Aspects (Where the logic went):

// Aspect 1: Handles Transactions
@Aspect
@Component
public class TransactionAspect {

    // An @Around Advice, wrapping any method marked with @Transactional
    @Around("@annotation(com.myapp.annotations.Transactional)")
    public Object manageTransaction(ProceedingJoinPoint joinPoint) throws Throwable {
        DatabaseConnection conn = null;
        try {
            conn = Database.getConnection();
            conn.beginTransaction();

            Object result = joinPoint.proceed(); // <-- This executes the real createOrder()

            conn.commit();
            return result;
        } catch (Exception e) {
            if (conn != null) conn.rollback();
            throw e; // Re-throw the exception
        } finally {
            if (conn != null) conn.close();
        }
    }
}

// Aspect 2: Handles Logging
@Aspect
@Component
public class LoggingAspect {

    @Around("@annotation(com.myapp.annotations.Loggable)")
    public Object logExecution(ProceedingJoinPoint joinPoint) throws Throwable {
        String methodName = joinPoint.getSignature().getName();
        Logger.log("Executing: " + methodName);
        long startTime = System.currentTimeMillis();

        try {
            Object result = joinPoint.proceed(); // <-- This executes the real createOrder()

            long timeTaken = System.currentTimeMillis() - startTime;
            Logger.log("Finished: " + methodName + ". Time taken: " + timeTaken + "ms");
            return result;
        } catch (Exception e) {
            Logger.error("Exception in: " + methodName + ". Error: " + e.getMessage());
            throw e;
        }
    }
}

How Does AOP Actually Work? (The "Magic" of Weaving)

This "magic" typically happens in one of two ways:

  1. Runtime Weaving:
    • The most common method, used by Spring AOP.
    • The framework does not modify your code. Instead, at runtime, it creates a Proxy Object that wraps your real object.
    • When you call orderService.createOrder(), you're actually calling orderServiceProxy.createOrder().
    • This proxy object executes the Aspect logic (@Around), and inside that, it calls your real object (joinPoint.proceed()).
  2. Compile-time Weaving:
    • Used by the full AspectJ framework.
    • This method modifies your compiled bytecode (the .class files) during or after compilation.
    • It literally "weaves" the advice code directly into your target methods.
    • Pro: Faster runtime performance (no proxy overhead).
    • Con: More complex build setup.

The Pros and Cons

Pros

  • Clean Code: Your business logic is pure and focused.
  • High Reusability: Write one LoggingAspect and apply it to 1,000 methods.
  • Easy Maintenance: Need to change your logging format? You only edit one file (the Aspect), not 1,000.
  • Enforces SRP: Your OrderService is now only responsible for one thing: managing orders.

Cons

  • "Black Magic": Code seems to execute "from nowhere." A new developer looking at createOrder() won't see how transactions or logs are handled immediately.
  • Debugging Hell: When an error occurs, the stack trace can be extremely long and confusing, filled with calls to proxies and aspect logic.
  • Performance Overhead: Creating and calling through proxies adds a small performance cost compared to a direct method call.

Conclusion

AOP is not a replacement for OOP. It is a powerful complement to OOP. It's the "secret sauce" behind many features in major frameworks like Spring (for @Transactional, @Secured) and Quarkus (for caching, metrics).

Use AOP wisely for true cross-cutting concerns. Don't abuse it, and make sure your team understands the "magic" that's happening behind the scenes.

 

Whether you need scalable software solutions, expert IT outsourcing, or a long-term development partner, ISB Vietnam is here to deliver. Let’s build something great together—reach out to us today. Or click here to explore more ISB Vietnam's case studies.

 


TECH

November 28, 2025

A Simple Guide to PCA Image Classification with Scikit-learn: Green vs Red Apples

In image classification tasks, the input data is often high-dimensional, containing a large number of features. For example, a 64×64 RGB image consists of 3 color channels, resulting in 64 × 64 × 3 = 12,288 numerical pixel values. When we feed all 12,288 raw pixel values directly into traditional machine learning models, several issues typically arise.
