TECH

December 2, 2025

SaaS, PaaS and IaaS — Understand the Models, Choose the Right One

As companies move toward cloud-based solutions, one question comes up again and again: Should we use SaaS, PaaS, IaaS, or stay On-Premises?

The answer depends on your team, your budget, your security needs, and how fast you need to deliver. Let’s break down each model in simple terms.

1. SaaS (Software as a Service)

You use a ready-made product hosted by someone else.
You do not install servers, manage infrastructure, or worry about upgrades.
You simply subscribe and use the app.
Examples: Gmail, Zoom, Slack, Salesforce

Pros

  • No IT setup required
  • Works immediately
  • Automatic updates and maintenance
  • Lower upfront costs

Cons

  • Limited customization
  • Vendor lock-in
  • Cost increases with number of users

Use SaaS when:

You need a solution that “just works” and don’t want to manage infrastructure.

2. PaaS (Platform as a Service)

You build your application, and the platform handles the environment.
PaaS gives you everything you need to develop: runtime, frameworks, databases, deployment tools, scaling, CI/CD.
You focus on coding.
The platform takes care of servers and OS.
Examples: Heroku, Google App Engine, Azure App Service

Pros

  • No IT setup required
  • No server or OS management
  • Strong automation and CI/CD support
  • Great for prototypes and MVPs

Cons

  • Limited control over underlying infrastructure
  • Can be expensive as the system grows
  • Locked to the platform ecosystem

Use PaaS when:

You want to deliver features quickly and don’t want to spend time on DevOps or server management.

3. IaaS (Infrastructure as a Service)

You rent cloud infrastructure—servers, storage, networking—and configure everything yourself.
Think of it as a virtual data center.
You choose CPU, memory, connectivity, OS, and deploy however you like.
Examples: AWS EC2, Azure Virtual Machines, Google Compute Engine

Pros

  • High flexibility
  • Supports any tech stack
  • Scales instantly
  • You control OS and application layer

Cons

  • Requires DevOps/cloud skills
  • Configuration takes time
  • Mismanagement can burn money fast

Use IaaS when:

You want control similar to On-Prem but don’t want to buy physical hardware.

Which one should you choose?

There is no universal “best” option. There is only what fits your needs.
  • SaaS: choose when you want convenience and minimal technical work.
  • PaaS: choose when you’re building an app and want to move fast.
  • IaaS: choose when you need flexibility and control.

Whether you need scalable software solutions, expert IT outsourcing, or a long-term development partner, ISB Vietnam is here to deliver. Let’s build something great together—reach out to us today. Or click here to explore more of ISB Vietnam's case studies.

TECH

December 2, 2025

11 Common Mistakes Experienced Java Developers Make in Spring Boot

As an experienced developer proficient in Object-Oriented Programming (OOP), you understand the foundational principles of encapsulation, inheritance, polymorphism, and abstraction. Spring is, at its core, the most ambitious application of OOP principles ever built at scale.

You know how to design clean, decoupled classes, apply SOLID principles, and manage object lifecycles.

However, transitioning that expertise to the Java Spring Framework and Spring Boot environment often presents a unique set of pitfalls. Spring introduces its own powerful paradigms, primarily Inversion of Control (IoC) and Dependency Injection (DI), which fundamentally alter how objects are created, managed, and interact. The "Spring Magic" of auto-configuration, annotations, and conventions can sometimes lead to shortcuts that violate core OOP tenets or ignore the framework's best practices.

This blog post explores the most common mistakes developers, particularly those with a strong OOP background, make when diving into the Spring ecosystem. We will cover areas from misusing annotations to neglecting performance and security, providing actionable advice to write cleaner, more maintainable, and robust Spring applications.

The most frequent source of errors stems from a conceptual clash between manual OOP object creation and Spring's automated lifecycle management.

Mistake 1: Ignoring Dependency Injection (DI) and Manually Instantiating Objects

A fundamental OOP habit is using the new keyword to create an object when you need it. In Spring, this is a major anti-pattern for framework-managed components.

The Mistake:

Instead of letting Spring inject dependencies into a component, developers might manually instantiate the service:

@Service
public class OrderService {
    private final PaymentService paymentService = new PaymentService(); // ← Death sentence
}

You’ve been trained for years to create instances with new. So you do it instinctively. The result?

  • The paymentService is not a Spring bean → no transaction management, no AOP, no proxying.
  • You cannot mock it in unit tests without PowerMockito or ugly workarounds.
  • You cannot swap implementations per profile (e.g., MockPaymentService in tests or SandboxPaymentService in staging).

The Fix:

Always rely on Spring's DI mechanism. Use Constructor Injection (the preferred method), Setter Injection, or Field Injection (@Autowired on the field; convenient, but harder to unit-test).

Constructor Injection 1

@Service
public class OrderService {

    private final PaymentService paymentService;

    public OrderService(PaymentService paymentService) {
        this.paymentService = paymentService;
    }
}

Constructor Injection 2

@Service
@RequiredArgsConstructor // Lombok, or write constructor yourself
public class OrderService {
    private final PaymentService paymentService; // Constructor injection FTW
}

Setter Injection

@Service
public class OrderService {

    private PaymentService paymentService;

    @Autowired
    public void setPaymentService(PaymentService paymentService) {
        this.paymentService = paymentService;
    }
}

Field Injection

@Service
public class OrderService {

    @Autowired
    private PaymentService paymentService;

    public void placeOrder() {
        paymentService.processPayment();
    }
}

Mistake 2: Using @Autowired Without Qualifiers When Multiple Beans Exist

The Mistake:

@Autowired
private PaymentService paymentService; // NoUniqueBeanDefinitionException at startup

Or the even worse variant:

@Autowired
private List<PaymentService> paymentServices; // You get all implementations in unknown order

Suddenly adding a new payment provider (say, ZaloPay or Apple Pay) breaks production because the list order changed or the wrong bean was injected.

The Fix (choose one):

  • Mark one default implementation with @Primary
  • @Qualifier("stripePaymentService")
  • Better: Strategy pattern with a Map<String, PaymentProvider> injected and qualified by name
  • Best: Small focused interfaces instead of one fat PaymentService interface
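As a framework-free sketch of that strategy approach: Spring can inject a Map<String, PaymentProvider> keyed by bean name, but here the map is wired by hand so the idea is visible without the framework. All class and method names below are hypothetical.

```java
import java.util.Map;

// Small, focused interface instead of one fat PaymentService.
interface PaymentProvider {
    String charge(int amountCents);
}

class StripeProvider implements PaymentProvider {
    public String charge(int amountCents) { return "stripe:" + amountCents; }
}

class PaypalProvider implements PaymentProvider {
    public String charge(int amountCents) { return "paypal:" + amountCents; }
}

// In Spring this map would be injected automatically, keyed by bean name.
class PaymentRouter {
    private final Map<String, PaymentProvider> providers;

    PaymentRouter(Map<String, PaymentProvider> providers) {
        this.providers = providers;
    }

    String pay(String providerName, int amountCents) {
        PaymentProvider provider = providers.get(providerName);
        if (provider == null) {
            throw new IllegalArgumentException("Unknown provider: " + providerName);
        }
        return provider.charge(amountCents);
    }
}
```

Adding a new provider is then a new class plus one map entry; no existing injection point changes.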

Mistake 3: Misusing @Configuration and @Bean

Developers often struggle with when and how to define a "bean" outside of the standard component scanning.

The Mistake:

Defining a configuration method as @Bean within a class that isn't annotated with @Configuration (or an equivalent like @SpringBootApplication). Furthermore, they might accidentally create multiple instances of a singleton bean when it should be managed by the container.

The Fix:

A method annotated with @Bean belongs inside a class annotated with @Configuration. This combination tells Spring, "When the application starts, run this method and register its return value as a singleton object (a bean) in the IoC container." If the enclosing class lacks @Configuration, the @Bean method runs in "lite" mode: calling it directly from another method creates a fresh instance each time instead of returning the cached singleton. Remember that most beans default to the singleton scope, aligning with the OOP practice of having a single point of control for shared resources.
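A minimal sketch of the correct pairing (RestTemplate is just a common example of a third-party type you might register this way):

```java
@Configuration
public class AppConfig {

    // Spring executes this once at startup and caches the returned
    // object as a singleton bean; later injections reuse the same instance.
    @Bean
    public RestTemplate restTemplate() {
        return new RestTemplate();
    }
}
```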

Mistake 4: Logic Overload in Controllers

This is arguably the most common mistake that violates the Single Responsibility Principle (SRP).

The Mistake:

Developers, in an effort to speed up development, place business logic, complex data validation, or even direct database access logic inside the @RestController methods.

// Bad Practice: Controller doing too much
@RestController
public class UserProfileController {
    @Autowired private UserRepository repository;

    @PostMapping("/users")
    public ResponseEntity<User> createUser(@RequestBody User user) {
        // Business logic/validation here instead of the Service layer
        if (user.getAge() < 18) {
            throw new InvalidUserException();
        }
        // Direct repository call
        repository.save(user);
        // ...
    }
}

The Fix:

Enforce strict layering.

  • Controller (@RestController): Only handles HTTP request/response mapping, request validation (e.g., using JSR-303 annotations like @Valid), and calling the appropriate Service layer method. It acts as the "gatekeeper."

  • Service (@Service): Holds all the business logic, transaction boundaries (@Transactional), and orchestrates calls to multiple Repositories. It's the "brain" of the application.

  • Repository (@Repository): Only handles direct data access operations (CRUD) against the persistence store. @Repository enables exception translation from SQLException → DataAccessException hierarchy. If you slap @Component on your JPA repositories, you lose that translation and end up with raw, unchecked exceptions bubbling up. 
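A sketch of the same endpoint with the layers separated; UserService and InvalidUserException are hypothetical names, and standard Spring MVC is assumed:

```java
@RestController
public class UserProfileController {

    private final UserService userService;

    public UserProfileController(UserService userService) {
        this.userService = userService;
    }

    @PostMapping("/users")
    public ResponseEntity<User> createUser(@Valid @RequestBody User user) {
        // Gatekeeper only: map the HTTP request to a service call.
        return ResponseEntity.ok(userService.createUser(user));
    }
}

@Service
public class UserService {

    private final UserRepository repository;

    public UserService(UserRepository repository) {
        this.repository = repository;
    }

    @Transactional
    public User createUser(User user) {
        if (user.getAge() < 18) { // business rule lives in the service
            throw new InvalidUserException();
        }
        return repository.save(user);
    }
}
```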

Mistake 5: Missing or Misplaced Transactional Boundaries

The Mistake:

Two opposite extremes I see constantly:

A. Forgetting @Transactional entirely → no rollback on exceptions
B. Putting @Transactional on every service method → huge transactions, table locks, deadlocks

Worse: self-invocation bypasses the proxy.

@Service
public class OrderService {

    public void createOrder() {
        validateOrder();
        saveOrder(); // No transaction here because of the self-call!
    }

    @Transactional
    public void saveOrder() {
        // ...
    }
}

The Fix:

  • Put @Transactional only on the public method that orchestrates the use case
  • Never call @Transactional methods from within the same class
  • If you must, extract the method to another Spring bean, or use @Transactional(propagation = Propagation.REQUIRES_NEW) carefully
  • Always set readOnly = true for query methods
  • Use @Transactional on class level only if literally every method needs it (rare)
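One way to fix the self-invocation example above is to move the transactional method into a separate bean, so the call crosses the proxy boundary; OrderPersistenceService is a hypothetical name:

```java
@Service
public class OrderService {

    private final OrderPersistenceService persistence;

    public OrderService(OrderPersistenceService persistence) {
        this.persistence = persistence;
    }

    public void createOrder() {
        validateOrder();
        persistence.saveOrder(); // crosses the proxy, so @Transactional applies
    }

    private void validateOrder() { /* ... */ }
}

@Service
public class OrderPersistenceService {

    @Transactional
    public void saveOrder() {
        // ...
    }
}
```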

Mistake 6: Not Understanding N+1 Queries and Fetch Strategies

The Mistake:

In MyBatis, the N+1 problem commonly happens when a developer defines a parent-child mapping where the child collection is loaded using another SQL query inside the <collection> tag.

<resultMap id="userResultMap" type="User">
    <id property="id" column="id"/>
    <result property="name" column="name"/>

    <!-- Causes N+1 queries -->
    <collection property="orders"
        ofType="Order"
        select="findOrdersByUserId"
        column="id"/>
</resultMap>

If you call

List<User> users = userMapper.findAllUsers();

MyBatis will execute: 1 query to retrieve all users + N queries to retrieve orders for each user

1 (parents) + N (children) = N+1 queries

The fix:

The most efficient way to avoid the N+1 problem in MyBatis is to replace nested selects with a single JOIN query, and let MyBatis map the flattened results.

<select id="findAllUsersWithOrders" resultMap="userOrderResultMap">
    SELECT u.id AS user_id,
           u.name,
           o.id AS order_id,
           o.product_name
    FROM users u
    LEFT JOIN orders o ON u.id = o.user_id
</select>

<resultMap id="userOrderResultMap" type="User">
    <id property="id" column="user_id"/>
    <result property="name" column="name"/>
    <collection property="orders" ofType="Order">
        <id property="id" column="order_id"/>
        <result property="productName" column="product_name"/>
    </collection>
</resultMap>

  • Only one SQL call is executed.

  • MyBatis maps the parent-child relationships in memory.

  • No performance penalties from multiple round-trips to the database.

Mistake 7: Storing Sensitive Data in application.properties / application.yml

This is a critical security vulnerability.

The Mistake:

Hardcoding sensitive information such as database passwords, API keys, or cloud credentials directly into the application's configuration files. This is easily exposed if the source code is compromised or accidentally checked into a public repository.

The Fix:

Use Environment Variables or Externalized Configuration. Spring Boot is designed to read configuration properties from multiple sources, with environment variables taking precedence.

# Instead of:
# spring.datasource.password=myhardcodedsecret

# Use this in application.properties/yml:
spring.datasource.password=${DB_PASSWORD} 

Then, set the DB_PASSWORD environment variable on the server. For production, consider dedicated tools like Spring Cloud Config or secrets managers (e.g., AWS Secrets Manager, HashiCorp Vault).

Mistake 8: Leaking Entity Objects Out of the Service Layer

This violates encapsulation and can lead to unintended state changes.

The Mistake:

Returning raw JPA @Entity objects (or MyBatis result objects) directly from a Service layer method to the Controller, and then exposing them as the JSON response. This breaks the domain boundary. Furthermore, lazy-loaded collections on the Entity might be accessed outside of the transaction scope (e.g., in the Controller or during JSON serialization), leading to the dreaded LazyInitializationException.

The Fix:

Implement the Data Transfer Object (DTO) pattern. The Service layer should map the internal @Entity objects to an external DTO before returning it. The Controller only works with DTOs. This ensures encapsulation (internal data structure is protected) and prevents serialization errors.

With MyBatis you don't get JPA's LazyInitializationException or persistence-context issues, but you still face:

Problem                                   JPA     MyBatis
LazyInitializationException               ✔ Yes   ❌ No
Dirty checking / automatic persistence    ✔ Yes   ❌ No
Leaking domain model                      ✔ Yes   ✔ Yes
Coupling DB schema to API                 ✔ Yes   ✔ Yes
Exposing sensitive/internal fields        ✔ Yes   ✔ Yes
Serialization recursion issues            ✔ Yes   ✔ Yes

The DTO pattern is still best practice for MyBatis as well as for JPA.
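A framework-free sketch of the idea, with a hypothetical User entity and its DTO; in a real application the mapping often lives in a dedicated mapper (e.g., MapStruct), but the principle is the same:

```java
// Hypothetical "entity" as loaded from the database.
class User {
    final long id;
    final String email;
    final String passwordHash; // internal field that must not leak to the API

    User(long id, String email, String passwordHash) {
        this.id = id;
        this.email = email;
        this.passwordHash = passwordHash;
    }
}

// DTO exposing only the fields the API should return.
class UserDto {
    final long id;
    final String email;

    UserDto(long id, String email) {
        this.id = id;
        this.email = email;
    }

    // The service layer maps entity → DTO before returning to the controller.
    static UserDto from(User entity) {
        return new UserDto(entity.id, entity.email);
    }
}
```

The controller never sees the entity, so internal fields stay internal and serialization cannot trip over persistence concerns.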

Mistake 9: Misusing @Async and Thread Pools

The Mistake:

Simply adding @EnableAsync and annotating a method with @Async without configuring a thread pool.
// Bad Practice: Relying on default behavior
@Service
public class EmailService {

    @Async
    public void sendEmail(String recipient) {
        // Expensive IO operation
    }
}
The Fix:

  • The Configuration (Avoiding OOM)

@Configuration
@EnableAsync
public class AsyncConfig {

    @Bean(name = "taskExecutor")
    public Executor taskExecutor() {
        // The default SimpleAsyncTaskExecutor creates a new thread per call → OOM risk in production.
        // Configure a bounded ThreadPoolTaskExecutor to limit resource usage.
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();

        executor.setCorePoolSize(10);
        executor.setMaxPoolSize(50);
        executor.setQueueCapacity(500);
        executor.setThreadNamePrefix("AsyncThread-");
        executor.initialize();
        return executor;
    }
}

  • The Service (Handling Proxy & Exceptions)

@Service
public class EmailService {

    private static final Logger logger = LoggerFactory.getLogger(EmailService.class);

    // Rule 1: @Async methods must be public (for AOP proxying) and called from another bean.
    @Async("taskExecutor")
    public CompletableFuture<String> sendEmail(String recipient) {
        // Rule 2: Always return CompletableFuture<?> rather than void;
        // with a void return type, exceptions are swallowed and lost.
        try {
            logger.info("Sending email to {}", recipient);
            Thread.sleep(2000);

            if (recipient.contains("fail")) {
                throw new RuntimeException("Email server timeout!");
            }

            return CompletableFuture.completedFuture("Email Sent Successfully");

        } catch (Exception e) {
            // Capture the error so the caller can handle it.
            return CompletableFuture.failedFuture(e);
        }
    }
}

  • The Usage (Calling from another Bean)

@RestController
public class EmailController {

    @Autowired
    private EmailService emailService; // Injecting the bean (External call)

    @PostMapping("/send")
    public void send(@RequestParam String email) {
        // Valid call: it crosses bean boundaries (Controller bean → Service bean).
        // Calling this.sendEmail() inside EmailService would bypass the proxy and ignore @Async.
        emailService.sendEmail(email);
    }
}

Mistake 10: Inadequate or Generic Exception Handling

Poor exception handling leads to cryptic HTTP 500 errors and a terrible user experience.

The Mistake:

Catching generic exceptions (catch (Exception e)) in the service layer, suppressing specific exceptions, or allowing application exceptions to bubble up to the client, exposing internal implementation details.

The Fix:

Implement proper Global Exception Handling using the @ControllerAdvice and @ExceptionHandler annotations. This allows you to centralize error handling, map your custom, specific application exceptions (e.g., ResourceNotFoundException, InvalidInputException) to the correct HTTP status codes (e.g., 404, 400), and return a standardized, clean error response object (e.g., JSON payload).
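A minimal sketch of such a handler, assuming the custom exceptions and an ErrorResponse payload class exist in the application:

```java
@RestControllerAdvice
public class GlobalExceptionHandler {

    // Map a domain-specific exception to 404 with a clean JSON body.
    @ExceptionHandler(ResourceNotFoundException.class)
    public ResponseEntity<ErrorResponse> handleNotFound(ResourceNotFoundException ex) {
        return ResponseEntity.status(HttpStatus.NOT_FOUND)
                .body(new ErrorResponse("NOT_FOUND", ex.getMessage()));
    }

    // Map invalid input to 400 instead of a generic 500.
    @ExceptionHandler(InvalidInputException.class)
    public ResponseEntity<ErrorResponse> handleBadRequest(InvalidInputException ex) {
        return ResponseEntity.status(HttpStatus.BAD_REQUEST)
                .body(new ErrorResponse("BAD_REQUEST", ex.getMessage()));
    }
}
```

Every controller in the application picks up these handlers automatically, so no try/catch boilerplate is needed in the endpoints themselves.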

Mistake 11: Ignoring Spring Security Basics

Security is often an afterthought, and developers fail to understand how Spring Security integrates with the application context.

The Mistake:

Not understanding the basics of the Spring Security Filter Chain, relying solely on annotations like @PreAuthorize without a configured authentication provider, or storing passwords in plaintext.

The Fix:

Every Spring Boot application should implement security from the start. Use a modern, strong password encoder (like BCryptPasswordEncoder), configure a custom UserDetailsService, and ensure you understand the flow: Request -> Filter Chain -> Authentication -> Authorization -> Dispatcher Servlet -> Controller. Even for non-critical endpoints, you should explicitly define them as public (e.g., a whitelist) and secure everything else by default.

Conclusion

The Java Spring Framework and Spring Boot offer immense power, allowing you to build scalable and robust applications rapidly. For developers with a strong OOP background, mastery lies in shifting control from manual object management to embracing Spring's core mechanisms: IoC, DI, and AOP.

To transition successfully and write clean, professional Spring code, focus on these non-negotiable best practices:

  • Stop Using new: Fully rely on Constructor Injection for all managed components, letting Spring manage object lifecycle and provide essential features (transactions, security, AOP).
  • Maintain Layer Discipline: Enforce strict SRP by keeping Business Logic strictly in the Service Layer and ensuring Controllers only handle request mapping.
  • Protect Your Domain: Never leak Entities; always use the DTO pattern for communication between the Service and Controller layers to maintain encapsulation and prevent persistence context issues.
  • Secure and Optimize: Prioritize security by externalizing sensitive secrets and optimize performance by resolving N+1 query issues using JOIN FETCH or appropriate mapping strategies.
  • Handle Errors Globally: Implement Global Exception Handling (@ControllerAdvice) to provide clean, standardized error responses and avoid exposing internal stack traces.

Mastering Spring is about understanding where the framework takes over. By embracing these idioms, you transform from a developer who uses Spring to one who thinks in Spring, resulting in applications that are cleaner, more maintainable, and built for scale.



TECH

December 2, 2025

Improving Application Performance with Laravel Queues

In modern web applications, performance and user experience are very important. Users always expect immediate responses to every action. However, some tasks such as sending emails, processing images or generating reports take a long time. If we perform them right in the process of responding to requests, the application will become slow.
Queues solve this problem. In this article, we'll look at what Laravel Queues are, why you should use them, and the simplest way to implement them.
 

What is a Queue?

Queues allow us to push time-consuming tasks to the background, instead of executing them immediately when a user sends a request.
Some examples of using Queues:
    •  Sending an email.
    •  Processing and resizing images when users upload.
    •  Sending notifications to multiple people.
    •  Generating PDF reports.
    •  Processing payments.

Why should we use Queues?

For example, when a user registers for an account, we want to send a welcome email.
Without Queues:
    1. Save user information.
    2. Create email content.
    3. Send an email.
    4. Return results to the browser.
If sending an email takes 3-5 seconds, users will have to wait a long time.
Using Queues:
    1. Save user information.
    2. Push the email request to the queue.
    3. Return results immediately to the user.
    4. Send an email in the background.
This is much more effective!
 

How to implement Laravel Queues simply

Step 1: Configure Queues Driver

Laravel supports several queue drivers, such as database, Redis, and Amazon SQS.
For this example, we'll use the database driver.
In the .env file, edit the following line:
 
QUEUE_CONNECTION=database
 
Create the database table to hold the jobs by running the migration:
 
php artisan queue:table
php artisan migrate
 

Step 2: Create a Job

Run the command:
 
php artisan make:job SendWelcomeEmail
 
This will generate a class in app/Jobs/SendWelcomeEmail.php
In the file app/Jobs/SendWelcomeEmail.php, we add the email sending logic:
 
public function handle()
{
    Mail::to($this->user->email)->send(new WelcomeEmail($this->user));
}
 

Step 3: Dispatching Jobs

In the Controller, when the user registers, we call dispatch on the job itself. The arguments passed to dispatch are forwarded to the job's constructor.
 
SendWelcomeEmail::dispatch($user);
 
At this time, the email sending job has been queued.
 

 Step 4: Run the Queue Worker

To start a queue worker and process jobs, we need to run the command below
 
php artisan queue:work
 
The worker is a long-lived process, so it does not automatically pick up changes in your source code. Restart it during each deployment with:
 
php artisan queue:restart
 
In production, use a process monitor such as Supervisor to keep the queue:work process running permanently in the background and restart it if it fails.
 

Conclusion

The Queue is a powerful tool that helps us improve application performance and provide a smooth user experience. Time-consuming tasks such as sending emails, processing images, generating reports, etc., should be queued to run in the background.
If you are building a Laravel application and have not yet used Queue, you should try to implement it in your application. This will not only make your application run faster but also make it easier to extend and manage later.


TECH

December 2, 2025

WOVN.io: A Simple Way to Localize Your Website Without Code Changes

In today’s global web ecosystem, users expect content in their own language.
Whether you’re building a SaaS platform, e-commerce store, or admin dashboard, adding multilingual support can quickly become a major challenge — involving i18n libraries, translation files, and ongoing maintenance.

WOVN.io solves this problem with a lightweight and automated approach: it localizes your website or web app in minutes, without changing your existing code.

What is WOVN.io?

WOVN.io is a SaaS localization service that automatically translates your website or web app into multiple languages.
It detects the visitor’s browser language and displays the appropriate translation — all managed from a simple web dashboard.

You can use machine translations, human-reviewed content, or a combination of both. It’s designed to work seamlessly with any framework, including single-page applications (SPA) like React, Vue, or Next.js.

Key Benefits

  • Quick setup – integrate with just one line of script.

  • No refactoring required – works with your existing HTML and routing.

  • Centralized translation management – manage translations from WOVN’s dashboard.

  • SEO-friendly multilingual URLs – automatically generates /en, /ja, /fr, etc.

  • Automatic & manual translation options – combine AI speed with human accuracy.

Installation & Integration

1. Sign up for WOVN.io

Go to https://wovn.io and create an account.
Once you set up a project, WOVN will generate a unique script snippet for you, for example:

<script src="//j.wovn.io/1" data-wovnio="key=your_project_key"></script>

2. Add the script to your project’s index.html file

Open your project’s public/index.html file and insert the script inside the <head> tag:

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>My App</title>
    <!-- WOVN.io script -->
    <script src="//j.wovn.io/1" data-wovnio="key=your_project_key"></script>
  </head>
  <body>
    <div id="root"></div>
  </body>
</html>

3. Run and test

Start your project as usual:

yarn start

WOVN will automatically detect the user’s browser language and load the translated content.
You’ll also see a language switcher appear (added automatically by WOVN).

Why Use WOVN.io?

  • Instant multilingual setup – go global in minutes.

  • Improved international SEO – optimized localized URLs for each language.

  • Simplified maintenance – manage translations outside your codebase.

  • Scalable localization – perfect for SaaS, marketing sites, and e-commerce.

  • Supports both static and dynamic content through WOVN’s API.

Conclusion

If you’re expanding your product globally and need a fast, no-hassle way to localize your website, WOVN.io is the perfect solution.
With just one line of code, your app can speak to the world — literally.

If you're seeking a reliable, long-term partner who values collaboration and shared growth, feel free to reach out to us here: Contact ISB Vietnam


TECH

December 2, 2025

How to create a Calendar event in Gmail

As programmers, we are all familiar with sending emails from our applications.
But how can you send a meeting invite from your own application that works perfectly with Google Calendar, Outlook, and Apple Calendar?
In this post, I'll show how to send invitation emails whose events land directly in the recipient's calendar. We'll break down what iCalendar and .ics files are, how they work, and how you can use Node.js to send your own calendar invitations that show up with those professional "Yes / No / Maybe" buttons.

I. What is iCalendar?

iCalendar is a standard format for sharing calendar information between different applications. You may have used Google Calendar, Outlook, or Apple Calendar before — iCalendar is what helps these apps “speak” to each other when sharing event details. The format is widely used because it follows a consistent structure that all calendar software can understand.
The iCalendar format makes it easy to share event data like meetings, appointments, birthdays, or even reminders. For example, when you receive an event invitation in your email and click “Add to Calendar,” that’s usually an iCalendar file at work. This file contains all the event details such as date, time, title, and location, allowing your calendar app to automatically add the event to your schedule. The iCalendar format uses a .ics file extension — short for “iCalendar file.” It is human-readable and simple to create or modify, making it very useful for developers who want to add calendar features to their applications.

II. What is an ICS File?

An ICS file is a plain text file that follows the iCalendar format. It contains all the information needed for calendar events. You can open it with a text editor to see its content or import it into any calendar app.
When working with an ICS file, there are several key components you’ll usually find:
  • BEGIN:VCALENDAR and END:VCALENDAR: These mark the start and end of the calendar file.
  • VERSION: This indicates the version of iCalendar being used (commonly “2.0”).
  • PRODID: Identifies the application that created the ICS file.
  • BEGIN:VEVENT and END:VEVENT: These mark the start and end of an event block.
  • UID: A unique identifier for the event. It helps distinguish one event from another.
  • DTSTAMP: The date and time when the event file was created or last modified.
  • DTSTART and DTEND: Define the start and end times of the event.
  • SUMMARY: The title or short description of the event.
  • LOCATION: The place where the event will happen.
  • STATUS: Shows whether the event is confirmed, tentative, or cancelled.
  • DESCRIPTION: More details about the event.
  • ORGANIZER and ATTENDEE: Information about who is organizing the event and who is invited.

    Example of an ICS event:

    BEGIN:VCALENDAR
    VERSION:2.0
    PRODID:-//IVC System//Meeting Scheduler//EN
    CALSCALE:GREGORIAN
    METHOD:REQUEST
    BEGIN:VEVENT
    UID:20251124T183309Z
    DTSTAMP:20251124T113309Z
    DTSTART:20251124T114000Z
    DTEND:20251124T121000Z
    SUMMARY:Team meeting
    DESCRIPTION:Monthly team meeting to discuss progress and goals.
    LOCATION:Meeting Room 2
    SEQUENCE:0
    STATUS:CONFIRMED
    ORGANIZER;CN="admin@example.com":mailto:admin@example.com
    ATTENDEE;PARTSTAT=ACCEPTED;RSVP=FALSE;CN=user1@example.com:mailto:user1@example.com
    END:VEVENT
    END:VCALENDAR

III. Points to note when working with ICS:

Always Use UTC Time: Notice the Z at the end of all the timestamps (e.g., ...T100000Z)? This signifies UTC (Coordinated Universal Time). This is a critical best practice. Don't worry about timezones; just provide the event time in UTC. The user's calendar application (Google, Outlook) will automatically and correctly translate it to their local timezone.
UID is Forever: The UID is the event's permanent ID. If you send 10 emails with 10 different .ics files but they all have the same UID, the calendar app will just update the one event.

IV. How to create a new event

To create a new event, you generate an .ics file with:
  • A brand new, unique UID that has never been used before.
  • A SEQUENCE:0 property.

V. How to update an Event

This is where UID and SEQUENCE are crucial. Imagine you need to move the meeting from 2:00 PM to 3:00 PM. You would send a new .ics file via a new email, but inside the file, you would:
  • Use the exact same UID as the original event (e.g., 12345-abc-67890@my-domain.com).
  • Increment the SEQUENCE number (e.g., SEQUENCE:1).
  • Update the changed fields (e.g., DTSTART:20251126T150000Z and DTEND:20251126T160000Z).
  • Update the DTSTAMP to the current time.
When the user's calendar receives this, it sees the UID, finds the event it already has, and checks the SEQUENCE. Since 1 is greater than 0, it knows this is an update and applies the changes. If you send an update with the same SEQUENCE number, it will be ignored as a duplicate.
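The update rules can be sketched in Node.js. This assumes a simple line-based ICS string with no folded lines; a real implementation should use a proper iCalendar parser:

```javascript
// Sketch: turn an existing invite into an update.
// UID stays the same, SEQUENCE is incremented, changed fields are rewritten.
function buildUpdate(originalIcs, { newStartUtc, newEndUtc, nowUtc }) {
  return originalIcs
    .split(/\r?\n/)
    .map((line) => {
      if (line.startsWith("SEQUENCE:")) {
        const seq = parseInt(line.slice("SEQUENCE:".length), 10);
        return `SEQUENCE:${seq + 1}`; // higher SEQUENCE signals an update
      }
      if (line.startsWith("DTSTART:")) return `DTSTART:${newStartUtc}`;
      if (line.startsWith("DTEND:")) return `DTEND:${newEndUtc}`;
      if (line.startsWith("DTSTAMP:")) return `DTSTAMP:${nowUtc}`;
      return line; // UID and everything else stay exactly the same
    })
    .join("\r\n");
}

const original = [
  "BEGIN:VEVENT",
  "UID:12345-abc-67890@my-domain.com",
  "DTSTAMP:20251120T090000Z",
  "DTSTART:20251126T140000Z",
  "DTEND:20251126T150000Z",
  "SEQUENCE:0",
  "END:VEVENT",
].join("\r\n");

const updated = buildUpdate(original, {
  newStartUtc: "20251126T150000Z",
  newEndUtc: "20251126T160000Z",
  nowUtc: "20251126T100000Z",
});
console.log(updated);
```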

VI. Sending ICS Mail with Node.js

Manually writing .ics files is a pain. Luckily, we can use libraries to do the hard work. Here’s how to generate an event and email it as a real invitation (with the "Yes/No/Maybe" buttons) using two popular npm packages: ics and nodemailer.
First, install the libraries:
npm install ics nodemailer
Now, let's write the Node.js script.
const nodemailer = require("nodemailer");

// Gmail requires an app password for scripts like this one;
// regular account passwords are rejected for SMTP logins.
const transporter = nodemailer.createTransport({
    service: "gmail",
    auth: {
        user: "your_email@gmail.com",
        pass: "your_app_password"
    }
});

const icsContent = `BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//IVC System//Meeting Scheduler//EN
CALSCALE:GREGORIAN
METHOD:REQUEST
BEGIN:VEVENT
UID:20251124T183309Z
DTSTAMP:20251124T113309Z
DTSTART:20251124T114000Z
DTEND:20251124T121000Z
SUMMARY:Team meeting
DESCRIPTION:Monthly team meeting to discuss progress and goals.
LOCATION:Meeting Room 2
SEQUENCE:0
STATUS:CONFIRMED
ORGANIZER;CN="admin@example.com":mailto:admin@example.com
ATTENDEE;PARTSTAT=ACCEPTED;RSVP=FALSE;CN=user1@example.com:mailto:user1@example.com
END:VEVENT
END:VCALENDAR`;

const mailOptions = {
    from: "your_email@gmail.com",
    to: "recipient@example.com",
    subject: "Meeting Invitation",
    text: "Please find the meeting invitation attached.",
    // Sending the ICS as a "text/calendar" alternative (not a plain attachment)
    // is what makes mail clients render the Yes/No/Maybe buttons.
    alternatives: [
        {
            contentType: "text/calendar; charset=\"utf-8\"; method=REQUEST",
            content: icsContent
        }
    ]
};

transporter.sendMail(mailOptions, (error, info) => {
    if (error) {
        console.log(error);
    } else {
        console.log("Email sent: " + info.response);
    }
});

VII. Conclusion

Working with ICS files is a simple yet powerful way to integrate calendar functionality into your applications. By understanding the basic structure of iCalendar and how an ICS file works, you can easily create, update, and share events between users. When combined with email tools like Nodemailer in Node.js, sending meeting invitations becomes automatic and seamless.
This feature is especially useful for apps that schedule appointments, meetings, or events — such as booking systems, team collaboration tools, and online learning platforms.
In short, iCalendar and ICS make it possible for different platforms and users to stay connected and organized, regardless of which calendar service they use. With just a bit of code, you can help users save time and never miss an important event again.
Whether you need scalable software solutions, expert IT outsourcing, or a long-term development partner, ISB Vietnam is here to deliver. Let's build something great together. Reach out to us today, or click here to explore more of ISB Vietnam's case studies.
TECH

December 1, 2025

Integrating Social Login with Amazon Cognito

Social login (via Google, Facebook, Apple, etc.) is one of the easiest ways to boost user adoption and simplify onboarding. Instead of asking users to remember another password, you let them sign in using identities they already trust. Amazon Cognito makes this process straightforward by handling OAuth2 and identity federation behind the scenes — all you need to do is configure it correctly and plug it into your app. This post walks you through setting up Social Login in AWS Cognito, integrating it into your app, and handling the OAuth2 callback + token exchange using Node.js.

What Is Amazon Cognito?

Amazon Cognito has two main components:
  • User Pools – Handle user management (sign up, sign in, MFA, password policies, etc.)
  • Identity Pools (Federated Identities) – Allow access to AWS resources using federated identities from external providers.
When you enable federated sign-in, Cognito links the user’s social identity (from Google, Facebook, etc.) to a Cognito user in your User Pool.

Step 1: Create a Cognito User Pool

  1. Open AWS Console → Cognito → Create user pool
  2. Name it (e.g., myapp-user-pool)
  3. In Sign-in experience, enable Federated sign-in
  4. Add the social providers you want to support

Step 2: Configure a Social Identity Provider (Google Example)

  1. Go to your User Pool → Federation → Identity providers
  2. Select Google
  3. Paste your Client ID and Client Secret from Google Cloud Console
  4. Save changes
Then, under App client integration → App client settings, enable Google as a sign-in provider for your app client.

Step 3: Set Callback and Logout URLs

In App client settings, add:
  • Callback URL(s) → where Cognito redirects users after successful login e.g., https://myapp.com/callback
  • Sign-out URL(s) → where Cognito redirects after logout e.g., https://myapp.com/logout
Click Save changes.

Step 4: Using the Hosted UI

Cognito offers a Hosted UI that handles login and redirects back to your app automatically. Your users visit a URL like this:

https://<your-domain>.auth.<region>.amazoncognito.com/login?client_id=<your-app-client-id>&response_type=code&scope=email+openid+profile&redirect_uri=https://myapp.com/callback
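That URL can be assembled in Node.js with the standard URLSearchParams class, which also produces the `+` and percent-encoding shown above. The domain, client ID, and redirect URI below are placeholders:

```javascript
// Sketch: build the Cognito Hosted UI login URL.
// Replace domain, client_id and redirect_uri with your own values.
const domain = "https://myapp.auth.ap-southeast-1.amazoncognito.com";

const params = new URLSearchParams({
  client_id: "your-app-client-id",
  response_type: "code",            // authorization code flow
  scope: "email openid profile",    // spaces become '+' when serialized
  redirect_uri: "https://myapp.com/callback",
});

const loginUrl = `${domain}/login?${params.toString()}`;
console.log(loginUrl);
```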

Step 5: Handling the Callback and Exchanging Tokens

Let’s write the backend code that receives that code and exchanges it for tokens (ID token, access token, refresh token). Example: Node.js + Express

1. Install dependencies

npm install express axios qs dotenv

2. Create .env

COGNITO_DOMAIN=https://your-domain.auth.ap-southeast-1.amazoncognito.com
COGNITO_CLIENT_ID=xxxxxxxxxxxxxxx
COGNITO_CLIENT_SECRET=yyyyyyyyyyyyyyyy
COGNITO_REDIRECT_URI=https://myapp.com/callback

3. Create server.js


import express from "express";
import axios from "axios";
import qs from "qs";
import dotenv from "dotenv";
dotenv.config();

const app = express();

app.get("/callback", async (req, res) => {
  const code = req.query.code;
  if (!code) return res.status(400).send("Missing authorization code");

  try {
    const tokenUrl = `${process.env.COGNITO_DOMAIN}/oauth2/token`;

    const data = qs.stringify({
      grant_type: "authorization_code",
      client_id: process.env.COGNITO_CLIENT_ID,
      code,
      redirect_uri: process.env.COGNITO_REDIRECT_URI,
    });

    // Cognito expects Basic Auth header with client_id and client_secret
    const authHeader = Buffer.from(
      `${process.env.COGNITO_CLIENT_ID}:${process.env.COGNITO_CLIENT_SECRET}`
    ).toString("base64");

    const response = await axios.post(tokenUrl, data, {
      headers: {
        "Content-Type": "application/x-www-form-urlencoded",
        Authorization: `Basic ${authHeader}`,
      },
    });

    const { id_token, access_token, refresh_token } = response.data;

    // Decode the JWT ID token (you can also verify signature later)
    const userInfo = JSON.parse(
      Buffer.from(id_token.split(".")[1], "base64").toString("utf-8")
    );

    res.send(`Login successful! Logged in as ${userInfo.email}`);
  } catch (err) {
    console.error(err);
    res.status(500).send("Token exchange failed");
  }
});

app.listen(3000, () => {
  console.log("Server running on http://localhost:3000");
});

How It Works

  1. Cognito redirects the user back to /callback with a code.
  2. You exchange that code for tokens via the /oauth2/token endpoint.
  3. You decode the id_token (JWT) to extract user info like email, sub, and name.

Conclusion

That’s it — you’ve now wired up Social Login with AWS Cognito, including the full OAuth2 token exchange flow. With Cognito handling the heavy lifting (OAuth, token issuance, identity federation), you can focus on building the actual app instead of managing auth complexity.
Whether you need scalable software solutions, expert IT outsourcing, or a long-term development partner, ISB Vietnam is here to deliver. Let’s build something great together—reach out to us today. Or click here to explore more of ISB Vietnam's case studies.

 

Cover image from freepik.com

TECH

December 1, 2025

MinIO: Open Source, S3 Compatible Object Storage Solution for Developers

In the modern world of application development, the management and storage of unstructured data such as images, videos, log files, etc. plays an increasingly important role. Amazon S3 (Simple Storage Service) has become a standard for cloud object storage services due to its stability, scalability, and ease of use.

However, when developing and testing applications locally, interacting directly with S3 sometimes causes certain inconveniences in terms of cost, latency, and resource management. That's why MinIO was born — an open-source, high-performance object storage solution designed to be fully compatible with the Amazon S3 API.

What is MinIO?

MinIO is an object storage server written in Go. It features the following:
  • S3 Compatibility: MinIO implements nearly the entire Amazon S3 API, allowing applications that are already designed to work with S3 to easily integrate with MinIO without significant code changes.
  • High Performance: Optimized for performance, MinIO can achieve impressive read/write speeds, suitable for bandwidth-intensive workloads such as AI/ML, data analytics, and backup.
  • Scalability: MinIO supports a distributed mode architecture, allowing you to easily expand storage capacity and performance by adding new nodes to the cluster.
  • Lightweight and Easy to Deploy: With its compact size and ability to run on multiple platforms (Linux, macOS, Windows) as well as in Docker containers, MinIO is convenient for local development, testing, and deployment across different environments.
  • Open Source: Released under the Apache License v2.0, MinIO offers high flexibility and customizability to users.

Why should programmers use MinIO to emulate S3?

  • Easy local development and testing: You can quickly set up a simulated S3 environment on your PC or development server.
  • Cost savings: Avoid the overhead of constantly interacting with real S3 during development and testing.
  • Full control of the environment: You have complete control over MinIO's data and configuration, making it easy to reproduce failure scenarios and test.
  • Faster development: Accessing data locally is often faster than connecting to a remote cloud service, speeding up development and testing.

Instructions for installing MinIO using Docker Compose

Step 1: Install Docker and Docker Compose (if you haven't already)
If you haven't installed Docker, visit the official page https://www.docker.com/get-started/ to download and install it for your operating system.

Step 2: Create a docker-compose.yaml file
Create a file named docker-compose.yaml with the following content:

version: "3.8"

services:
    minio:
        image: minio/minio
        ports:
            - "9000:9000" # S3 API port
            - "9001:9001" # MinIO Console interface port
        environment:
            MINIO_ROOT_USER: "your_access_key"     # Replace with your access key
            MINIO_ROOT_PASSWORD: "your_secret_key" # Replace with your secret key
        command: server --console-address ":9001" /data
        volumes:
            - ./minio-data:/data # Store data permanently in a local directory

Step 3: Launch MinIO
Open a terminal and run the following command in the directory containing docker-compose.yaml:

docker-compose up -d

Step 4: Access the browser using the following link: http://localhost:9001
Use the access key and secret key you set in step 2 to log in.

Step 5: Create an access key and secret key to be able to connect from the web to Minio.
Please save the access key and secret key as they are only visible when you create them.

Step 6: Create a bucket.

After creating the bucket, Minio also allows you to customize the policy, similar to S3.
Select custom and customize according to your needs.

You have now installed MinIO to simulate the S3 environment, and you can interact with it just as you would with S3 for development and testing.
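As an illustration, here is the kind of client configuration a Node.js app would point at this local MinIO. This is a plain-object sketch; the option names follow the AWS SDK for JavaScript v3 (`@aws-sdk/client-s3`), which is assumed but not imported here:

```javascript
// Sketch: S3 client settings that target local MinIO instead of AWS.
// Pass this object to `new S3Client(...)` from @aws-sdk/client-s3.
const minioClientConfig = {
  endpoint: "http://localhost:9000",  // MinIO S3 API port from docker-compose
  region: "us-east-1",                // MinIO accepts any region value by default
  forcePathStyle: true,               // MinIO serves path-style URLs, not virtual hosts
  credentials: {
    accessKeyId: "your_access_key",     // the key created in the MinIO Console
    secretAccessKey: "your_secret_key",
  },
};
console.log(minioClientConfig.endpoint);
```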

Conclusion

MinIO is a powerful and flexible tool for developers who want to emulate a local Amazon S3 environment. Installing and using MinIO with Docker Compose simplifies the setup process while ensuring that data is persistent and easy to maintain. Hopefully, this guide will help you get started exploring and making the most of the features MinIO offers in your software development.

Whether you need scalable software solutions, expert IT outsourcing, or a long-term development partner, ISB Vietnam is here to deliver. Let’s build something great together—reach out to us today. Or click here to explore more of ISB Vietnam's case studies.

TECH

December 1, 2025

How to Send Email with SendGrid in Spring Boot

In modern web applications, sending email is a fundamental function—from account confirmation emails and password resets to system notifications. However, if you try to send emails via a regular SMTP server, you often encounter problems such as:

  • Emails being marked as spam
  • Difficulty in tracking delivery status
  • Sending rate limits

The solution? SendGrid—a professional, stable, and easy-to-integrate email service. In this article, we'll learn how to integrate SendGrid into a Spring Boot application to send emails quickly and securely.

I. What is SendGrid?

SendGrid (owned by Twilio) is a well-known cloud-based email delivery service that supports:

  • Sending transactional emails
  • Sending marketing emails
  • Managing recipient lists and email templates
  • Tracking open rates, bounce rates, and click tracking

SendGrid offers a free plan that allows sending 100 emails/day, which is perfect for testing or small projects.

II. Create an Account and Get an API Key

  • Go to https://sendgrid.com
  • Click Start for Free to register an account.
  • After verifying your email, log in to the dashboard.
  • Navigate to Settings → API Keys.
  • Click Create API Key.
  • Set a name (e.g., springboot-mail-key) and choose Full Access permissions.
  • Save the API Key (Important: It is only displayed once).

III. Add the Dependency to pom.xml

Add the official SendGrid Java library to your project's pom.xml:

XML

<dependency>
    <groupId>com.sendgrid</groupId>
    <artifactId>sendgrid-java</artifactId>
    <version>4.10.1</version>
</dependency>

 

IV. Configure the API Key

Add your SendGrid API key and default sender email to your application.properties file:

Properties

sendgrid.api.key=SG.xxxxxxxx_your_key_here
sendgrid.sender.email=no-reply@yourdomain.com

 

V. Create the SendGridService to Send Email

Next, create a service class that will handle the email sending logic. This class will read the configuration values using @Value and use the SendGrid library to make the API call.

Java

package com.example.mail.service;

import com.sendgrid.*;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Service;

import java.io.IOException;

@Service
public class SendGridService {

    @Value("${sendgrid.api.key}")
    private String sendGridApiKey;

    @Value("${sendgrid.sender.email}")
    private String senderEmail;

    public void sendEmail(String to, String subject, String contentHtml) throws IOException {
        Email from = new Email(senderEmail);
        Email recipient = new Email(to);
        Content content = new Content("text/html", contentHtml);
        Mail mail = new Mail(from, subject, recipient, content);

        SendGrid sg = new SendGrid(sendGridApiKey);
        Request request = new Request();

        try {
             request.setMethod(Method.POST);
             request.setEndpoint("mail/send");
             request.setBody(mail.build());
             Response response = sg.api(request);

             // Log the response
             System.out.println("Status Code: " + response.getStatusCode());
             System.out.println("Body: " + response.getBody());
             System.out.println("Headers: " + response.getHeaders());

        } catch (IOException ex) {
             // Handle the exception (e.g., log it)
             throw ex;
         }
    }
}

 

Conclusion

With just a few configuration steps, you can now:

  • Create a SendGrid API key
  • Integrate the SendGrid SDK into Spring Boot
  • Send professional HTML emails
  • Track the delivery status on the SendGrid Dashboard

SendGrid is an excellent choice if you want reliable email delivery, to avoid spam filters, and have flexible scalability for your application.

Whether you need scalable software solutions, expert IT outsourcing, or a long-term development partner, ISB Vietnam is here to deliver. Let’s build something great together—reach out to us today. Or click here to explore more of ISB Vietnam's case studies.

TECH

December 1, 2025

CASL – Flexible Authorization for Node.js / React Applications

In modern applications, especially admin dashboards and SaaS systems, detailed role-based access control is essential. If you've ever scattered if (user.role === 'admin') { ... } all over your code, you'll love CASL.

What is CASL?

CASL (pronounced "castle") is an isomorphic JavaScript library for managing authorization in a declarative, maintainable way.

With CASL, you can:

  • Define what users can do on specific resources

  • Use it on both frontend (React/Vue) and backend (Node.js, NestJS)

  • Avoid role-checking spaghetti code

Install CASL in Node.js

yarn add @casl/ability

Define Abilities

Example: a regular user can read products, while an admin can manage everything.

import { AbilityBuilder, Ability } from '@casl/ability';

function defineAbilitiesFor(user) {
    const { can, cannot, build } = new AbilityBuilder(Ability);

    if (user.role === 'admin') {
        can('manage', 'all'); // full access
    } else {
        can('read', 'Product');
        cannot('delete', 'Product');
    }

    return build();
}

Check Permissions with CASL

Create an "ability" object based on the current user's roles/permissions

const ability = defineAbilitiesFor(currentUser);

// Check if the user has permission to "delete" a "Product"

if (ability.can('delete', 'Product')) {
    // allow action
} else {
    // deny access
}

In an Express middleware, this logic is often used to protect routes:

// Middleware to check permissions based on action and subject

function authorize(action, subject) {
    return (req, res, next) => {

        // Create ability based on the logged-in user
        const ability = defineAbilitiesFor(req.user); 
        if (ability.can(action, subject)) {

            // If allowed, proceed to the next middleware or route handler 
            return next();
        }
        res.status(403).send('Forbidden');
    };
}

Why CASL?

  • Clear semantic permission rules (can/cannot)
  • Works on both frontend and backend
  • Easily supports complex conditions (field-level access, ownership, etc.)
  • Keeps business logic clean and centralized
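To build intuition for how such rules work, here is a toy plain-JavaScript sketch of the can/cannot idea with an ownership condition. This is not CASL's actual implementation, just an illustration of the concept:

```javascript
// Toy rule engine: a rule is { action, subject, condition? }.
// "manage" matches any action; subject "all" matches any subject.
function defineRules(user) {
  if (user.role === "admin") return [{ action: "manage", subject: "all" }];
  return [
    { action: "read", subject: "Product" },
    // ownership condition: users may only update their own products
    { action: "update", subject: "Product", condition: (p) => p.ownerId === user.id },
  ];
}

function can(rules, action, subject, resource) {
  return rules.some(
    (r) =>
      (r.subject === "all" || r.subject === subject) &&
      (r.action === "manage" || r.action === action) &&
      (!r.condition || (resource !== undefined && r.condition(resource)))
  );
}

const rules = defineRules({ id: 7, role: "user" });
console.log(can(rules, "read", "Product"));                   // true
console.log(can(rules, "update", "Product", { ownerId: 7 })); // true
console.log(can(rules, "update", "Product", { ownerId: 8 })); // false
```

CASL generalizes exactly this pattern: conditions are expressed as declarative objects (e.g., `can('update', 'Product', { ownerId: user.id })`) instead of inline functions.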

Conclusion

If you're building a system with multiple roles or complex permission rules, give CASL a try.
It helps you write clean, understandable, and reusable access control logic.

If you're seeking a reliable, long-term partner who values collaboration and shared growth, feel free to reach out to us here: Contact ISB Vietnam

[References]

https://casl.js.org/v6/en/


TECH

December 1, 2025

Integrating VBScript Processing with a Batch File

I. Overview of Batch and VBScript

Batch file
Batch file (.bat or .cmd): A text file containing Command Prompt (CMD) commands executed sequentially to automate tasks on Windows. Batch can invoke programs, scripts, or execute system commands.

VBScript file (.vbs)
VBScript file (.vbs): A file containing script code written in the VBScript language, commonly used to automate tasks on Windows, such as manipulating files, the registry, or COM applications (e.g., Excel). VBScript is executed via Windows Script Host (wscript.exe or cscript.exe).

A Batch file can call a VBS file to leverage VBScript's advanced processing capabilities, particularly when interacting with COM objects or performing tasks that CMD does not support effectively.

II. Methods to Call a VBS File from a Batch File

There are several ways to call a VBS file from a Batch file, depending on the purpose and how you want the script to run (with a visible interface, in the background, or via the command line). Below are the main methods:

1. Using wscript.exe or cscript.exe
Windows Script Host provides two tools for running VBScript:

wscript.exe
wscript.exe: Executes VBScript in a graphical environment, typically used for scripts with a user interface (e.g., displaying MsgBox).

cscript.exe
cscript.exe: Executes VBScript in a command-line environment, suitable for printing output to the console or running in the background.

Basic Syntax in Batch:


wscript.exe "path_to_vbs_file" [parameter_1] [parameter_2] …
cscript.exe //Nologo "path_to_vbs_file" [parameter_1] [parameter_2] …


//Nologo: Suppresses version and copyright information of cscript.exe, resulting in cleaner output.
[parameter_1] [parameter_2] ...: Parameters passed to the VBS file, accessible in VBScript via WScript.Arguments.

2. Run a VBS file without displaying the Command Prompt window.
If you don’t want the CMD window to appear when running a Batch file (especially when using Task Scheduler), you can use an intermediate VBScript to call the Batch file or hide the window.

Example: Intermediate VBS file (quietrun.vbs) to run a Batch file silently:


' quietrun.vbs
If WScript.Arguments.Count >= 1 Then
    ReDim Args(WScript.Arguments.Count - 1)
    For i = 0 To WScript.Arguments.Count - 1
        Arg = WScript.Arguments(i)
        If InStr(Arg, " ") > 0 Then Arg = """" & Arg & """"
        Args(i) = Arg
    Next
    CreateObject("WScript.Shell").Run Join(Args), 0, True
End If


Batch file to call a VBS file to run another VBS file (run_hidden.bat):


@echo off
cscript.exe //Nologo "quietrun.vbs" "test.vbs"


Effect: The test.vbs file runs without displaying the CMD window.
Note: Ensure quietrun.vbs and test.vbs are in the same directory or specify the full path.


3. Call a VBS script using Task Scheduler

To enable automation, Task Scheduler can be used to execute a batch file. However, it is important to configure the task properly to avoid issues such as COM automation failures.

Task Scheduler Configuration:
1. Open Task Scheduler and create a new task.
2. In the “Actions” tab, add a new action:
   • Action: Start a program
   • Program/script: cscript.exe
   • Add arguments: //Nologo "path\to\test.vbs"
   Alternatively, to run via batch file:
   • Program/script: path\to\run_vbs.bat
3. In the “General” tab, configure the following options:
   • Run whether user is logged on or not
   • Run with highest privileges (especially if the VBS requires admin rights)
Note:
  • If the VBS script interacts with Excel (e.g., opens a workbook), make sure the account running the task has proper COM access permissions. You may need to configure DCOM settings via dcomcnfg.exe.
  • Always check Task Scheduler history for errors.
  • Exit code 0x0 indicates success, but you should still verify the actual execution result to ensure the task behaves as expected.


III. Specific examples

Example 1: Simple VBS file call


' test.vbs
WScript.Echo "Hello from VBScript!"


Batch file (run_vbs.bat) to call it:


@echo off
cscript.exe //Nologo "test.vbs"
pause


Result: Prints "Hello from VBScript!" in the console.
@echo off: hides batch commands for cleaner output.

Example 2: Passing parameters


' test_args.vbs
If WScript.Arguments.Count > 0 Then
    WScript.Echo "Argument 1: " & WScript.Arguments(0)
Else
    WScript.Echo "No arguments provided."
End If


batch file:


@echo off
cscript.exe //Nologo "test_args.vbs" "MyParameter"
pause


Result: Prints "Argument 1: MyParameter"

Example 3:
Purpose: the Batch file calls a VBS file to:
  • Retrieve the computer name.
  • Get the free disk space of a specified drive (e.g., C:).
  • Write the information into a file named system_info.txt.
Requirements:
  • The batch file passes the drive letter as a parameter (e.g., "C:").
  • The VBScript returns results and handles errors.
  • Runs silently (no visible CMD window).
  • Suitable for automation using Task Scheduler.

1. VBScript File (get_system_info.vbs)
This VBS file retrieves system information and writes it to a text file.


' get_system_info.vbs
Option Explicit

Dim WShell, FSO, ComputerName, DriveLetter, FreeSpace, OutputFile
Dim OutStream, ErrMsg

' Initialize objects
Set WShell = CreateObject("WScript.Shell")
Set FSO = CreateObject("Scripting.FileSystemObject")

' Check arguments
If WScript.Arguments.Count < 2 Then
WScript.StdErr.WriteLine "Error: Missing arguments. Usage: get_system_info.vbs <DriveLetter> <OutputFile>"
WScript.Quit 1
End If

DriveLetter = WScript.Arguments(0) ' Exp: "C:"
OutputFile = WScript.Arguments(1)  ' Exp: "system_info.txt"

' Get computer name
On Error Resume Next
ComputerName = WShell.ExpandEnvironmentStrings("%COMPUTERNAME%")
If Err.Number <> 0 Then
ErrMsg = "Error getting Computer Name: " & Err.Description
WScript.StdErr.WriteLine ErrMsg
WScript.Quit 2
End If
On Error GoTo 0

' Get free space of the drive
On Error Resume Next
Dim Drive
Set Drive = FSO.GetDrive(DriveLetter)
If Err.Number = 0 Then
FreeSpace = Drive.FreeSpace / (1024^3) ' Convert to GB
FreeSpace = FormatNumber(FreeSpace, 2) ' Round to 2 decimal places
Else
ErrMsg = "Error accessing drive " & DriveLetter & ": " & Err.Description
WScript.StdErr.WriteLine ErrMsg
WScript.Quit 3
End If
On Error GoTo 0

' Write results to file
On Error Resume Next
Set OutStream = FSO.CreateTextFile(OutputFile, True)
If Err.Number = 0 Then
OutStream.WriteLine "Computer Name: " & ComputerName
OutStream.WriteLine "Free Space on " & DriveLetter & ": " & FreeSpace & " GB"
OutStream.Close
WScript.Echo "Successfully wrote to " & OutputFile
Else
ErrMsg = "Error writing to " & OutputFile & ": " & Err.Description
WScript.StdErr.WriteLine ErrMsg
WScript.Quit 4
End If
On Error GoTo 0


2. Batch File (run_system_info.bat)
Batch file calls the VBS script and handles the result.


@echo off
setlocal EnableDelayedExpansion

:: Define variables
set "VBS_FILE=get_system_info.vbs"
set "DRIVE=C:"
set "OUTPUT_FILE=system_info.txt"

:: Check if the VBS file exists
if not exist "%VBS_FILE%" (
    echo Error: VBS file "%VBS_FILE%" not found.
    exit /b 1
)

:: Call the VBS file
echo Calling VBS to collect system info...
cscript.exe //Nologo "%VBS_FILE%" "%DRIVE%" "%OUTPUT_FILE%" > output.txt 2> error.txt
set "EXIT_CODE=%ERRORLEVEL%"

:: Check the result
if %EXIT_CODE% equ 0 (
    echo VBS executed successfully.
    type output.txt
    echo Contents of %OUTPUT_FILE%:
    type "%OUTPUT_FILE%"
) else (
    echo Error occurred. Exit code: %EXIT_CODE%
    type error.txt
)

:: Clean up temporary files
del output.txt error.txt 2>nul

:: End
echo.
echo Done.
pause


Explanation:

  • Variables: VBS_FILE is the name of the VBS file, DRIVE is the drive to check, and OUTPUT_FILE is the output file.
  • File check: Ensure the VBS file exists before calling it.
  • Calling VBS: Use cscript.exe //Nologo to run the script, redirecting standard output (StdOut) to output.txt and standard error (StdErr) to error.txt.
  • Error checking: Use %ERRORLEVEL% to determine execution status (0 = success).
  • Displaying results: On success, display the messages from output.txt and the contents of system_info.txt; on error, display the exit code and the contents of error.txt.
  • Cleanup: Delete the temporary files output.txt and error.txt.


Cover image created by www.bing.com

Whether you need scalable software solutions, expert IT outsourcing, or a long-term development partner, ISB Vietnam is here to deliver. Let’s build something great together—reach out to us today. Or click here to explore more of ISB Vietnam's case studies.

Let's explore a Partnership Opportunity

CONTACT US



At ISB Vietnam, we are always open to exploring new partnership opportunities.

If you're seeking a reliable, long-term partner who values collaboration and shared growth, we'd be happy to connect and discuss how we can work together.
