Master Mermaid in VS Code: The Ultimate Guide to Diagrams as Code

If you want to quickly turn ideas into visual diagrams directly inside your code editor—or illustrate complex system architectures without opening external tools—Mermaid is exactly what you need.
This guide will help you understand what Mermaid is, why it’s useful, and how to use it effectively inside Visual Studio Code (VS Code).

What Is Mermaid?

Mermaid is a JavaScript-based tool that allows you to create diagrams using plain text. Instead of dragging and dropping shapes like traditional diagramming tools, you simply write a concise text description and Mermaid renders it into a complete diagram.

Key Advantages of Mermaid

  • Text to Graphics: Write simple syntax and get a visual diagram instantly.
  • Diagrams as Code: Your diagrams stay version-controlled along with source code.
  • Markdown-Friendly: Works seamlessly with README files, technical documentation, and internal wiki pages.
  • No Extra Tools Needed: Everything is generated from text—no manual drawing required.

Think of Mermaid as an “automatic renderer”: you write the script, and it generates the scene for you.

Why Use Mermaid in VS Code?

VS Code is one of the most widely used development environments today. Integrating Mermaid into your workflow brings several benefits:

Faster Workflow

No need to switch between VS Code and external tools like Draw.io or Lucidchart.

Better Visualization

Flowcharts, sequence diagrams, workflows, and architecture diagrams can be previewed directly inside Markdown.

Easy Sharing

Just share your .md or .mmd file—anyone with VS Code can preview the diagram.

How to Use Mermaid in VS Code

Mermaid works in VS Code with the help of a few extensions. Below are the most common and convenient ways to use it.

Install the Required Extension

To render Mermaid diagrams directly inside Markdown (.md) files, you need the extension:

👉 Markdown Preview Mermaid Support

Installation Steps:

  1. Open VS Code.
  2. Press Ctrl + Shift + X to open the Extensions panel.
  3. Search for Markdown Preview Mermaid Support.
  4. Click Install.

Once installed, you can write Mermaid diagrams in any Markdown file by placing them inside a fenced code block labeled mermaid, like this:

flowchart TD
    A[Start] --> B[Process]
    B --> C[End]

Open preview using Ctrl + Shift + V.

Use .mmd Files (Optional)

If you prefer to separate diagrams from your documentation, you can use dedicated Mermaid files (.mmd).

Example diagram.mmd:

sequenceDiagram
    Alice->>Bob: Hello Bob, how are you?
    Bob-->>Alice: I am good, thanks!

To preview .mmd files, you need an additional extension:

👉 Live Preview: Mermaid (or any similar Mermaid preview extension)

After installation:

  • Right-click the .mmd file
  • Select Open Preview

Use Live Preview (Advanced Option)

For real-time editing and instant visual feedback, use the Live Preview feature:

  1. Press Ctrl + Shift + P.
  2. Type Mermaid: Live Preview.
  3. Select the command to open the preview window.

This is especially useful when designing complex diagrams.

Conclusion

Mermaid streamlines the entire diagramming process by transforming manual drawing into automated, text-based visualization. When combined with VS Code, it allows you to:

  • manage diagrams as part of your codebase,
  • maintain version history,
  • and create clear, maintainable technical documentation.

Start by adding a simple diagram to your project’s README.md. Very quickly, you’ll see how powerful and convenient the diagrams-as-code approach can be.

 


Master the Art of Writing Effective QA: The Ultimate Guide

If you’ve ever received a vague or confusing answer to a question, you know how frustrating it can be. In software development, clear communication through Q&A (Questions and Answers) is essential for efficiency, accuracy, and smooth collaboration.

This guide provides a practical framework for writing Q&A that leads to quick, actionable, and reliable responses—based on core principles and common mistakes to avoid.

Start with the Right Mindset: Understand Your Audience

One of the most common mistakes in Q&A communication is assuming that the recipient (your supervisor, client, or customer) fully understands your specific context. In reality, they are not directly involved in your task and cannot see the “hidden part of the iceberg.”

Rule 1: Be Specific, Not Vague

Do not assume reviewers know the underlying technical details or background. Spell out everything relevant to the question.

Rule 2: Don’t Ask “How Should I Solve It?”

When someone assigns you a task, they expect you to research, analyze, and propose a solution, not ask them to do your work for you.
Before sending a QA, investigate thoroughly and prepare your own approach—even if it’s not perfect.

The 5 Essential Components of a Good QA

A well-written QA should contain all the necessary context for the reviewer to answer without needing follow-up questions.
Include the following five elements:

1. Main Topic / Purpose

What is the question about? State the subject clearly.

2. Current Status

Describe the current situation or what you have observed.

3. Affected Scope

Specify what parts of the system are involved:
source code, modules, documents, features, etc.

4. The Question or Confirmation Needed

Ask the exact question you need answered. Avoid vague or multi-level questions.

5. Your Proposed Solution

Provide your own idea, direction, or hypothesis, even if tentative.
This demonstrates effort and helps reviewers validate quickly.
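
Putting the five elements together, a complete (and entirely hypothetical) Q&A might look like this:

  1. Main topic: Confirmation of the output destination for the monthly report CSV export.
  2. Current status: The detailed design describes the export logic, but it does not state where the generated file should go.
  3. Affected scope: The monthly report screen and the export module; no other features are involved.
  4. Question: Should the CSV be written to the shared folder or offered as a browser download?
  5. Proposed solution: We plan to offer it as a browser download, since the design defines no shared-folder path. Please confirm whether this is acceptable.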

Common Pitfalls: Examples of Incomplete Descriptions

Poorly written Q&A often contain vague wording, unclear references, overly complex logic, or missing information. Below are typical examples:

  • Missing Description
    The problem: It is unclear where the output should be displayed.
    What to confirm: Whether the output should be written to a file or displayed on the screen.

  • Complex Logic
    The problem: Complicated logic increases the risk of misunderstanding or incorrect assumptions.
    What to confirm: Break down the complex part clearly and confirm each piece separately.

  • No Clear Deadline
    The problem: The deadline is not specified.
    What to confirm: The exact date and time for submission/reporting.

  • No Clear Outcome
    The problem: The expected result of the research or task is unknown.
    What to confirm: When and what research output should be delivered.

  • Vague Reference
    The problem: There may be multiple interpretations of “previous processing.”
    What to confirm: Exactly which past procedure or behavior is being referenced.

Conclusion

By structuring your Q&A with clear context (Current Status, Scope, and Proposed Solution) and avoiding vague or overly complex phrasing, you make it easy for reviewers to provide accurate answers immediately.

 


Common Security Mistakes Developers Often Make—and How to Avoid Them

Security is one of the most critical aspects of software development, yet it often remains an afterthought. In fast-paced "move fast and break things" environments where deadlines are tight and feature delivery takes priority, security vulnerabilities can silently slip into the codebase.

These weaknesses are more than just bugs; they are open doors leading to data breaches, system compromise, financial loss, and catastrophic reputational damage.

In this guide, we explore the top 10 security mistakes developers make, the mechanics behind them, and the actionable best practices to fix them.


Hardcoding Sensitive Information

One of the most frequent and dangerous mistakes is embedding secrets directly into source code. This includes:

  • API keys

  • Database connection strings

  • Encryption keys and salts

  • Cloud credentials (AWS/Azure/GCP keys)

Developers often do this for quick testing or convenience, but if this code is pushed to a public repository (like GitHub), automated bots will scrape and exploit these credentials within seconds.

How to Fix It

  • Use Secret Managers: Utilize dedicated services like AWS Secrets Manager, Azure Key Vault, or HashiCorp Vault.

  • Environment Variables: Store sensitive data in .env files (and ensure .env is added to your .gitignore) or environment variables, never in the actual code files (see the sketch after this list).

  • Automated Scanning: Implement tools like GitGuardian or TruffleHog in your CI/CD pipeline to block commits containing secrets.
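
As a small illustration of the environment-variable approach above, here is a C# sketch (the variable name DATABASE_URL is just an example, not a required convention):

using System;

class DbConfig
{
    public static string GetConnectionString()
    {
        // Read the secret from the process environment instead of the source code.
        var connectionString = Environment.GetEnvironmentVariable("DATABASE_URL");
        if (string.IsNullOrEmpty(connectionString))
        {
            throw new InvalidOperationException("DATABASE_URL is not set.");
        }
        return connectionString;
    }
}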


Insecure Input Handling (Injection Attacks)

Failing to properly sanitize or validate user input is the root cause of injection attacks.

    • SQL Injection (SQLi): Attackers manipulate database queries.

    • Command Injection: Executing arbitrary system commands.

    • NoSQL Injection: Manipulating document-oriented database queries.

If an application accepts input blindly, attackers can trick the backend into leaking data or granting administrative access.

How to Fix It

  • Parameterized Queries: Always use prepared statements or parameterized queries. Never concatenate strings to build SQL queries (see the sketch after this list).

  • Use Modern ORMs: Frameworks like Entity Framework, Hibernate, or Prisma handle sanitization automatically—if used correctly.

  • Input Validation: Validate all input against a strict allowlist (whitelisting) rather than a denylist. Ensure data conforms to expected types (e.g., ensure an age field is an integer).
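
For instance, a parameterized query with ADO.NET keeps user input out of the SQL text entirely. This is a minimal C# sketch; the table and column names are illustrative:

using Microsoft.Data.SqlClient;

public class UserLookup
{
    public static string? FindEmail(SqlConnection connection, string userName)
    {
        // The placeholder keeps user input out of the SQL text,
        // so it can never change the structure of the query.
        const string sql = "SELECT Email FROM Users WHERE UserName = @userName";

        using var command = new SqlCommand(sql, connection);
        command.Parameters.AddWithValue("@userName", userName);

        return command.ExecuteScalar() as string;
    }
}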


Missing or Weak Authentication

Authentication is the gatekeeper of your application. Weak implementation makes it trivial for attackers to break in. Common pitfalls include:

  • No rate limiting (allowing brute-force attacks).

  • Permitting weak passwords (e.g., "password123").

  • Hardcoded administrative credentials.

How to Fix It

  • MFA is Mandatory: Implement Multi-Factor Authentication (MFA) wherever possible.

  • Rate Limiting: Use tools like Redis or API Gateways to throttle login attempts and lock accounts after repeated failures.

  • Identity Providers: Don't roll your own crypto. Use established providers like Auth0, AWS Cognito, or Okta.


Broken Access Control (IDOR)

A system may verify who the user is (authentication) but fail to verify what they are allowed to do (authorization).

A common manifestation is Insecure Direct Object References (IDOR). For example, a user visits /invoice/100, changes the URL to /invoice/101, and sees someone else's invoice because the server didn't check ownership.

How to Fix It

  • Server-Side Checks: Never rely on the frontend to hide buttons. Validate permissions on every API request on the server (see the sketch after this list).

  • Principle of Least Privilege (POLP): Users should only have the bare minimum permissions necessary to perform their tasks.

  • Role-Based Access Control (RBAC): Implement strict roles (Admin, Editor, Viewer) and test boundaries regularly.
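
A hedged ASP.NET Core sketch of the server-side ownership check described above (the repository interface, entity, and claim name are illustrative, not a prescribed design):

using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

// Hypothetical domain types, included only to make the sketch self-contained.
public record Invoice(int Id, string OwnerId, decimal Amount);

public interface IInvoiceRepository
{
    Task<Invoice?> FindAsync(int id);
}

[ApiController]
[Route("invoices")]
public class InvoicesController : ControllerBase
{
    private readonly IInvoiceRepository _invoices;

    public InvoicesController(IInvoiceRepository invoices) => _invoices = invoices;

    [HttpGet("{id}")]
    public async Task<IActionResult> GetInvoice(int id)
    {
        var invoice = await _invoices.FindAsync(id);
        if (invoice == null) return NotFound();

        // Ownership check on the server: the caller may only read their own invoices.
        var currentUserId = User.FindFirst("sub")?.Value;
        if (currentUserId == null || invoice.OwnerId != currentUserId) return Forbid();

        return Ok(invoice);
    }
}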


Improper Error Handling

Detailed error messages are great for debugging but dangerous in production. Revealing stack traces, database schema details, or library versions gives attackers a blueprint of your system's architecture.

How to Fix It

  • Generic User Messages: Display "An unexpected error occurred" to the user, rather than "SQL Syntax Error at line 42" (see the sketch after this list).

  • Secure Logging: Log the detailed stack traces internally to a secure monitoring system (like Datadog or ELK Stack), but sanitize logs to ensure no PII (Personally Identifiable Information) or secrets are recorded.
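
One way to realize both points is a small exception-handling middleware in ASP.NET Core. This is a sketch, not a hardened setup; the route and messages are examples:

// Program.cs (.NET 6+ web app): log full details internally, return a generic message.
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

app.Use(async (context, next) =>
{
    try
    {
        await next();
    }
    catch (Exception ex)
    {
        // Full details go to the internal log, never into the HTTP response.
        app.Logger.LogError(ex, "Unhandled exception for {Path}", context.Request.Path);

        context.Response.StatusCode = StatusCodes.Status500InternalServerError;
        await context.Response.WriteAsJsonAsync(new { error = "An unexpected error occurred." });
    }
});

app.MapGet("/boom", () => { throw new InvalidOperationException("demo"); });

app.Run();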


Storing Passwords in Plain Text (or Weak Hashing)

Storing raw passwords is a catastrophic failure. If your database is compromised, every user account is immediately stolen. Even using outdated hashing algorithms like MD5 or SHA-1 is effectively the same as plain text due to modern computing power.

How to Fix It

  • Strong Hashing: Use adaptive hashing algorithms specifically designed for passwords, such as bcrypt, Argon2, or scrypt (see the sketch after this list).

  • Salting: Ensure every password hash has a unique, random "salt" to prevent Rainbow Table attacks.

  • NIST Guidelines: Do not force periodic password rotation (which leads to weak passwords). Instead, check new passwords against lists of known breached passwords (e.g., via the Have I Been Pwned API).
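
A hashed-and-salted password check might look like this in C#. This sketch assumes the third-party BCrypt.Net-Next NuGet package; bcrypt generates a random salt and embeds it in the stored hash:

public static class Passwords
{
    // Hash at registration time; bcrypt generates a random salt and embeds it in the hash.
    public static string Hash(string plainTextPassword) =>
        BCrypt.Net.BCrypt.HashPassword(plainTextPassword, 12); // 12 = work factor

    // Verify at login time; the salt is read back out of the stored hash.
    public static bool Verify(string plainTextPassword, string storedHash) =>
        BCrypt.Net.BCrypt.Verify(plainTextPassword, storedHash);
}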


Not Using HTTPS Everywhere

Sending data over unencrypted HTTP exposes users to Man-in-the-middle (MITM) attacks. Attackers can intercept session cookies, login credentials, and personal data.

How to Fix It

  • HTTPS Everywhere: Enable TLS/SSL for all environments, including development and staging. Services like Let’s Encrypt make this free and easy.

  • HSTS: Use the HTTP Strict Transport Security header to force browsers to interact with your site only over HTTPS.

  • Secure Cookies: Flag all cookies as Secure (HTTPS only) and HttpOnly (inaccessible to JavaScript).


Misconfigured Cloud Services

The cloud is powerful, but complex. A single toggle can leave a database exposed to the entire internet. Common issues include:

  • Publicly accessible AWS S3 buckets containing private data.

  • Open database ports (0.0.0.0/0).

  • Overly permissive IAM roles (e.g., giving an EC2 instance full Admin access).

How to Fix It

  • Infrastructure as Code (IaC): Use tools like Terraform or CloudFormation to define infrastructure securely and consistently, preventing "click-ops" errors.

  • Cloud Security Posture Management (CSPM): Use tools like AWS Trusted Advisor or Prowler to automatically scan for misconfigurations.


Using Outdated Libraries and Dependencies

Modern software is built on the backs of open-source libraries. However, Software Supply Chain attacks are on the rise. If you use a library with a known vulnerability, your application inherits that vulnerability.

How to Fix It

  • SCA Tools: Use Software Composition Analysis tools like Dependabot, Snyk, or OWASP Dependency-Check.

  • Regular Audits: Automate dependency updates. Do not use "abandonware" libraries that haven't been updated in years.

  • Lock Files: Use package-lock.json or yarn.lock to ensure consistent versions across environments.


Lack of Security Testing

Many teams rely solely on functional testing ("Does it work?") and skip security testing ("Is it safe?"). Security cannot be something you check only one week before launch.

How to Fix It

  • Shift Left: Integrate security early in the development lifecycle.

  • SAST & DAST: Use Static Application Security Testing (analyzing code) and Dynamic Application Security Testing (simulating attacks on the running app).

  • Penetration Testing: Hire ethical hackers to test your system periodically.


Conclusion

Security isn’t a one-time checkbox—it’s a continuous mindset that must be woven into the fabric of your development culture (DevSecOps).

By understanding these common mistakes and implementing the right tools, you can build software that is not only functional but resilient against attacks.

The Golden Rules:

  1. Trust no input.

  2. Encrypt everything.

  3. Grant the least privilege necessary.

  4. Automate your security checks.

Building secure software requires vigilance, but the cost of prevention is always lower than the cost of a breach.



Master DynamoDB Pagination in C#: The Ultimate Guide to Navigation

Traditional offset-based pagination (using SKIP and TAKE) isn't viable in DynamoDB due to performance constraints. Instead, DynamoDB uses cursor-based pagination through LastEvaluatedKey, which acts as a pointer to the next page.

While navigating "Next" is straightforward, implementing a full set of controls—First, Previous, Next, and Last—requires a deeper understanding of DynamoDB's architecture.

In this guide, we’ll implement a complete pagination solution in C#.

Introduction to DynamoDB Pagination

Amazon DynamoDB is a fully managed NoSQL database designed for fast, scalable, and predictable performance. When querying large datasets, DynamoDB automatically paginates results and returns up to 1 MB of data per request.

Instead of using offset-based pagination like SQL, DynamoDB uses a special value called LastEvaluatedKey. Each query response includes:

  • A page of items.

  • A LastEvaluatedKey (if more items exist).

To retrieve the next page, the client passes this key back to DynamoDB using the ExclusiveStartKey parameter. Because DynamoDB does not support random access to pages, implementing controls like Previous and Last requires applying cursor logic or manipulating the sort order.

When to Use Pagination in DynamoDB

Pagination is essential when dealing with:

  • Large datasets: Fetching thousands of items in a single request is inefficient and costly.

  • User interfaces: UI components (dashboards, tables) need friendly controls.

  • APIs returning limited result sets: Public endpoints must paginate to avoid timeouts.

  • Reducing Read Costs: Controlled queries reduce Read Capacity Units (RCU) consumption.

  • High-traffic systems: Fetching data incrementally prevents backend resource exhaustion.

The Secret Weapon: ScanIndexForward

DynamoDB allows you to navigate forward easily. However, it does not natively support "Previous" or "Last". To solve this, we utilize the ScanIndexForward parameter.

  • ScanIndexForward = true (Default): Returns items in ascending order.

  • ScanIndexForward = false: Returns items in descending order.

This feature allows us to:

Efficiently get the "Last Page"

Querying in descending order gives you the newest/last items first.

Key Concept: The first page of a descending query is effectively the last page of an ascending query.

Support Backward Pagination

When moving backward, using reverse sort order allows us to fetch items preceding the current batch without scanning the entire table.

Note: This technique requires your Table or GSI to have a Sort Key defined.

Implementing Pagination (C# Example)

Below is a reusable pagination structure supporting First, Next, Previous, and Last.

Important: Handling State in Web APIs

Before looking at the code, note that in a stateless environment (like a REST API), you cannot store the PagingState object in server memory. You must serialize the state (e.g., to a Base64 JSON string) and send it to the client. The client must then send this token back in the next request.
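
A minimal sketch of such a token codec, assuming the table's key attributes are string-typed (the class and method names are illustrative, and a production version might also sign or encrypt the token):

C#
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Text.Json;
using Amazon.DynamoDBv2.Model;

public static class PageTokenCodec
{
    // Encode LastEvaluatedKey as an opaque Base64 token the client can send back.
    public static string? Encode(Dictionary<string, AttributeValue>? key)
    {
        if (key == null || key.Count == 0) return null;

        // Assumes every key attribute is a string (the S property).
        var simple = key.ToDictionary(kv => kv.Key, kv => kv.Value.S);
        var json = JsonSerializer.Serialize(simple);
        return Convert.ToBase64String(Encoding.UTF8.GetBytes(json));
    }

    // Decode the client-supplied token back into an ExclusiveStartKey.
    public static Dictionary<string, AttributeValue>? Decode(string? token)
    {
        if (string.IsNullOrEmpty(token)) return null;

        var json = Encoding.UTF8.GetString(Convert.FromBase64String(token));
        var simple = JsonSerializer.Deserialize<Dictionary<string, string>>(json)!;
        return simple.ToDictionary(kv => kv.Key, kv => new AttributeValue { S = kv.Value });
    }
}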

Models

C#
using Amazon.DynamoDBv2.Model;

// Represents the result of a paginated query
public class PageResult
{
    // List of DynamoDB items returned for this page
    public List<Dictionary<string, AttributeValue>> Items { get; set; }

    // Cursor pointing to the next page (null if no more pages)
    public Dictionary<string, AttributeValue>? NextKey { get; set; }

    // Number of items per page
    public int PageSize { get; set; }
}

// Stores the pagination state for navigating Next/Previous
public class PagingState
{
    // Stack of previous page tokens -> used to move backwards
    // In a Web API, this list should be serialized and sent to the client
    public Stack<Dictionary<string, AttributeValue>?> PrevTokens { get; set; } = new();

    // Token used to load the current page
    public Dictionary<string, AttributeValue>? CurrentToken { get; set; }

    // Token used to load the next page
    public Dictionary<string, AttributeValue>? NextToken { get; set; }
}

Base Query Method

This generic method handles the core DynamoDB query logic.

C#
public async Task<PageResult> QueryPageAsync(
    string userType,
    Dictionary<string, AttributeValue>? startKey,
    int pageSize,
    bool scanForward = true)
{
    var request = new QueryRequest
    {
        TableName = "Users",
        // Partition key condition
        KeyConditionExpression = "UserType = :u",
        ExpressionAttributeValues = new Dictionary<string, AttributeValue>
        {
            {":u", new AttributeValue { S = userType }}
        },
        // Cursor for next page (null for first page)
        ExclusiveStartKey = startKey,
        // Maximum items to return
        Limit = pageSize,
        // Sorting direction: true = ascending, false = descending
        ScanIndexForward = scanForward
    };

    var response = await _client.QueryAsync(request);

    return new PageResult
    {
         Items = response.Items,
         NextKey = response.LastEvaluatedKey,
         PageSize = pageSize
    };
}

First Page

C#
public async Task<PageResult> GetFirstPageAsync(string userType, int pageSize, PagingState state)
{
    // Clear backward history as we are starting over
    state.PrevTokens.Clear();
    state.CurrentToken = null;

    // Load first page in ascending order
    var result = await QueryPageAsync(userType, null, pageSize, scanForward: true);
    // Store next page cursor
    state.NextToken = result.NextKey;
    return result;
}

Next Page

C#
public async Task<PageResult> GetNextPageAsync(string userType, int pageSize, PagingState state)
{
    // Check if there are more pages
    if (state.NextToken == null)
        return new PageResult { Items = new(), NextKey = null };

    // Save current token to history so we can navigate backwards later
    state.PrevTokens.Push(state.CurrentToken);

    // Move forward
    state.CurrentToken = state.NextToken;
    // Load next page
    var result = await QueryPageAsync(userType, state.CurrentToken, pageSize);
    state.NextToken = result.NextKey;
    return result;
}

Previous Page

C#
public async Task<PageResult> GetPreviousPageAsync(string userType, int pageSize, PagingState state)
{
    // If no history, default to First Page
    if (!state.PrevTokens.Any())
        return await GetFirstPageAsync(userType, pageSize, state);

    // Retrieve the most recent previous token
    var previousKey = state.PrevTokens.Pop();
    // Update current cursor
    state.CurrentToken = previousKey;
    // Load page using the retrieved token
    var result = await QueryPageAsync(userType, previousKey, pageSize);
    state.NextToken = result.NextKey;
    return result;
}

Last Page

This is where the magic happens using ScanIndexForward = false.

C#
public async Task<PageResult> GetLastPageAsync(string userType, int pageSize)
{
    // Reverse the sort order so newest items come first
    // This effectively fetches the "Last Page" immediately
    var result = await QueryPageAsync(
        userType,
        startKey: null,
        pageSize: pageSize,
        scanForward: false); // Critical: Read backwards

    // Reorder items for UI display (so they appear Ascending within the page)
    result.Items.Reverse();

    return result;
}

Note on Navigation: Jumping directly to the "Last Page" isolates the user from the previous navigation history. The PrevTokens stack will not automatically know how to go back to the "Second to Last" page. In most UI implementations, clicking "Last" resets the navigation context.

Conclusion

DynamoDB’s cursor-based pagination offers a scalable and cost-efficient alternative to offset-based pagination. While paging forward is simple, paging backward and jumping to the last page requires creative use of sorting.

By leveraging ScanIndexForward = false, developers can:

  1. Retrieve the last page instantly (O(1) complexity).

  2. Reverse the paging direction efficiently.

  3. Reduce unnecessary read costs.

With the C# implementation provided, you now have a robust starting point for building user-friendly tables on top of DynamoDB.


JWT vs. OAuth 2.0: The Ultimate Guide to Secure Authentication

In today’s API-driven world, authentication and authorization are foundational to secure application design. Modern systems—especially microservices, mobile apps, and single-page applications (SPAs)—often rely on JWT and OAuth 2.0 to handle identity.

However, these two terms often appear together, leading to confusion. Are they competitors? Do they do the same thing?

The short answer is: No. They serve different purposes and solve different problems. This article will break down what each technology does, how they work, and when to use them effectively. 

1. What Is JWT (JSON Web Token)?

A JWT (JSON Web Token) is a compact, stateless token format used to transmit claims between parties securely.

It is an encoded (not encrypted) string containing three parts, separated by dots (.):

  1. Header

  2. Payload

  3. Signature

Example Structure: header.payload.signature

eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.
eyJ1c2VyX2lkIjoxMjMsInJvbGUiOiJhZG1pbiJ9.
dBjftJeZ4CVP-mB92K27uhbUJU1p1r_wW1gFWFOEjXk

 

Key Characteristics of JWT:

  • Stateless: No need for server-side session storage (like Redis or SQL).

  • Digitally Signed: Verified using HMAC (symmetric) or RSA/ECDSA (asymmetric).

  • Custom Claims: Can contain User ID, roles, and permissions.

  • Performance: Fast to verify since no database lookup is needed.

⚠️ Common Misunderstanding:

JWTs are not encrypted by default. They are only base64-encoded. Never put sensitive information (like passwords or social security numbers) in a JWT payload unless you use JWE (JSON Web Encryption).
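
You can see this for yourself by decoding the payload without any key. A short C# sketch (purely illustrative, not a validation routine; the helper converts base64url to standard Base64 before decoding):

using System;
using System.Text;

class JwtPayloadPeek
{
    // Convert base64url (-/_ alphabet, no padding) to standard Base64.
    static string FromBase64Url(string s)
    {
        var b64 = s.Replace('-', '+').Replace('_', '/');
        return b64.PadRight(b64.Length + (4 - b64.Length % 4) % 4, '=');
    }

    static void Main()
    {
        var jwt = "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9." +
                  "eyJ1c2VyX2lkIjoxMjMsInJvbGUiOiJhZG1pbiJ9." +
                  "dBjftJeZ4CVP-mB92K27uhbUJU1p1r_wW1gFWFOEjXk";

        // The payload is the middle segment; anyone can read it without the signing key.
        var payload = jwt.Split('.')[1];
        var json = Encoding.UTF8.GetString(Convert.FromBase64String(FromBase64Url(payload)));
        Console.WriteLine(json); // {"user_id":123,"role":"admin"}
    }
}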

2. What Is OAuth 2.0?

OAuth 2.0 is an industry-standard authorization framework. It allows applications to access resources on behalf of a user without sharing credentials.

Real-world Example:

When you see “Login with Google,” that is OAuth 2.0. Google issues access tokens so your app can read the user’s email or profile without ever seeing their Google password.

Core OAuth 2.0 Roles:

  • Resource Owner: The user.

  • Client: The application requesting access.

  • Authorization Server: The server issuing tokens (e.g., Google, Okta, Auth0).

  • Resource Server: The API providing protected data.

What OAuth DOES:

  • Delegates access.

  • Defines token generation flows (Grant Types).

  • Manages user consent.

What OAuth DOESN’T Do:

  • It does not define the token structure (it can use random strings or JWTs).

  • It does not handle user authentication natively (that is handled by OpenID Connect).

3. JWT vs OAuth 2.0: The Key Differences

To clear up the confusion, here is a direct comparison:

  • Purpose: JWT is a token format (a container); OAuth 2.0 is an authorization framework (a protocol).
  • Handles login? JWT: no. OAuth 2.0: no (OpenID Connect does).
  • Stateless? JWT: yes. OAuth 2.0: depends on the implementation.
  • Token type: JWT is self-contained; an OAuth 2.0 token can be anything (a random string or a JWT).
  • Primary use: JWT is for information exchange; OAuth 2.0 is for delegated authorization.

Important Note:

  • JWT is NOT an authentication framework.

  • OAuth 2.0 is NOT an identity framework.

  • To authenticate users using OAuth, you need OpenID Connect (OIDC) on top.

4. How They Work Together

OAuth 2.0 can issue many token formats. JWT is simply one of them.

In modern systems, OAuth 2.0 access tokens are usually implemented as JWTs because:

  1. Self-contained: The Resource Server (API) can validate the token without calling the Authorization Server.

  2. Performance: Reduces network latency and database lookups.

  3. Scalability: Ideal for distributed microservices.

The Typical Flow:

  1. User logs in via an OAuth 2.0 Authorization Server.

  2. Server issues a JWT Access Token (+ optional Refresh Token).

  3. Client sends the JWT to the API on every request.

  4. API verifies the JWT signature and claims locally.
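
Step 4, the local verification, might look roughly like this in .NET. This is a hedged sketch assuming the System.IdentityModel.Tokens.Jwt package and an HMAC-signed (HS256) token; the issuer, audience, and claim handling are placeholders:

using System.IdentityModel.Tokens.Jwt;
using System.Security.Claims;
using System.Text;
using Microsoft.IdentityModel.Tokens;

public static class TokenValidator
{
    // Returns the validated principal, or null if the token is invalid or expired.
    public static ClaimsPrincipal? Validate(string token, string signingSecret)
    {
        var parameters = new TokenValidationParameters
        {
            ValidateIssuer = true,
            ValidIssuer = "https://auth.example.com",   // placeholder issuer
            ValidateAudience = true,
            ValidAudience = "my-api",                    // placeholder audience
            ValidateLifetime = true,                     // reject expired tokens
            IssuerSigningKey = new SymmetricSecurityKey(Encoding.UTF8.GetBytes(signingSecret))
        };

        try
        {
            return new JwtSecurityTokenHandler().ValidateToken(token, parameters, out _);
        }
        catch (SecurityTokenException)
        {
            return null;
        }
    }
}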

[Figure: OAuth 2.0 flow using JWT access tokens. Sequence: Client -> Auth Server (returns a JWT) -> Client -> API (validates the JWT locally).]

5. Access Token vs. Refresh Token

  • Access Token: Short-lived (e.g., 15 minutes). Sent with every API request. Usually a JWT.

  • Refresh Token: Long-lived (days/weeks). Used to obtain new access tokens when the old one expires. Never share this with the Resource Server.

6. Security Best Practices

Both technologies are powerful but dangerous if misused. Follow these rules to secure your app.

JWT Best Practices:

  • Short Expiration: Keep exp time short (5-15 mins).

  • Secure Storage: Store tokens in HttpOnly, Secure Cookies (not localStorage) to prevent XSS attacks.

  • Algorithm: Use asymmetric signing (RS256) for distributed systems.

OAuth 2.0 Best Practices:

  • PKCE: Always use Authorization Code Flow with PKCE for mobile and SPAs.

  • No Implicit Flow: Never use the deprecated Implicit Flow.

  • Least Privilege: Request only the scopes you absolutely need.

  • Token Rotation: Rotate refresh tokens upon every use to detect theft.

Conclusion

JWT and OAuth 2.0 are core technologies in modern architecture. Although they are often used together, they solve distinct problems:

  • JWT is a format for securely transmitting information.

  • OAuth 2.0 is a protocol for delegating access.

Understanding their roles will help you build secure, scalable identity systems for web, mobile, and distributed applications.


Using ActiveReportsJS in Next.js to Generate PDF Files from JSON Templates

Modern web applications often need to export reports as PDF files. Instead of building PDF layouts manually, ActiveReportsJS by Mescius allows developers to design report templates using a JSON format, then render these templates inside Next.js using dynamic parameters and data sources.

1. Introduction

In this article, we will explore how to use ActiveReportsJS inside a Next.js project to generate PDF documents on the server side. The workflow is simple: prepare a JSON report template, pass data and parameters to it, and let ActiveReportsJS create the final PDF file.

This approach is useful for invoices, summaries, forms, and any business reports that need flexible formatting. The content below uses common technical vocabulary and is designed for developers of all levels.

2. How ActiveReportsJS Works in a Next.js Environment

ActiveReportsJS is a client-side and server-side reporting engine. When used with Next.js API routes, it can render PDF files without exposing sensitive logic to the browser. The typical flow includes:

  • Loading a JSON report template (.rdl.json)
  • Passing parameters and dynamic data
  • Rendering the report into a PDF stream
  • Returning the file to the user

Below is a simplified example of how the process looks.

2.1 JSON Template Structure

A basic ActiveReportsJS template includes layout, text boxes, and bindings. Templates are normally created using the ActiveReportsJS Designer tool.

{
  "Name": "StudentReport",
  "Type": "report",
  "DataSources": [
    {
      "Name": "ReportDataSource",
      "ConnectionProperties": {
        "DataProvider": "JSON",
        "ConnectString": "jsondata="
      }
    }
  ],
  "DataSets": [
    {
      "Name": "ReportDataSet",
      "Query": {
        "DataSourceName": "ReportDataSource",
        "CommandText": "$.value[*]"
      },
      "Fields": [
        {
          "Name": "qrcd",
          "DataField": "qrcd",
          "Type": "String"
        },
        {
          "Name": "studentId",
          "DataField": "studentId",
          "Type": "String"
        },
        {
          "Name": "ticketInfo",
          "DataField": "ticketInfo",
          "Type": "Object"
        }
      ]
    }
  ],
  "Page": {
    "PageWidth": "8.5in",
    "PageHeight": "11in",
    "Margins": {
      "Top": "0.5in",
      "Bottom": "0.5in",
      "Left": "0.5in",
      "Right": "0.5in"
    }
  },
  "Body": {
    "ReportItems": [
      {
        "Type": "textbox",
        "Name": "QrcdValue",
        "Value": "=Fields!qrcd.Value",
        "Style": {
          "FontFamily": "Noto Sans JP",
          "FontSize": "10pt"
        },
        "Top": "0.9in",
        "Left": "2in",
        "Width": "5.5in",
        "Height": "0.25in"
      },
      {
        "Type": "textbox",
        "Name": "TicketIdValue",
        "Value": "=Fields!ticketInfo.Value.ticketId",
        "Style": {
          "FontFamily": "Noto Sans JP",
          "FontSize": "10pt"
        },
        "Top": "2.3in",
        "Left": "2in",
        "Width": "5.5in",
        "Height": "0.25in"
      }
    ],
    "Height": "9.8in"
  }
}

2.2 Basic PDF Generation with Parameters

The simplest example: using parameters to inject values into the report template.

// pages/api/basic-report.ts
import { NextApiRequest, NextApiResponse } from 'next';
import { 
  outputPDFByARJ, 
  createParameter
} from '@/common/utils/sample-active-report-js';

export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  try {
    const { title, author } = req.query;

    // Create simple parameters
    const parameters = [
      createParameter('reportTitle', title || 'Default Report Title'),
      createParameter('authorName', author || 'Unknown Author'),
      createParameter('generationDate', new Date().toLocaleDateString('ja-JP'))
    ];

    // Generate PDF using the utility function
    await outputPDFByARJ({
      jsonUrl: '/data/basic-report-template.json',
      fileName: `basic-report-${Date.now()}.pdf`,
      parameters
    });

  } catch (error) {
    console.error('PDF Generation Error:', error);
    res.status(500).json({ error: 'Failed to generate PDF' });
  }
}

2.3 Working with Complex Data Structures

When dealing with complex business data containing nested objects, ActiveReportsJS requires careful handling. The template in our example expects data with nested structures like ticket information, status, and procedure details.

// pages/api/generate-student-report.ts
import { NextApiRequest, NextApiResponse } from 'next';
import { 
  outputPDFByARJ, 
  createParameter,
  flattenNestedObjectToParameters 
} from '@/common/utils/sample-active-report-js';

export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  try {
    const { studentId, orgCode } = req.query;

    // Sample complex data structure matching the report template
    const reportData = {
      value: [
        {
          qrcd: `QR${Date.now()}`,
          studentId: studentId || "STU001",
          orgCode: orgCode || "ORG001",
          ticketInfo: {
            ticketId: `TICKET${studentId || '001'}`,
            seatNumber: "A-001",
            facultyName: "Faculty of Arts",
            departmentName: "Design",
            categoryName: "General Admission",
            area: "Main Campus"
          },
          numberInfo: {
            number: "2024001",
            appNumber: `APP${studentId || '001234'}`,
            schoolName: "Sample High School"
          },
          resultInfo: {
            status: "Passed",
            comment: "Congratulations on your success!"
          },
          procedures: {
            fee: "$3,000",
            startDate: "2024-04-01",
            endDate: "2024-04-30"
          },
          qrCodeReference: {
            comment: "Scan for details",
            destAddress: "contact@example.com",
            destName: "Admission Office"
          }
        }
      ]
    };

    // Method 1: Using createParameter for simple values
    const simpleParams = [
      createParameter('reportTitle', 'Admission Certificate'),
      createParameter('generatedDate', new Date().toISOString().split('T')[0])
    ];

    // Method 2: Using flattenNestedObjectToParameters for complex objects
    const nestedData = {
      student: {
        name: "John Doe",
        studentId: "S2024001"
      },
      contact: {
        email: "john.doe@example.com",
        phone: "+1-555-0123"
      }
    };
    
    const flattenedParams = flattenNestedObjectToParameters(nestedData);

    // Combine all parameters
    const allParameters = [...simpleParams, ...flattenedParams];

    // Generate PDF using the utility function
    await outputPDFByARJ({
      jsonUrl: '/data/student-report-template.json', // Updated generic filename
      fileName: `report-${Date.now()}.pdf`,
      parameters: allParameters,
      data: reportData
    });

  } catch (error) {
    console.error('PDF Generation Error:', error);
    res.status(500).json({ error: 'Failed to generate PDF' });
  }
}

2.4 Client-Side Integration with React Components

Here's how to integrate PDF generation into your React components:

// components/ReportGenerator.tsx
import { useState } from 'react';
import { 
  outputPDFByARJ, 
  createParameter,
  flattenNestedObjectToParameters 
} from '@/common/utils/sample-active-report-js';

// Define interface for data structure
interface ReportData {
  qrcd: string;
  studentId: string;
  orgCode: string;
  ticketInfo: {
    ticketId: string;
    seatNumber: string;
    facultyName: string;
    departmentName: string;
    categoryName: string;
    area: string;
  };
  numberInfo: {
    number: string;
    appNumber: string;
    schoolName: string;
  };
  resultInfo: {
    status: string;
    comment: string;
  };
  procedures: {
    fee: string;
    startDate: string;
    endDate: string;
  };
  qrCodeReference: {
    comment: string;
    destAddress: string;
    destName: string;
  };
}

export const ReportGenerator = () => {
  const [isGenerating, setIsGenerating] = useState(false);
  const [studentId, setStudentId] = useState('');
  const [orgCode, setOrgCode] = useState('');

  const generateReport = async () => {
    if (!studentId || !orgCode) {
      alert('Please enter Student ID and Org Code');
      return;
    }

    setIsGenerating(true);
    try {
      // Prepare complex nested data
      const data: ReportData = {
        qrcd: `QR${Date.now()}`,
        studentId,
        orgCode,
        ticketInfo: {
          ticketId: `TICKET${studentId}`,
          seatNumber: 'A-001',
          facultyName: 'Faculty of Arts',
          departmentName: 'Design',
          categoryName: 'General Admission',
          area: 'Main Campus'
        },
        numberInfo: {
          number: studentId,
          appNumber: `APP${studentId}`,
          schoolName: 'Sample High School'
        },
        resultInfo: {
          status: 'Passed',
          comment: 'Congratulations on your success.'
        },
        procedures: {
          fee: '$3,000',
          startDate: '2024-04-01',
          endDate: '2024-04-30'
        },
        qrCodeReference: {
          comment: 'Access Link',
          destAddress: 'contact@example.com',
          destName: 'Office'
        }
      };

      // Create parameters using utility functions
      const parameters = [
        createParameter('reportTitle', 'Admission Certificate'),
        createParameter('generationTime', new Date().toLocaleString('en-US')),
        ...flattenNestedObjectToParameters({
          metadata: {
            version: '1.0',
            generatedBy: 'Report System'
          }
        })
      ];

      // Generate PDF with complex data
      await outputPDFByARJ({
        jsonUrl: '/data/student-report-template.json',
        fileName: `admission-certificate-${studentId}.pdf`,
        parameters,
        data: { value: [data] } // Wrap in { value: [...] } as expected by template
      });

      alert('PDF generated successfully!');
    } catch (error) {
      console.error('Generation failed:', error);
      alert('Failed to generate PDF. Please try again.');
    } finally {
      setIsGenerating(false);
    }
  };

  return { generateReport };
}

2.5 Understanding the outputPDFByARJ Function

Before diving into complex examples, let's understand the outputPDFByARJ function - this is the main utility function for generating PDFs from JSON templates:

/**
 * Generates and downloads a PDF file from a report definition JSON (RDL).
 * This function fetches the report layout, applies specified parameters, runs the report,
 * and then exports the result as a PDF, triggering a download in the browser.
 */
export const outputPDFByARJ = async ({
  jsonUrl,
  fileName,
  parameters,
  password,
  data,
}: OutputPdfParams): Promise<void> => {
  // Check if running in browser environment
  if (typeof window === 'undefined') {
    console.log('PDF Output Error: Not running in browser environment');
    return;
  }

  // Helper: Temporarily neutralize i18n mutating methods to avoid vendor side-effects
  const runWithI18nPatched = async (work: () => Promise<void>) => {
    // Save original methods
    const i18nAny = i18n as unknown as Record<string, unknown>;
    const original = {
      use: i18nAny.use,
      init: i18nAny.init,
      changeLanguage: i18nAny.changeLanguage,
      addResourceBundle: i18nAny.addResourceBundle,
      addResources: i18nAny.addResources,
      addResource: i18nAny.addResource,
      loadLanguages: i18nAny.loadLanguages,
      loadNamespaces: i18nAny.loadNamespaces,
    };

    // Patch to no-ops (non-mutating)
    i18nAny.use = () => i18n;
    i18nAny.init = () => i18n;
    i18nAny.changeLanguage = () => i18n.language;
    i18nAny.addResourceBundle = () => undefined;
    i18nAny.addResources = () => undefined;
    i18nAny.addResource = () => undefined;
    i18nAny.loadLanguages = () => undefined;
    i18nAny.loadNamespaces = () => undefined;

    try {
      await work();
    } finally {
      // Restore originals
      i18nAny.use = original.use;
      i18nAny.init = original.init;
      i18nAny.changeLanguage = original.changeLanguage;
      i18nAny.addResourceBundle = original.addResourceBundle;
      i18nAny.addResources = original.addResources;
      i18nAny.addResource = original.addResource;
      i18nAny.loadLanguages = original.loadLanguages;
      i18nAny.loadNamespaces = original.loadNamespaces;
    }
  };

  await runWithI18nPatched(async () => {
    // Dynamically import ActiveReports to avoid SSR issues
    const { Core, PdfExport } = await import(
      './wrappers/activereports-wrapper'
    );

    // Fetch the report layout from the provided URL
    const response = await fetch(jsonUrl);
    const jsonData = await response.json();

    // Modify ConnectString with JSON data
    jsonData.DataSources[0].ConnectionProperties.ConnectString = `jsondata=${JSON.stringify(data)}`;

    const fontsToRegister = [
      {
        name: 'IPA EXG',
        source: '/fonts/ipaexg.ttf',
      },
      {
        name: 'IPA EXM',
        source: '/fonts/ipaexm.ttf',
      },
    ];
    await Core.FontStore.registerFonts(...fontsToRegister);

    // Load the report definition
    const report = new Core.PageReport();
    await report.load(jsonData);

    if (!data) {
      // Normalize and apply parameters to the report
      const normalizedParameters = parameters.map((param) => ({
        ...param,
        Value: normalizeParameterValue(param.Value),
      }));

      // Apply normalized parameters to the report
      await report.reportParameters.applySteps(normalizedParameters as any);
    }

    // Run the report to generate the document
    const pageDocument = await report.run();

    // Export the document to a PDF blob and initiate download
    const pdfSettings = password
      ? { security: { userPassword: password } }
      : undefined;
    const pdfBlob = await PdfExport.exportDocument(pageDocument, pdfSettings);
    pdfBlob.download(fileName);
  });
};

Parameters of the outputPDFByARJ function:

  • jsonUrl: Path to the JSON template file
  • fileName: Output PDF file name
  • parameters: Array of parameters to pass to the report
  • password (optional): Password to protect the PDF
  • data: JSON data to bind to the report template

The function will:

  1. Fetch the JSON template from the URL
  2. Apply parameters and data
  3. Render the report into a PDF
  4. Trigger download in the browser

3. Best Practices When Using ActiveReportsJS with Next.js

  • Store report templates in a secure directory of your project.
  • Validate all parameters to avoid unwanted data injection.
  • Use API routes to protect server-side rendering logic.
  • Do not expose confidential structures from real projects.
  • Always review generated code before deployment.

4. Additional Considerations

  • Using ActiveReportsJS together with i18next / other i18n libraries:
    If your Next.js project already uses i18next (or similar JavaScript localization / translation libraries), be aware that there can be potential conflicts — especially if your report templates or the rendering logic depend on global locale settings, overridden prototypes, or modifications to built‑in objects.
    To avoid unexpected behavior (e.g. locale/format overrides, translation JSON interfering with report JSON, or i18next initialization affecting global state), you should isolate the report‑rendering context from the rest of your app. For example: load and render the JSON template without i18next’s context, or ensure i18next is not active / initialized when generating PDF on the server.
  • Licensing: Free for development / evaluation — but production requires a valid license key:
    ActiveReportsJS provides a “trial / evaluation mode” which allows you to develop locally without a license key. However, in this mode, exported reports will contain a watermark and the standalone designer has a limited evaluation period. For more details, see the official licensing documentation.
    When you deploy to staging or production (or distribute your application), you need to purchase the appropriate ActiveReportsJS license (e.g., a distribution / deployment license), generate a distribution key, and configure your application to set that license key (typically via Core.setLicenseKey(...) or similar). This will remove the watermark and ensure compliance with licensing terms.

5. Conclusion

Using ActiveReportsJS with Next.js provides a clean and scalable way to generate PDF files from JSON templates. By combining parameters, dynamic data, and predefined layouts, developers can create powerful report systems without building UI elements manually.

If you are exploring modern reporting solutions and want to apply them in real-world applications, our company encourages continuous learning and high-quality engineering. For more information about our technology expertise or to discuss potential collaboration, please reach out through our official contact channels.

 


SaaS, PaaS and IaaS — Understand the Models, Choose the Right One

As companies move toward cloud-based solutions, one question comes up again and again: Should we use SaaS, PaaS, IaaS, or stay On-Premises?

The answer depends on your team, your budget, your security needs, and how fast you need to deliver. Let’s break down each model in simple terms.

1. SaaS (Software as a Service)

You use a ready-made product hosted by someone else.
You do not install servers, manage infrastructure, or worry about upgrades.
You simply subscribe and use the app.
Examples: Gmail, Zoom, Slack, Salesforce

Pros

  • No IT setup required
  • Works immediately
  • Automatic updates and maintenance
  • Lower upfront costs

Cons

  • Limited customization
  • Vendor lock-in
  • Cost increases with number of users

Use SaaS when:

You need a solution that “just works” and don’t want to manage infrastructure.

2. PaaS (Platform as a Service)

You build your application, and the platform handles the environment.
PaaS gives you everything you need to develop: runtime, frameworks, databases, deployment tools, scaling, CI/CD.
You focus on coding.
The platform takes care of servers and OS.
Examples: Heroku, Google App Engine, Azure App Service

Pros

  • No IT setup required
  • No server or OS management
  • Strong automation and CI/CD support
  • Great for prototypes and MVPs

Cons

  • Limited control over underlying infrastructure
  • Can be expensive as the system grows
  • Locked to the platform ecosystem

Use PaaS when:

You want to deliver features quickly and don’t want to spend time on DevOps or server management.

3. IaaS (Infrastructure as a Service)

You rent cloud infrastructure—servers, storage, networking—and configure everything yourself.
Think of it as a virtual data center.
You choose CPU, memory, connectivity, OS, and deploy however you like.
Examples: AWS EC2, Azure Virtual Machines, Google Compute Engine

Pros

  • High flexibility
  • Supports any tech stack
  • Scales instantly
  • You control OS and application layer

Cons

  • Requires DevOps/cloud skills
  • Configuration takes time
  • Mismanagement can burn money fast

Use IaaS when:

You want control similar to On-Prem but don’t want to buy physical hardware.

Which one should you choose?

There is no universal “best” option. There is only what fits your needs.
  • SaaS: choose when you want convenience and minimal technical work.
  • PaaS: choose when you’re building an app and want to move fast.
  • IaaS: choose when you need flexibility and control.


11 Common Mistakes Experienced Java Developers Make in Spring Boot

As an experienced developer proficient in Object-Oriented Programming (OOP), you understand the foundational principles of encapsulation, inheritance, polymorphism, and abstraction. Spring is, at its core, the most ambitious application of OOP principles ever built at scale.

You know how to design clean, decoupled classes, apply SOLID principles, and manage object lifecycles.

However, transitioning that expertise to the Java Spring Framework and Spring Boot environment often presents a unique set of pitfalls. Spring introduces its own powerful paradigms, primarily Inversion of Control (IoC) and Dependency Injection (DI), which fundamentally alter how objects are created, managed, and interact. The "Spring Magic" of auto-configuration, annotations, and conventions can sometimes lead to shortcuts that violate core OOP tenets or ignore the framework's best practices.

This blog post explores the most common mistakes developers, particularly those with a strong OOP background, make when diving into the Spring ecosystem. We will cover areas from misusing annotations to neglecting performance and security, providing actionable advice to write cleaner, more maintainable, and robust Spring applications.

The most frequent source of errors stems from a conceptual clash between manual OOP object creation and Spring's automated lifecycle management.

Mistake 1: Ignoring Dependency Injection (DI) and Manually Instantiating Objects

A fundamental OOP habit is using the new keyword to create an object when you need it. In Spring, this is a major anti-pattern for framework-managed components.

The Mistake:

Instead of letting Spring inject dependencies into a component, developers might manually instantiate the service:

@Service
public class OrderService {
    private final PaymentService paymentService = new PaymentService(); // ← Death sentence
}

You’ve been trained for years to create instances from classes. So you do it instinctively. The result?

  • The paymentService is not a Spring bean → no transaction management, no AOP, no proxying.
  • You cannot mock it in unit tests without PowerMockito or ugly workarounds.
  • You cannot swap implementations per profile (e.g., MockPaymentService in tests or SandboxPaymentService in staging).

The Fix:

Always rely on Spring's DI mechanism. Use Constructor Injection (the preferred method), Setter Injection, or Field Injection (using @Autowired).

Constructor Injection 1

@Service
public class OrderService {

    private final PaymentService paymentService;

    public OrderService(PaymentService paymentService) {
        this.paymentService = paymentService;
    }
}

Constructor Injection 2

@Service
@RequiredArgsConstructor // Lombok, or write constructor yourself
public class OrderService {
    private final PaymentService paymentService; // Constructor injection FTW
}

Setter Injection

@Service
public class OrderService {

    private PaymentService paymentService;

    @Autowired
    public void setPaymentService(PaymentService paymentService) {
        this.paymentService = paymentService;
    }
}

Field Injection

@Service
public class OrderService {

    @Autowired
    private PaymentService paymentService;

    public void placeOrder() {
        paymentService.processPayment();
    }
}

Mistake 2: Using @Autowired Without Qualifiers When Multiple Beans Exist

The mistake:

@Autowired
private PaymentService paymentService; // NoUniqueBeanDefinitionException at startup

Or the even worse variant:

@Autowired
private List<PaymentService> paymentServices; // You get all implementations in unknown order

Suddenly adding a new payment provider (say, Zalo Pay, Apple Pay) breaks production because the list order changed or the wrong one was injected.

The Fix (choose one):

  • Primary bean + @Primary
  • @Qualifier("stripePaymentService")
  • Better: Strategy pattern with a Map<String, PaymentProvider> injected and qualified by name
  • Best: Small focused interfaces instead of one fat PaymentService interface

Mistake 3: Misusing @Configuration and @Bean

Developers often struggle with when and how to define a "bean" outside of the standard component scanning.

The Mistake:

Defining a configuration method as @Bean within a class that isn't annotated with @Configuration (or an equivalent like @SpringBootApplication). Furthermore, they might accidentally create multiple instances of a singleton bean when it should be managed by the container.

The Fix:

A method annotated with @Bean is designed to be executed by a class annotated with @Configuration. This combination tells Spring, "When the application starts, run this method and register its return value as a singleton object (a bean) in the IoC container." Understand that most beans default to the singleton scope, aligning with the OOP practice of having a single point of control for certain resources.

Mistake 4: Logic Overload in Controllers

This is arguably the most common mistake that violates the Single Responsibility Principle (SRP).

The Mistake:

Developers, in an effort to speed up development, place business logic, complex data validation, or even direct database access logic inside the @RestController methods.

// Bad Practice: Controller doing too much
@RestController
public class UserProfileController {
    @Autowired private UserRepository repository;

    @PostMapping("/users")
    public ResponseEntity<User> createUser(@RequestBody User user) {
    // Business logic/validation here instead of Service layer
       if (user.getAge() < 18) {
            throw new InvalidUserException();
        }
      // Direct repository call
        repository.save(user);
      // ...
     }
}

The Fix:

Enforce strict layering.

  • Controller (@RestController): Only handles HTTP request/response mapping, request validation (e.g., using JSR-303 annotations like @Valid), and calling the appropriate Service layer method. It acts as the "gatekeeper."

  • Service (@Service): Holds all the business logic, transaction boundaries (@Transactional), and orchestrates calls to multiple Repositories. It's the "brain" of the application.

  • Repository (@Repository): Only handles direct data access operations (CRUD) against the persistence store. @Repository enables exception translation from SQLException → DataAccessException hierarchy. If you slap @Component on your JPA repositories, you lose that translation and end up with raw, unchecked exceptions bubbling up. 
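
Here is a hedged sketch of the same createUser flow with the layers separated. UserService and register() are assumed names, each class would normally live in its own file, and imports are omitted for brevity:

// Controller: only HTTP mapping and bean validation.
@RestController
public class UserProfileController {

    private final UserService userService;

    public UserProfileController(UserService userService) {
        this.userService = userService;
    }

    @PostMapping("/users")
    public ResponseEntity<User> createUser(@Valid @RequestBody User user) {
        return ResponseEntity.ok(userService.register(user));
    }
}

// Service: business rules and the transaction boundary.
@Service
public class UserService {

    private final UserRepository repository;

    public UserService(UserRepository repository) {
        this.repository = repository;
    }

    @Transactional
    public User register(User user) {
        if (user.getAge() < 18) {
            throw new InvalidUserException();
        }
        return repository.save(user);
    }
}

// Repository: data access only.
@Repository
public interface UserRepository extends JpaRepository<User, Long> {
}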

Mistake 5: Missing or Misplaced Transactional Boundaries

The Mistake:

Two opposite extremes I see constantly:

A. Forgetting @Transactional entirely → no rollback on exceptions.
B. Putting @Transactional on every service method → huge transactions, table locks, deadlocks.

Worse: self-invocation bypasses the proxy.

@Service
public class OrderService {

    public void createOrder() {
        validateOrder();
        saveOrder(); // No transaction here because of the self-call!
    }

    @Transactional
    public void saveOrder() {
        // ...
    }
}

The Fix:

  • Put @Transactional only on the public method that orchestrates the use case
  • Never call @Transactional methods from within the same class (see the sketch after this list)
  • If you must, extract the method to another Spring bean, or use @Transactional(propagation = Propagation.REQUIRES_NEW) carefully
  • Always set readOnly = true for query methods
  • Use @Transactional at class level only if literally every method needs it (rare)
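
A hedged sketch of the fix for the self-invocation example above: the transactional method is extracted into a separate bean (OrderPersistenceService is an assumed helper name; imports are omitted), so the call crosses the Spring proxy and the transaction actually applies:

@Service
public class OrderService {

    private final OrderPersistenceService persistence;

    public OrderService(OrderPersistenceService persistence) {
        this.persistence = persistence;
    }

    public void createOrder(Order order) {
        validateOrder(order);
        persistence.saveOrder(order); // goes through the proxy → transactional
    }

    private void validateOrder(Order order) { /* ... */ }
}

@Service
public class OrderPersistenceService {

    @Transactional
    public void saveOrder(Order order) {
        // write operations here roll back on unchecked exceptions
    }

    @Transactional(readOnly = true)
    public List<Order> findOrdersByCustomer(Long customerId) {
        // readOnly = true lets the ORM/driver skip dirty checking for pure queries
        return List.of();
    }
}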

Mistake 6: Not Understanding N+1 Queries and Fetch Strategies

The Mistake:

In MyBatis, the N+1 problem commonly happens when a developer defines a parent-child mapping where the child collection is loaded using another SQL query inside the <collection> tag.

<resultMap id="userResultMap" type="User">
    <id property="id" column="id"/>
    <result property="name" column="name"/>

    <!-- Causes N+1 queries -->
    <collection property="orders"
        ofType="Order"
        select="findOrdersByUserId"
        column="id"/>
</resultMap>

If you call

List<User> users = userMapper.findAllUsers();

MyBatis will execute: 1 query to retrieve all users + N queries to retrieve orders for each user

1 (parents) + N (children) = N+1 queries

The Fix:

The most efficient way to avoid the N+1 problem in MyBatis is to replace nested selects with a single JOIN query, and let MyBatis map the flattened results.

<select id="findAllUsersWithOrders" resultMap="userOrderResultMap">
    SELECT u.id  AS user_id,
           u.name,
           o.id  AS order_id,
           o.product_name
    FROM users u
    LEFT JOIN orders o ON u.id = o.user_id
</select>

<resultMap id="userOrderResultMap" type="User">
    <id property="id" column="user_id"/>
    <result property="name" column="name"/>
    <collection property="orders" ofType="Order">
        <id property="id" column="order_id"/>
        <result property="productName" column="product_name"/>
    </collection>
</resultMap>

  • Only one SQL call is executed.

  • MyBatis maps the parent-child relationships in memory.

  • No performance penalties from multiple round-trips to the database.

Mistake 7: Storing Sensitive Data in application.properties / application.yml

This is a critical security vulnerability.

The Mistake:

Hardcoding sensitive information such as database passwords, API keys, or cloud credentials directly into the application's configuration files. This is easily exposed if the source code is compromised or accidentally checked into a public repository.

The Fix:

Use Environment Variables or Externalized Configuration. Spring Boot is designed to read configuration properties from multiple sources, with environment variables taking precedence.

# Instead of:
# spring.datasource.password=myhardcodedsecret

# Use this in application.properties/yml:
spring.datasource.password=${DB_PASSWORD} 

Then, set the DB_PASSWORD environment variable on the server. For production, consider dedicated tools like Spring Cloud Config or secrets managers (e.g., AWS Secrets Manager, HashiCorp Vault).

Mistake 8: Leaking Entity Objects Out of the Service Layer

This violates encapsulation and can lead to unintended state changes.

The Mistake:

Returning raw JPA @Entity objects (or MyBatis result objects) directly from a Service layer method to the Controller, and then exposing them as the JSON response. This breaks the domain boundary. Furthermore, lazy-loaded collections on the Entity might be accessed outside of the transaction scope (e.g., in the Controller or during JSON serialization), leading to the dreaded LazyInitializationException.

The Fix:

Implement the Data Transfer Object (DTO) pattern. The Service layer should map the internal @Entity objects to an external DTO before returning it. The Controller only works with DTOs. This ensures encapsulation (internal data structure is protected) and prevents serialization errors.
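
A minimal sketch of that mapping, assuming a UserDto record and a getUser use case (the field names, UserQueryService, and ResourceNotFoundException are illustrative; imports are omitted):

// DTO: exposes only what the API should return.
public record UserDto(Long id, String name) {

    public static UserDto from(User user) {
        // Copy plain fields only; passwords, roles, and lazy collections stay internal.
        return new UserDto(user.getId(), user.getName());
    }
}

@Service
public class UserQueryService {

    private final UserRepository repository;

    public UserQueryService(UserRepository repository) {
        this.repository = repository;
    }

    @Transactional(readOnly = true)
    public UserDto getUser(Long id) {
        // Mapping happens inside the transaction, so nothing lazy is touched later
        // by the Controller or by JSON serialization.
        return repository.findById(id)
                .map(UserDto::from)
                .orElseThrow(() -> new ResourceNotFoundException("User not found: " + id));
    }
}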

With MyBatis you don't get JPA's LazyInitializationException or persistence-context issues, but you still face:

Problem                                  | JPA   | MyBatis
LazyInitializationException              | ✔ Yes | ❌ No
Dirty checking / automatic persistence   | ✔ Yes | ❌ No
Leaking the domain model                 | ✔ Yes | ✔ Yes
Coupling the DB schema to the API        | ✔ Yes | ✔ Yes
Exposing sensitive/internal fields       | ✔ Yes | ✔ Yes
Serialization recursion issues           | ✔ Yes | ✔ Yes

The DTO pattern is still a best practice with MyBatis as well as with JPA.

Mistake 9: Misusing @Async and Thread Pools

The Mistake:

Simply adding @EnableAsync and annotating a method with @Async without configuring a thread pool.
// Bad Practice: Relying on default behavior
@Service
public class EmailService {

    @Async
    public void sendEmail(String recipient) {
        // Expensive IO operation
    }
}
The Fix:

Three pieces are needed: a bounded thread pool, a return type that carries exceptions, and a call that goes through the Spring proxy.

  • The Configuration (Avoiding OOM)

@Configuration
@EnableAsync
public class AsyncConfig {

    @Bean(name = "taskExecutor")
    public Executor taskExecutor() {
        // Fix: the default SimpleAsyncTaskExecutor creates a new thread per call → OOM in production.
        // Configure a proper ThreadPoolTaskExecutor to limit resource usage.
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(10);
        executor.setMaxPoolSize(50);
        executor.setQueueCapacity(500);
        executor.setThreadNamePrefix("AsyncThread-");
        executor.initialize();
        return executor;
    }
}

  • The Service (Handling Proxy & Exceptions)

@Service
public class EmailService {

    private static final Logger logger = LoggerFactory.getLogger(EmailService.class);

    // Rule 1: @Async methods must be public (for AOP proxying) and called from another bean.
    @Async("taskExecutor")
    public CompletableFuture<String> sendEmail(String recipient) {
        // Rules 2 & 3: always return a CompletableFuture<?>.
        // If we return 'void', exceptions are swallowed and lost.
        try {
            logger.info("Sending email to " + recipient);
            Thread.sleep(2000);

            if (recipient.contains("fail")) {
                throw new RuntimeException("Email server timeout!");
            }

            return CompletableFuture.completedFuture("Email Sent Successfully");

        } catch (Exception e) {
            // Properly capture the error so the caller can handle it.
            // Otherwise the exception would be swallowed by a void return type.
            return CompletableFuture.failedFuture(e);
        }
    }
}

  • The Usage (Calling from another Bean)

@RestController
public class EmailController {

    @Autowired
    private EmailService emailService; // Injecting the bean (external call)

    @PostMapping("/send")
    public void send(@RequestParam String email) {
        // This is a valid call because it goes from the Controller bean to the Service bean.
        // If we called this.sendEmail() inside EmailService, @Async would be ignored.
        emailService.sendEmail(email);
    }
}

Mistake 10: Inadequate or Generic Exception Handling

Poor exception handling leads to cryptic HTTP 500 errors and a terrible user experience.

The Mistake:

Catching generic exceptions (catch (Exception e)) in the service layer, suppressing specific exceptions, or allowing application exceptions to bubble up to the client, exposing internal implementation details.

The Fix:

Implement proper Global Exception Handling using the @ControllerAdvice and @ExceptionHandler annotations. This allows you to centralize error handling, map your custom, specific application exceptions (e.g., ResourceNotFoundException, InvalidInputException) to the correct HTTP status codes (e.g., 404, 400), and return a standardized, clean error response object (e.g., JSON payload).
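
A minimal, hedged sketch of such a handler (ErrorResponse and the custom exception classes are illustrative names, not part of any library):

import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.ExceptionHandler;
import org.springframework.web.bind.annotation.RestControllerAdvice;

@RestControllerAdvice
public class GlobalExceptionHandler {

    public record ErrorResponse(String code, String message) {}

    @ExceptionHandler(ResourceNotFoundException.class)
    public ResponseEntity<ErrorResponse> handleNotFound(ResourceNotFoundException ex) {
        return ResponseEntity.status(HttpStatus.NOT_FOUND)
                .body(new ErrorResponse("NOT_FOUND", ex.getMessage()));
    }

    @ExceptionHandler(InvalidInputException.class)
    public ResponseEntity<ErrorResponse> handleInvalidInput(InvalidInputException ex) {
        return ResponseEntity.badRequest()
                .body(new ErrorResponse("BAD_REQUEST", ex.getMessage()));
    }

    @ExceptionHandler(Exception.class)
    public ResponseEntity<ErrorResponse> handleUnexpected(Exception ex) {
        // Catch-all: log the details server-side, never leak stack traces to the client.
        return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR)
                .body(new ErrorResponse("INTERNAL_ERROR", "An unexpected error occurred"));
    }
}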

Mistake 11: Ignoring Spring Security Basics

Security is often an afterthought, and developers fail to understand how Spring Security integrates with the application context.

The Mistake:

Not understanding the basics of the Spring Security Filter Chain, relying solely on annotations like @PreAuthorize without a configured authentication provider, or storing passwords in plaintext.

The Fix:

Every Spring Boot application should implement security from the start. Use a modern, strong password encoder (like BCryptPasswordEncoder), configure a custom UserDetailsService, and ensure you understand the flow: Request -> Filter Chain -> Authentication -> Authorization -> Dispatcher Servlet -> Controller. Even for non-critical endpoints, you should explicitly define them as public (e.g., a whitelist) and secure everything else by default.
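
A hedged sketch of that baseline using the Spring Security 6 lambda-style configuration (the /public/** and /login whitelist paths are example values for illustration):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.Customizer;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.crypto.bcrypt.BCryptPasswordEncoder;
import org.springframework.security.crypto.password.PasswordEncoder;
import org.springframework.security.web.SecurityFilterChain;

@Configuration
@EnableWebSecurity
public class SecurityConfig {

    @Bean
    public SecurityFilterChain filterChain(HttpSecurity http) throws Exception {
        http
            .authorizeHttpRequests(auth -> auth
                .requestMatchers("/public/**", "/login").permitAll() // explicit whitelist
                .anyRequest().authenticated())                       // everything else secured by default
            .httpBasic(Customizer.withDefaults());
        return http.build();
    }

    @Bean
    public PasswordEncoder passwordEncoder() {
        // Never store plaintext passwords; BCrypt is a strong adaptive hash.
        return new BCryptPasswordEncoder();
    }
}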

Conclusion

The Java Spring Framework and Spring Boot offer immense power, allowing you to build scalable and robust applications rapidly. For developers with a strong OOP background, mastery lies in shifting control from manual object management to embracing Spring's core mechanisms: IoC, DI, and AOP.

To transition successfully and write clean, professional Spring code, focus on these non-negotiable best practices:

  • Stop Using new: Fully rely on Constructor Injection for all managed components, letting Spring manage object lifecycle and provide essential features (transactions, security, AOP).
  • Maintain Layer Discipline: Enforce strict SRP by keeping Business Logic strictly in the Service Layer and ensuring Controllers only handle request mapping.
  • Protect Your Domain: Never leak Entities; always use the DTO pattern for communication between the Service and Controller layers to maintain encapsulation and prevent persistence context issues.
  • Secure and Optimize: Prioritize security by externalizing sensitive secrets and optimize performance by resolving N+1 query issues using JOIN FETCH or appropriate mapping strategies.
  • Handle Errors Globally: Implement Global Exception Handling (@ControllerAdvice) to provide clean, standardized error responses and avoid exposing internal stack traces.

Mastering Spring is about understanding where the framework takes over. By embracing these idioms, you transform from a developer who uses Spring to one who thinks in Spring, resulting in applications that are cleaner, more maintainable, and built for scale.

Whether you need scalable software solutions, expert IT outsourcing, or a long-term development partner, ISB Vietnam is here to deliver. Let’s build something great together—reach out to us today. Or click here to explore more of ISB Vietnam's case studies.

References

[1] Fadatare, R. Top 10 Mistakes in Spring Boot Microservices and How to Avoid Them. Java Guides. https://www.javaguides.net/2025/01/top-10-mistakes-in-spring-boot-microservices.html

[2] Spring Framework Documentation. (n.d.). Dependency Injection. Retrieved November 30, 2025, from https://docs.spring.io/spring-framework/reference/core/beans/dependencies/factory-collaborators.html

[3] Mihalcea, V. (2023, August 11). The best way to use DTOs with JPA and Hibernate. https://vladmihalcea.com/the-best-way-to-use-dtos-with-jpa-and-hibernate/

[4] Baeldung. (2024, March 25). The @Transactional Pitfall. https://www.baeldung.com/transactional-spring-proxies

[5] GeeksforGeeks. Top 10 Most Common Spring Framework Mistakes. Retrieved November 30, 2025, from https://www.geeksforgeeks.org/java/top-10-most-common-spring-framework-mistakes/

[6] Assistance from ChatGPT and Gemini.

TECH

December 2, 2025

Improving Application Performance with Laravel Queues

In modern web applications, performance and user experience are critical. Users expect an immediate response to every action, yet some tasks, such as sending emails, processing images, or generating reports, take a long time. If we perform them while responding to the request, the application becomes slow.
A queue solves this problem. In this article, we will look at what Laravel Queues are, why you should use them, and the simplest way to implement them.
 

What is a Queue?

Queues allow us to push time-consuming tasks to the background, instead of executing them immediately when a user sends a request.
Some examples of using Queues:
    •  Sending an email.
    •  Processing and resizing images when users upload.
    •  Sending notifications to multiple people.
    •  Generating PDF reports.
    •  Processing payments.

Why should we use Queues?

For example, when a user registers for an account, we want to send a welcome email.
Without Queues:
    1. Save user information.
    2. Create email content.
    3. Send an email.
    4. Return results to the browser.
If sending an email takes 3-5 seconds, users will have to wait a long time.
Using Queues:
    1. Save user information.
    2. Push the email request to the queue.
    3. Return results immediately to the user.
    4. Send an email in the background.
This approach is much more effective!
 

How to implement Laravel Queues simply

Step 1: Configure Queues Driver

Laravel supports many queue drivers, such as database, Redis, Amazon SQS, etc.
Here we use the database driver for the example.
In the .env file, edit the following line:
 
QUEUE_CONNECTION=database
 
Create the database table to hold the jobs by running the migration:
 
php artisan queue:table
php artisan migrate
 

Step 2: Create a Job

Run the command:
 
php artisan make:job SendWelcomeEmail
 
This will generate a class at app/Jobs/SendWelcomeEmail.php.
In that file, store the user on the job and add the email-sending logic:
 
// The user passed to dispatch() is stored on the job via the constructor.
public function __construct(public User $user) {}

public function handle()
{
    Mail::to($this->user->email)->send(new WelcomeEmail($this->user));
}
 

Step 3: Dispatching Jobs

In the Controller, when the user registers, we call dispatch on the job itself; the arguments passed to dispatch are forwarded to the job's constructor.
 
SendWelcomeEmail::dispatch($user);
 
At this time, the email sending job has been queued.
 

 Step 4: Run the Queue Worker

To start a queue worker and process jobs, we need to run the command below
 
php artisan queue:work
 
The worker is a long-lived process, so it will not automatically pick up changes to your source code. Restart it during each deployment with:
 
php artisan queue:restart
 
In production, use a process monitor such as Supervisor to keep the queue workers running permanently in the background.
 

Conclusion

The Queue is a powerful tool that helps us improve application performance and provide a smooth user experience. Time-consuming tasks such as sending emails, processing images, generating reports, etc., should be queued to run in the background.
If you are building a Laravel application and have not yet used Queue, you should try to implement it in your application. This will not only make your application run faster but also make it easier to extend and manage later.

Whether you need scalable software solutions, expert IT outsourcing, or a long-term development partner, ISB Vietnam is here to deliver. Let’s build something great together—reach out to us today. Or click here to explore more of ISB Vietnam's case studies.

[References]

  1. https://laravel.com/docs/12.x/queues
  2. https://viblo.asia/p/gioi-thieu-ve-queues-trong-laravel-gAm5yqgD5db
  3. https://www.luckymedia.dev/blog/laravel-for-beginners-using-queues (Image source)
TECH

December 2, 2025

WOVN.io: A Simple Way to Localize Your Website Without Code Changes

In today’s global web ecosystem, users expect content in their own language.
Whether you’re building a SaaS platform, e-commerce store, or admin dashboard, adding multilingual support can quickly become a major challenge — involving i18n libraries, translation files, and ongoing maintenance.

WOVN.io solves this problem with a lightweight and automated approach: it localizes your website or web app in minutes, without changing your existing code.

What is WOVN.io?

WOVN.io is a SaaS localization service that automatically translates your website or web app into multiple languages.
It detects the visitor’s browser language and displays the appropriate translation — all managed from a simple web dashboard.

You can use machine translations, human-reviewed content, or a combination of both. It’s designed to work seamlessly with any framework, including single-page applications (SPA) like React, Vue, or Next.js.

Key Benefits

  • Quick setup – integrate with just one line of script.

  • No refactoring required – works with your existing HTML and routing.

  • Centralized translation management – manage translations from WOVN’s dashboard.

  • SEO-friendly multilingual URLs – automatically generates /en, /ja, /fr, etc.

  • Automatic & manual translation options – combine AI speed with human accuracy.

Installation & Integration

1. Sign up for WOVN.io

Go to https://wovn.io and create an account.
Once you set up a project, WOVN will generate a unique script snippet for you, for example:

<script src="//j.wovn.io/1" data-wovnio="key=your_project_key"></script>

2. Add the script to your project’s index.html file

Open your project’s public/index.html file and insert the script inside the <head> tag:

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>My App</title>
    <!-- WOVN.io script -->
    <script src="//j.wovn.io/1" data-wovnio="key=your_project_key"></script>
  </head>
  <body>
    <div id="root"></div>
  </body>
</html>

3. Run and test

Start your project as usual:

yarn start

WOVN will automatically detect the user’s browser language and load the translated content.
You’ll also see a language switcher appear (added automatically by WOVN).

Why Use WOVN.io?

  • Instant multilingual setup – go global in minutes.

  • Improved international SEO – optimized localized URLs for each language.

  • Simplified maintenance – manage translations outside your codebase.

  • Scalable localization – perfect for SaaS, marketing sites, and e-commerce.

  • Supports both static and dynamic content through WOVN’s API.

Conclusion

If you’re expanding your product globally and need a fast, no-hassle way to localize your website, WOVN.io is the perfect solution.
With just one line of code, your app can speak to the world — literally.

If you're seeking a reliable, long-term partner who values collaboration and shared growth, feel free to reach out to us here: Contact ISB Vietnam
