TECH

December 7, 2024

An Introduction to SOAP APIs

SOAP (Simple Object Access Protocol) is a protocol used to exchange structured information between systems over a network. It is based on XML and provides a way for applications to communicate using standard messaging formats. SOAP was designed with a focus on reliability, security, and extensibility, making it a strong choice for enterprise-level applications. Despite being older than alternative approaches to web services such as REST, SOAP is still widely used in critical systems that require robust features.

What is SOAP?

SOAP is a protocol that defines a set of rules for structuring messages and allows communication between applications over different platforms and programming languages. A SOAP message is always an XML document, and it follows a strict structure that includes an envelope, header, body, and optionally, a fault element for error handling.

Key components of a SOAP message (a minimal example follows this list):

  • Envelope: The outermost part of the message, which contains all other elements.
  • Header: Contains metadata, such as authentication or routing information.
  • Body: The main content of the message, where the actual data is stored.
  • Fault: A part of the message for reporting errors, useful for debugging and issue resolution.
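
To make these parts concrete, below is a minimal sketch of a SOAP 1.1 envelope carrying the same GetWeather call used in the demo later in this post. The envelope namespace is the standard SOAP 1.1 one; the operation namespace and payload are illustrative rather than copied from a specific WSDL.

<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Header>
    <!-- Optional metadata, e.g. authentication or routing information -->
  </soap:Header>
  <soap:Body>
    <GetWeather xmlns="http://www.webserviceX.NET">
      <CityName>Ho Chi Minh</CityName>
      <CountryName>Viet Nam</CountryName>
    </GetWeather>
    <!-- On an error, the Body would contain a <soap:Fault> element instead -->
  </soap:Body>
</soap:Envelope>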

SOAP can work over various transport protocols like HTTP, SMTP, or JMS, and it is known for its reliability and security features, making it suitable for complex, transactional, and high-security applications.

When to use SOAP?

SOAP is particularly suited for scenarios that require high levels of security, reliability, and formal contracts between client and server. Here are some specific cases when SOAP is the ideal choice:

  1.  Enterprise Systems: SOAP is perfect for large-scale, mission-critical applications in industries such as banking, finance, or healthcare, where security and data integrity are essential. For example, SOAP is often used in payment processing systems, where transactions must be secure, reliable, and auditable.
  2. Transactional Systems: SOAP supports ACID (Atomicity, Consistency, Isolation, Durability) properties, making it ideal for applications that require guaranteed message delivery, such as financial transactions, stock trading systems, and order processing systems.
  3. Systems with Complex Security Requirements: SOAP has built-in security standards like WS-Security for message encryption, integrity, and authentication. This makes it suitable for applications in sectors such as government, healthcare, or defense, where data confidentiality and security are paramount. For example, SOAP is used in systems where encrypted communication is needed for the transmission of sensitive data.

Advantages of SOAP

  • High Security: SOAP supports WS-Security, which includes features like encryption, authentication, and message integrity, making it ideal for sensitive data transmission.
  • Reliability: SOAP supports WS-ReliableMessaging, ensuring that messages are delivered reliably, even in the event of network failure.
  • Extensibility: SOAP is highly extensible, allowing developers to build additional features such as transaction management, security, and messaging patterns.
  • Error Handling: SOAP has a built-in error-handling mechanism through the <fault> element, making it easier to identify and resolve issues in communication.
  • Formal Contracts: SOAP services are often described using WSDL (Web Services Description Language), which defines the service's structure and operations, ensuring that both the client and server understand the contract.

Disadvantages of SOAP

  • Complexity: SOAP messages are verbose due to their XML-based format, making them more complex and harder to work with compared to simpler protocols like REST.
  • Performance: The XML format adds overhead, making SOAP less efficient than other protocols, especially when large volumes of data need to be transferred.
  • Limited Flexibility: SOAP is rigid in its structure and requires developers to adhere to its strict rules, making it less flexible compared to REST, which is more lightweight and adaptable.

Comparing SOAP with REST

To better understand the differences between SOAP and REST, here is a quick comparison in a table format:

| Feature | SOAP | REST |
|---|---|---|
| Protocol vs. Style | SOAP is a protocol with strict rules | REST is an architectural style, not a protocol |
| Data Format | XML | Typically JSON (but can also be XML) |
| Security | Built-in security (WS-Security) | Relies on HTTPS for security |
| Error Handling | Detailed error handling with the <fault> element | Custom error messages via HTTP status codes |
| Performance | Slower due to XML overhead | Faster and more efficient with JSON |
| Stateful/Stateless | Can be stateful or stateless | Stateless by design |
| Ease of Use | More complex and harder to implement | Simpler to implement and easier to use |
| Use Case | Enterprise systems, financial transactions, healthcare | Web and mobile applications, lightweight services |

Demo Example: SOAP Request for Weather Service

<?php
    // WSDL for the public weather service used in this demo
    $wsdl = "http://www.webserviceX.NET/WeatherService.asmx?WSDL";

    // Parameters for the GetWeather operation
    $params = array(
        'CityName' => 'Ho Chi Minh',
        'CountryName' => 'Viet Nam'
    );

    try {
        // Creating the client downloads and parses the WSDL, so it can also throw a SoapFault
        $client = new SoapClient($wsdl);

        // Call the remote GetWeather operation with the parameters above
        $response = $client->__soapCall('GetWeather', array($params));

        echo "Weather Information: ";
        var_dump($response);
    } catch (SoapFault $e) {
        echo "Error: " . $e->getMessage();
    }
?>

 

Conclusion

SOAP remains a powerful option for applications that require robust security, reliability, and compliance with strict standards. Its use in industries such as finance, healthcare, and government proves its importance in scenarios where data integrity, encryption, and transaction management are essential.

 


View More
TECH

December 4, 2024

Some tips for jQuery performance improvement

jQuery is a popular JavaScript library that developers often use for client-side development. Improving performance when working with jQuery involves understanding its best practices and optimizing your code for efficiency. Here are some tips, along with sample code snippets to illustrate them:

  1. Cache jQuery Objects:

    Instead of repeatedly querying the DOM for the same elements, cache them in variables.

    For example:

    $('.myElement').css('color', 'red');

    $('.myElement').addClass('highlight');

    Should be changed to:

    var $myElements = $('.myElement');

    $myElements.css('color', 'red');

    $myElements.addClass('highlight');

  2. Using Chaining

    Chaining lets you call several methods on the same jQuery object in a single statement. This reuses the existing jQuery object instead of querying the DOM again for each call, which improves performance.

    For example:

    $('#myContents').addClass('active');

    $('#myContents').css('border', '1px solid');

    $('#myContents').css('background-color', 'red');

    Should be changed to:

    $('#myContents').addClass('active').css('border', '1px solid').css('background-color', 'red');

  3. Use Efficient Selectors

    jQuery selectors can be slow, especially complex ones. Use efficient selectors like IDs or classes with a tag name.

    For example:

    $('ul li[data-category="books"]:first-child');

    Should be changed to:

    $('#books-list li:first-child');
  4. Event Delegation

    Use event delegation for handling events on dynamically added elements. Attach event listeners to a parent container rather than individual elements.

    For example:

    $('.list-item').click(function() {
    // Handle click event
    });

    Should be changed to:

    $('.list-container').on('click', '.list-item', function() {
    // Handle click event
    });
  5. Use .on() instead of Shortcut Methods

    .on() is more versatile and performs better than shortcut methods like .click(), .hover(), etc., especially when binding multiple events.

    For example:

    $('.button').click(function() {
    // Click handler
    });

    Should be changed to:

    $('.button').on('click', function() {
    // Click handler
    });
  6. Use .prop() for Boolean Attributes

    When dealing with boolean attributes like checked, use .prop() instead of .attr() for better performance.

    For example:

    $('input[type="checkbox"]').attr('checked', true);

    Should be changed to:

    $('input[type="checkbox"]').prop('checked', true);
  7. Minimize DOM Access in Loops

    If you're iterating over a collection of elements, cache your selections outside the loop to avoid repeated DOM queries.

    For example:

    $('.list-item').each(function() {
    $(this).addClass('processed');
    });

    Should be changed to:

    var $listItems = $('.list-item');
    $listItems.each(function() {
    $(this).addClass('processed');
    });

    By following these tips, you can significantly improve the performance of your jQuery code, making it faster and more efficient. Hope this helps!

    References:

    https://jquery.com/

    https://stackoverflow.com/questions/30672695/how-to-optimize-jquery-performance-in-general


    Image source: https://www.freepik.com/free-photo/html-css-collage-concept-with-person_36295465.htm

View More
TECH

December 4, 2024

Mocking Service Worker: API mocking library for Front-end development

A Mocking Service is a tool or technique used in front-end development to simulate back-end services during the development or testing phases. Instead of relying on actual APIs, a mocking service mimics the behavior of a real server by providing predefined responses to API requests. It allows front-end developers to work independently from back-end development. Even if the back-end API is not ready, front-end teams can continue building and testing their features using mock data.

View More
TECH

December 4, 2024

CakePHP 4: How to Create a Token-Based Login Function.

In the article [CakePHP 4: How to Create a Login Function], I showed how to create a login function for a web interface by combining Session and Cookie authentication. In this article, I will show you how to create a token-based login and authentication function.

The steps to install the Authentication 2.0 plugin, create a controller, model, etc., are covered in the article [CakePHP 4: How to Create a Login Function]. If you haven't read it yet, you can refer to it at https://isb-vietnam.net/cakephp-4-how-to-create-a-login-function/.

1, Understand Token-Based Authentication.

To understand how the login function works, you can see the diagram below:

In a token-based application, the server creates a JWT and sends it back to the client when the user logs in. On subsequent requests, the client includes the JWT in a request header (typically the Authorization header), and the server checks whether the JWT is valid before responding.

2, Implement JWT in CakePHP 4.

By default, the JwtAuthenticator uses the HS256 symmetric key algorithm and the value of Cake\Utility\Security::salt() as the encryption key. For enhanced security, you can instead use the RS256 asymmetric key algorithm.

Create encryption key.

# generate private key

openssl genrsa -out config/jwt.key 1024

# generate public key

openssl rsa -in config/jwt.key -outform PEM -pubout -out config/jwt.pem

The jwt.key file is the private key and should be kept safe. The jwt.pem file is the public key. This file should be used when you need to verify tokens created by external applications, e.g., mobile apps.

Implement.

In src/Application.php, change the source code to identify the user based on the subject of the token by using the JwtSubject identifier, and configure the authenticator to use the public key for token verification.

public function getAuthenticationService(ServerRequestInterface $request): AuthenticationServiceInterface
{
    $service = new AuthenticationService([
        'unauthenticatedRedirect' => Router::url('/login'),
        'queryParam' => 'redirect',
    ]);

    //…

    $service->loadIdentifier('Authentication.JwtSubject');
    $service->loadAuthenticator('Authentication.Jwt', [
        'header' => 'Authorization',
        'secretKey' => file_get_contents(CONFIG . 'jwt.pem'),
        'algorithm' => 'RS256',
        'returnPayload' => false,
    ]);

    return $service;
}

In UsersController.php, create a login function as below.

public function login()
{
    // Assumes `use Firebase\JWT\JWT;` is declared at the top of the controller
    if ($this->request->is('post')) {
        $result = $this->Authentication->getResult();
        if ($result->isValid()) {
            $privateKey = file_get_contents(CONFIG . 'jwt.key');
            $user = $result->getData();
            $payload = [
                'iss' => 'myapp',
                'sub' => $user->id,
                'exp' => time() + 3600, // 1 hour
            ];
            $json = [
                'token' => JWT::encode($payload, $privateKey, 'RS256'),
            ];
        } else {
            $this->response = $this->response->withStatus(401);
            $json = [
                'message' => __('The Username or Password is Incorrect'),
            ];
        }

        $this->RequestHandler->renderAs($this, 'json');
        // Responses are immutable, so keep the instance returned by withType()
        $this->response = $this->response->withType('application/json');

        $this->set(['json' => $json]);
        $this->viewBuilder()->setOption('serialize', 'json');
    } else {
        throw new NotFoundException();
    }
}

Testing.

We create APIs to verify the token-based login function with the rules as shown in the table below:

| Title | Endpoint | Remark |
|---|---|---|
| Retrieve All Articles (GET) | /api/articles.json | Can be used by all users. Users can see information about articles and the total number of likes for each article. |
| Create an Article (POST) | /api/articles.json | Can only be used by authenticated users. |

 

Perform login testing using Postman. In case of a successful login, a token will be returned.

 

Call the Retrieve All Articles API (GET), which can be used by all users.

Call the Create an Article API (POST), which can only be used by authenticated users.

The request succeeds when the token is attached in the header.

The request fails when the token is not attached in the header.
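
For reference, a request that attaches the token looks roughly like the following; the host is illustrative, and the token value is the one returned by the login endpoint:

GET /api/articles.json HTTP/1.1
Host: localhost:8765
Accept: application/json
Authorization: Bearer <JWT returned by the login endpoint>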

You can find the complete source code at: https://github.com/ivc-phampbt/cakephp-authentication

Conclusion

In this article, I introduced a new login method called Token-Based Authentication and how web applications authenticate using tokens. I hope this article provides knowledge to help you build token-based login features more easily.


View More
TECH

December 1, 2024

Web Programming Series - Cypress - E2E Testing Tool

In the web development field, delivering a bug-free user experience is critical.
So testing has become an integral part of the development lifecycle, ensuring the stability and reliability of applications before releases.
One tool that developers and testers frequently rely on is Cypress.
In this blog, we’ll explore what Cypress is, why it stands out in the testing landscape, and how to get started with this tool.
 

View More
TECH

December 1, 2024

What is Supabase? How to Automatically Sync Data Across Systems Using Supabase Triggers

Supabase Overview:

Supabase is an open-source Backend-as-a-Service (BaaS) platform that aims to simplify the development process for modern applications. It leverages PostgreSQL as its core database system, which is known for its scalability, flexibility, and feature richness. Supabase offers an easy-to-use interface for developers to quickly build applications without the overhead of managing infrastructure. It is gaining traction worldwide, with a notable presence in markets like North America, Europe, Asia, and South America.

As an open-source alternative to Firebase, Supabase provides features such as authentication, real-time data syncing, file storage, and cloud functions. What makes Supabase stand out is its use of PostgreSQL, allowing developers to access the full power of a relational database while also benefiting from serverless capabilities.

Key Features of Supabase:

  1. Database: Fully managed PostgreSQL database with an API for fast development.
  2. Authentication: Secure user management with support for multiple providers.
  3. Realtime: Built-in real-time updates using WebSockets.
  4. Edge Functions: Serverless functions that run close to the user for low-latency performance.
  5. File Storage: Scalable storage solution for managing user files such as images and documents.
  6. Extensibility: Easy integration with third-party libraries and services.

In summary, Supabase is a powerful solution for developers looking for a fully managed, open-source backend platform that combines the reliability of PostgreSQL with modern tools to simplify application development.

PostgreSQL in Supabase

PostgreSQL powers Supabase and serves as the backbone of the database services. Unlike other BaaS platforms, Supabase allows you to interact directly with the PostgreSQL database, giving developers complete control over their data while benefiting from PostgreSQL's advanced features.

Tools and Features for PostgreSQL in Supabase:

  • Auto-Generated APIs: Supabase automatically generates RESTful APIs for your database tables, views, and functions, which eliminates the need for manual backend code.
  • Realtime Engine: WebSocket support for streaming changes to your database in real time.
  • Authentication Integration: Integrates PostgreSQL with Supabase's authentication service to manage access control securely.
  • Dashboard and SQL Editor: A user-friendly interface to manage the database, execute queries, and monitor performance.
  • Storage and Edge Functions: Extend PostgreSQL’s functionality with file storage and serverless edge functions.

By providing these tools, Supabase simplifies working with PostgreSQL while retaining all the power and flexibility of the underlying database.

How to use Supabase Database

Supabase Database is a powerful tool that allows you to build and manage databases with ease. Here's an enhanced step-by-step guide to help you get started and implement triggers and functions for advanced functionality.

1. Create a New Project in Supabase:

    • Start by creating a new project on Supabase

    • Add the necessary details, such as the project name and region. Please follow the images below.

form_create_new_project

    • When a project is created successfully, it will display essential information, including security credentials, configuration details, and access guidelines, to ensure proper setup and secure usage. 

create_prj_success_supabase.png

2. Create Database Tables: 

    • To create the users and orders tables in Supabase, follow the steps below:
    • Example queries:
      • Create the Users Table: Use the following SQL query to create a users table with essential columns such as user_id, user_name, and other relevant details.
        • user_id:  A primary key that is automatically generated for each user.
        • user_name: The name of the user (required).
        • email: The email address of the user, which must be unique.
        • age: The age of the user (optional).
        • timestamps: The created_at and updated_at fields automatically store the current UTC time for tracking record creation and updates.

queries_create_user_table.png
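
The exact statement is in the screenshot above; a sketch that matches the columns described in this list might look like the following (details such as key types and constraint names may differ from the screenshot):

create table users (
  user_id uuid primary key default gen_random_uuid(),  -- auto-generated primary key
  user_name text not null,                             -- required
  email text unique,                                    -- must be unique
  age int,                                              -- optional
  created_at timestamp default timezone('utc', now()), -- record creation time (UTC)
  updated_at timestamp default timezone('utc', now())  -- last update time (UTC)
);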

      • Create the Orders Table: Next, use this SQL query to create an orders table that will store information about each order, including a foreign key linking it to the users table.
        • order_id: A primary key automatically generated for each order.
        • user_id: A foreign key referencing the user_id from the users table, establishing a relationship between the two tables. The ON DELETE CASCADE ensures that when a user is deleted, their associated orders are also deleted.
        • order_date: The date and time when the order was placed, stored in UTC for consistency.
        • total_price: The total price of the order, a required field ensuring no order is recorded without a price.
        • status: The current status of the order, defaulting to "pending" if not explicitly specified.
        • timestamps: The created_at and updated_at fields automatically store the current UTC time for each record, ensuring accurate tracking of record creation and updates.

queries_create_orders_table.png
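
Again, the screenshot holds the actual query; an equivalent sketch based on the description above could be:

create table orders (
  order_id uuid primary key default gen_random_uuid(),                 -- auto-generated primary key
  user_id uuid not null references users (user_id) on delete cascade,  -- deleting a user also deletes their orders
  order_date timestamp default timezone('utc', now()),                 -- when the order was placed (UTC)
  total_price numeric not null,                                        -- required
  status text default 'pending',                                       -- defaults to "pending"
  created_at timestamp default timezone('utc', now()),
  updated_at timestamp default timezone('utc', now())
);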

3. Insert sample data: Once the tables are created, you can insert sample data into the users and orders tables.

insert_data_user_table

insert_data_order_table
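
The screenshots above contain the actual inserts; representative statements with made-up values could look like this:

insert into users (user_name, email, age)
values
  ('Alice', 'alice@example.com', 30),
  ('Bob', 'bob@example.com', 25);

insert into orders (user_id, total_price, status)
values (
  (select user_id from users where email = 'alice@example.com'),
  49.90,
  'pending'
);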

4. Verification: After inserting data, you can query the tables to verify that the records have been added successfully. Supabase provides a powerful Schemas Visualizer and Table Editor to assist developers in managing and visualizing their database schema and structure without the need to manually write complex SQL queries. You can also use these tools to preview the data. 


schemas_visualizer_supabase

table_editor_supabase

How to Automatically Sync Data Using Supabase Triggers

In serverless environments like Firebase, you often handle business logic on the client side. While this approach works, it can lead to complex client-side code. Supabase, on the other hand, allows you to implement server-side business logic directly in the database using Triggers and Functions. These features enable automatic data synchronization across systems without changing client-side code.

Scenario: Adding User Name to Orders Table

Imagine you have a relational database with users and orders tables, and you want to add the user's name to each order. The goal is to automatically populate the user_name column in the orders table whenever a new order is placed, without requiring any changes to the client-side code.

Example User Story:

Title: View User Name Data in Order Tables
As an operator managing the project,
I want to view the user name data in the order tables,
so that I can easily query and analyze data related to orders and their associated users.

Acceptance Criteria:

  • The user_name column is included in the orders table.
  • The displayed user name matches the user who placed the order.
  • User names are fetched dynamically via a relationship with the users table.
  • Operators can filter and sort orders by user name.
  • The feature should not degrade performance.

Technical Notes:

  • Add a new user_name column to the  orders table.
  • Use a foreign key relationship between orders and  users to populate the user_name field.

Priority: Medium

This feature enhances usability for operators but may not directly impact end-user experience.

Dependencies

  • Database schema adjustments for the "Orders" table.

Definition of Done (DoD)

  • The "Orders" table includes a "User Name" field populated correctly.
  • Operators can filter and query data by user name.
  • All unit, integration,... tests pass.
  • Documentation updated for the new feature.

Steps for implementing Supabase triggers and functions to fulfill the user story above:

1. Add the user_name column to the orders table: 

You can add the user_name column using the following SQL, with an empty string as the default value.

add_user_name_column
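
A statement matching that description would be roughly:

alter table orders
  add column user_name text default '';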

2. Populate the user_name column:

Populate the user_name column by fetching the corresponding name from the users table. You can use an update query:

populate_user_name
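
The backfill query in the image is roughly equivalent to:

update orders o
set user_name = u.user_name
from users u
where u.user_id = o.user_id;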

3. Create a Database Function:

Create a function to insert the user's name into the orders table when a new order is added. This can be done in the Functions section of Supabase.

In the Database Management section, please select Functions and then create a new function.

create_new_functions_supabase

Make sure to complete all the fields in the form to create a new function. This trigger function updates the user_name column in the orders table with the corresponding name from the users table, based on the user_id of the newly inserted order. It ensures that each new order record has the correct user_name associated with it. A sketch of such a function follows the screenshot below.

create_new_functions_supabase_by_form
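
As a sketch of what goes into that form, the trigger function could look like the following, assuming a row-level BEFORE INSERT trigger (the function name set_order_user_name is illustrative; with an AFTER trigger you would run an UPDATE on the inserted row instead of modifying NEW):

create or replace function set_order_user_name()
returns trigger
language plpgsql
as $$
begin
  -- Look up the name of the user who placed this order and copy it onto the new row
  new.user_name := (
    select u.user_name
    from users u
    where u.user_id = new.user_id
  );
  return new;
end;
$$;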

4. Create a Trigger to Call the Function:

Ensure that you complete all fields in the form accurately when setting up a new trigger. Note that the trigger name must not contain spaces. Configuration details for the Conditions to Fire Trigger section:

  • Table: This is the specific table that the trigger will monitor for changes. It is important to note that a trigger can only be linked to one table at a time. For this task, select the orders table.

  • Events: Specify the type of event that will activate the trigger. For this scenario, choose the event that corresponds to inserting new records into the orders table.

  • Trigger Type:

    • AFTER Event: The trigger will activate after the operation has been completed. This is useful for scenarios where you need to ensure that the primary operation has been executed before the trigger runs.
    • BEFORE Event: The trigger fires before the operation is attempted. This can be useful for pre-validation or modifying data before the main operation occurs.
  • Operation: The specific operation being monitored in this context is the insertion of new records into the orders table.

  • Orientation:

    • Row-Level: The trigger will activate once for each row that is processed.
    • Statement-Level: The trigger will activate once per statement, regardless of the number of rows affected.

add_new_trigger_1

add_new_trigger_2

Trigger successfully created, as shown in the image below.

trigger_created
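
If you prefer SQL to the dashboard form, the same trigger could be declared roughly as follows, matching the function sketched earlier (the trigger name is illustrative):

create trigger set_order_user_name_trigger
before insert on orders
for each row
execute function set_order_user_name();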

 

5. Testing

Insert a new record into the orders table and check if the user_name column is populated automatically. 

insert_new_record_into_orders_tb

To check if the user_name column is populated after running an insert statement, you can use the following SQL code. This combines the two queries: one to insert a record and another to verify the contents of the last inserted record.

query_orders_last_record
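
Those two screenshots correspond to something like the following: insert a new order, then read back the most recent row to confirm that user_name was filled in automatically (values are illustrative):

insert into orders (user_id, total_price, status)
values (
  (select user_id from users where email = 'bob@example.com'),
  19.99,
  'pending'
);

select order_id, user_id, user_name, total_price, status
from orders
order by created_at desc
limit 1;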

The result of the select statement showed that the user_name column is populated automatically based on the user_id.

final_result

Finally, the trigger functions as expected, ensuring that the user story is successfully implemented and completed within the development process.

Conclusion

Supabase provides a powerful and flexible platform for building applications with real-time data synchronization. In this post, we discussed how to use Supabase triggers to automate data updates across systems, enhancing your application's responsiveness and reducing the need for manual data management. We demonstrated how to set up and test triggers to ensure they work as expected, so your data remains consistent and current. By implementing Supabase triggers, developers can focus on building features rather than worrying about data synchronization, leading to a more seamless and efficient development process. This solution makes it easier to manage complex workflows, ensuring that your application scales smoothly and operates efficiently.

Reference

https://supabase.com/

 

View More
TECH

November 28, 2024

Calling REST API From SQL Server Stored Procedure

Besides the usual way of calling an API from a website or application, we can also call an API from a SQL Server stored procedure. In this post, I would like to introduce how to do that in a few steps.

SQL Server doesn't have built-in functionality to directly make HTTP requests, so you'll typically use SQL Server's sp_OACreate and related procedures to interact with COM objects for HTTP requests.

Example using sp_OACreate
Here's a simplified example of how you might use sp_OACreate to call an API from a stored procedure. Please note that this approach relies on the SQL Server's ability to interact with COM objects and may be limited or require additional configuration.

Steps:

1. Enable OLE Automation Procedures:

Before using sp_OACreate, you need to make sure that OLE Automation Procedures are enabled on your SQL Server instance.

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'ole automation procedures', 1;

RECONFIGURE;

2. Create the Stored Procedure

Here's an example stored procedure that performs a simple HTTP GET request to an API endpoint.

CREATE PROCEDURE CallApiExample
AS
BEGIN
DECLARE @object INT;
DECLARE @responseText VARCHAR(5000); -- Shouldn't use VARCHAR(MAX)
DECLARE @url VARCHAR(255) = 'https://northwind.vercel.app/api/categories'; -- Replace with your API URL
DECLARE @status INT;
 
-- Create the XMLHTTP object
EXEC sp_OACreate 'MSXML2.XMLHTTP', @object OUTPUT;
 
-- Open the HTTP connection
EXEC sp_OAMethod @object, 'open', NULL, 'GET', @url, 'false';
 
-- Send the request
EXEC sp_OAMethod @object, 'send';
 
-- Get the response text
EXEC sp_OAMethod @object, 'responseText', @responseText OUTPUT;
 
-- Check the status
EXEC sp_OAMethod @object, 'status', @status OUTPUT;
 
-- Process the response if data was returned
IF((SELECT @ResponseText) <> '')
BEGIN
DECLARE @json NVARCHAR(MAX) = (Select @ResponseText)
PRINT 'Response: ' + @json;
SELECT *
FROM OPENJSON(@json)
  WITH (
id INTEGER '$.id',
description NVARCHAR(MAX) '$.description',
name NVARCHAR(MAX) '$.name'
   );
END
ELSE
BEGIN
DECLARE @ErrorMsg NVARCHAR(30) = 'No data found.';
PRINT @ErrorMsg;
END
 
-- Clean up
EXEC sp_OADestroy @object;
END;


3. Execute the Stored Procedure

Run the stored procedure to see the output:

EXEC CallApiExample;

Result from API:

Result after executing the stored procedure:

Detailed Explanation:
sp_OACreate: This procedure creates an instance of a COM object. Here, 'MSXML2.XMLHTTP' is used to create an object that can make HTTP requests.

sp_OAMethod: This procedure calls methods on the COM object. In this example:

'open' sets up the request method and URL.

'send' sends the HTTP request.

'responseText' retrieves the response body.

'status' retrieves the HTTP status code.

sp_OADestroy: This procedure cleans up and releases the COM object.

 

Considerations:

- Security: Using OLE Automation Procedures can pose security risks. Ensure your SQL Server instance is properly secured and consider using more secure methods if available.

- Error Handling: The example doesn't include detailed error handling. In production code, you should handle potential errors from HTTP requests and COM operations.

- Performance: Making HTTP requests synchronously from SQL Server can impact performance and scalability.

- SQL Server Versions: OLE Automation Procedures are supported in many versions of SQL Server but may be deprecated or not available in future versions. So, please check your version's documentation for specifics.

References:

https://learn.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/ole-automation-stored-procedures-transact-sql?view=sql-server-ver16
https://stackoverflow.com/questions/22067593/calling-an-api-from-sql-server-stored-procedure
https://blog.dreamfactory.com/stored-procedures-data-integration-resty-performance/
https://mssqlserver.dev/making-rest-api-call-from-sql-server
Image source: https://www.freepik.com/free-photo/application-programming-interface-hologram_18098426.htm

 

View More
TECH

November 28, 2024

Exploring TypeScript: The Future Language for JavaScript Programming

In the dynamic world of web development, JavaScript has long been the go-to language for building interactive and dynamic web applications. However, as applications grow in complexity, managing large codebases and ensuring code quality with plain JavaScript can become challenging. Enter TypeScript, a powerful superset of JavaScript that addresses these challenges by adding static type-checking and other robust features.

What is TypeScript?

TypeScript is an open-source programming language developed by Microsoft. It builds on JavaScript by introducing static typing, classes, and interfaces, among other features, making it easier to write and maintain large-scale applications. Essentially, TypeScript is JavaScript with additional tools to catch errors early and enhance the development process.

Why Use TypeScript?

  • Early Error Detection: TypeScript's static type system allows developers to catch errors at compile time rather than at runtime. This means you can identify and fix issues before your code even runs, significantly reducing the number of bugs.
  • Enhanced Maintainability: As projects grow, maintaining code can become cumbersome. TypeScript's type annotations and interfaces make the code more readable and self-documenting, which simplifies maintenance and collaboration.
  • Improved Tooling: TypeScript provides powerful tools such as IntelliSense, which offers intelligent code completion, parameter info, and documentation on the fly. This improves developer productivity and reduces the likelihood of errors.
  • Interoperability with JavaScript: TypeScript is designed to be fully compatible with existing JavaScript codebases. You can gradually introduce TypeScript into your project, converting files one at a time without disrupting the entire codebase.

Basic Structure of TypeScript

TypeScript syntax is very similar to JavaScript, with additional features for static typing and more. Here are some key elements:

  • Type Annotations: Define variable types to catch errors early.
let isDone: boolean = false;
let total: number = 10;
let name: string = "TypeScript";
  • Interfaces: Define complex types and enforce structure.
interface Person {
    name: string; age: number;
}
let user: Person = {
    name: "John", age: 25
};
  • Classes: Support object-oriented programming with features like inheritance and encapsulation.
class Greeter {
    greeting: string;
    constructor(message: string) {
        this.greeting = message;
    }
    greet() {
        return "Hello, " + this.greeting;
    }
}
 
let greeter = new Greeter("world");
console.log(greeter.greet()); // => Hello, world
  • Generics: Write reusable and flexible components.
function identity<T>(arg: T): T {
    return arg;
}
let output = identity<string>("myString");
let numberOutput = identity<number>(100);

Getting Started with TypeScript

To start using TypeScript, you need to install the TypeScript compiler (tsc) via npm (Node Package Manager). Open your terminal and run the following command:

npm install -g typescript

Once installed, you can compile TypeScript files into JavaScript using the tsc command:

tsc file.ts

This will generate a corresponding file.js that you can run in any browser or Node.js environment.

Key Features of TypeScript

  • Static Typing: TypeScript allows you to define types for variables, function parameters, and return values. This helps prevent type-related errors and improves code clarity.
  • Type Inference: Even without explicit type annotations, TypeScript can often infer the type of a variable based on its value or how it is used.
  • Type Declarations: TypeScript allows you to create type definitions for libraries or frameworks that are not written in TypeScript, enabling better integration and development experience.
  • ES6 and Beyond: TypeScript supports many modern JavaScript features, such as async/await, destructuring, and template literals, even if they are not yet available in the target JavaScript environment.

Conclusion

TypeScript not only improves code quality and maintainability but also enhances developer productivity through better tooling and early error detection. Its compatibility with JavaScript allows for a smooth transition and incremental adoption. As web applications continue to grow in complexity, TypeScript emerges as a powerful ally for developers aiming to write clean, reliable, and scalable code.

References:
https://www.typescriptlang.org/docs
https://smachstack.com/how-to-work-ts ( Image source )

View More
TECH

November 28, 2024

Some tips to improve performance of LINQ in C#

Improving performance with LINQ in C# is essential, especially when working with large datasets. LINQ provides a powerful and expressive way to query data, but it can introduce performance overhead if not used efficiently. Below are some tips and tricks to improve LINQ performance, along with sample code:

1. Avoid repeated Enumeration

When you execute a LINQ query, it can be enumerated multiple times, leading to unnecessary performance hits.

You can improve performance by materializing the result of a query (e.g., using ToList(), ToArray(), or ToDictionary()).

var data = GetData(); // Some large collection

// Not good: Repeatedly enumerating the sequence
var count = data.Where(x => x.IsActive).Count();
var sum = data.Where(x => x.IsActive).Sum(x => x.Value);

// Good: Materializing the result to avoid repeated enumeration
var activeData = data.Where(x => x.IsActive).ToList();
var count = activeData.Count;
var sum = activeData.Sum(x => x.Value);

2. Use Any()  instead of Count() > 0

If you're only checking whether a collection contains any elements, using Any() is faster than Count() > 0.

Any() stops as soon as it finds the first matching element, whereas Count() counts all elements before returning a result.

// Not good: Counting all elements
if (data.Where(x => x.IsActive).Count() > 0) { ... }

// Good: Checking for any element
if (data.Where(x => x.IsActive).Any()) { ... }

3. Use FirstOrDefault() and SingleOrDefault()

When you expect only one element or none, use FirstOrDefault() or SingleOrDefault() instead of Where() combined with First() or Single().

These methods are optimized for single element retrieval.

// Not good: Using Where followed by FirstOrDefault
var item = data.Where(x => x.Id == 1).FirstOrDefault();

// Good: Using FirstOrDefault directly
var item = data.FirstOrDefault(x => x.Id == 1);

4. Use OrderBy and ThenBy efficiently

If you need to sort data, make sure that you're sorting only what is necessary, as sorting can be an expensive operation. Additionally, try to minimize the number of sorting operations.

// Not good: Multiple OrderBy statements
var sortedData = data.OrderBy(x => x.Age).OrderBy(x => x.Name);

// Good: Using OrderBy and ThenBy together
var sortedData = data.OrderBy(x => x.Age).ThenBy(x => x.Name);

5. Optimize GroupBy

The GroupBy operator can be expensive, especially when grouping large collections. If you only need an aggregate per group (such as a count or the first/last element), project just that aggregate instead of carrying the full groups around, and materialize the result once (for example into a dictionary) if you will look it up repeatedly.

// Not good: GroupBy followed by a complex operation
var grouped = data.GroupBy(x => x.Category)
                  .Select(g => new { Category = g.Key, Count = g.Count() })
                  .ToList();

// Good: Project only the per-group count and materialize it once as a dictionary for fast lookups
var counts = data.GroupBy(x => x.Category)
                 .Select(g => new { Category = g.Key, Count = g.Count() })
                 .ToDictionary(g => g.Category, g => g.Count);

6. Prefer IEnumerable<T> over List<T> when possible

LINQ queries work best with IEnumerable<T> because it represents a lazy sequence.

Converting it to a List<T> immediately could result in unnecessary memory usage if not required.

// Not good: Convert to List too early
var result = data.Where(x => x.IsActive).ToList();

// Good: Keep the query lazy until the results are actually needed
var result = data.Where(x => x.IsActive); // still IEnumerable<T>; nothing executes until it is enumerated

 

Hopefully, with these tips, you can significantly improve the performance of your LINQ queries in C#.

References:

https://www.bytehide.com/blog/linq-performance-optimization-csharp

Image source: https://www.freepik.com/free-photo/top-view-laptop-table-glowing-screen-dark_160644251.htm 

View More
TECH

November 28, 2024

Flexbox in CSS: A Flexible Layout Solution

In modern web design, organizing and aligning elements on a page is crucial. One of the powerful tools that helps achieve this is Flexbox (Flexible Box Layout). Flexbox allows you to create flexible and easily adjustable layouts, providing an optimal user experience.

View More