Mezmo Platform Workshop

In this workshop, you will follow a hands-on guide showing how to take an open source Java application (Spring Boot's PetClinic) and enable collection of its log output so it can be aggregated and managed on the Mezmo Platform. Along the way, you will define alert conditions and use data transformation and reduction to demonstrate how to control the costs associated with collecting observability data.

1 - Getting Started

Start the workshop here.

Overview

In this workshop, we will start by taking an existing Java Application (Spring PetClinic) and perform the following:

  • Download, compile, package, and run the Spring PetClinic application
  • Enable JSON logging via configuration
  • Stand up an OpenTelemetry Collector to collect the logs output by the application and send them to the Mezmo platform
  • Configure an Observability Pipeline on Mezmo to filter out data we don’t want to collect
  • Define alerts for conditions we want to be notified about

Prerequisites

To get started, please install the required third-party software.

  1. This workshop uses the PetClinic Java app from the Spring project, which requires JDK 11 or later. The JDK can be downloaded from https://www.oracle.com/java/technologies/downloads/. Download and install it locally, then verify your Java installation by running the following:

    java -version
    java version "14.0.2" 2020-07-14
    Java(TM) SE Runtime Environment (build 14.0.2+12-46)
    Java HotSpot(TM) 64-Bit Server VM (build 14.0.2+12-46, mixed mode, sharing)
    

    In this example, we are running JDK 14.0.2.

  2. Docker Desktop is also needed. It can be downloaded from https://www.docker.com/. Verify your Docker installation by running the following:

    docker --version
    Docker version 20.10.17, build 100c701
    

Reference Architecture

The reference architecture that will be established looks like this:

PetClinic Exercise

2 - Install the OpenTelemetry Collector

Install an OpenTelemetry Collector that will be used for collecting logs from the PetClinic app and forwarding to Mezmo.

Getting Started

The OpenTelemetry Collector is the core component for instrumenting infrastructure and applications. Its role is to collect and send:

  • Infrastructure metrics (disk, cpu, memory, etc)
  • Application Performance Monitoring (APM) traces
  • Host and application logs

In this workshop, we will be forwarding logs from the PetClinic application to Mezmo.

  1. To get started, download the latest release (version 0.71.0 or later is required) of the OpenTelemetry Contrib Collector. It can be found at:

    https://github.com/open-telemetry/opentelemetry-collector-releases/releases/latest

  2. If your download has an installer, run it. For this example, the Darwin tarball (otelcol-contrib_0.71.0_darwin_arm64.tar.gz) will be extracted into a new directory under the user's home directory:

    mkdir $HOME/otelcol
    cd $HOME/otelcol
    tar zxvf <location of downloaded file>/otelcol-contrib_0.71.0_darwin_arm64.tar.gz
    
  3. With the OpenTelemetry Collector installed, verify the contents of the directory:

    ls -l
    
    total 389112
    -rw-r--r--@   1 bmeyer  staff      11357 Feb  9 00:46 LICENSE
    -rw-r--r--@   1 bmeyer  staff        770 Feb  9 00:46 README.md
    -rwxr-xr-x@   1 bmeyer  staff  211532914 Feb  9 01:04 otelcol-contrib*
    
  4. Create a file named config.yaml in the same directory as the otelcol-contrib binary (e.g., $HOME/otelcol/config.yaml). Add the following to the file:

    #######################################
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: "0.0.0.0:4317"
          http:
            endpoint: "0.0.0.0:4318"
    
    #######################################
    exporters:
      mezmo:
        ingest_url: "https://logs.mezmo.com/otel/ingest/rest"
        ingest_key: "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
        timeout: 2s
    
      logging:
        verbosity: normal
    
      file:
        path: /tmp/otelcol.json
    
    #######################################
    service:
      pipelines:
        logs:
          receivers: [ otlp ]
          exporters: [ mezmo,logging ]
    
  5. For macOS users: when running otelcol-contrib for the first time, you may get a security warning from Apple:

    Malicious Software Warning

    To resolve this, open System Preferences and select Security & Privacy. On the General tab, you should see a warning at the bottom stating that “otelcol-contrib” was blocked from use because it is not from an identified developer:

    Identified Developer Warning

    Click the Allow Anyway button.

  6. Run the OpenTelemetry Collector:

    ./otelcol-contrib --config config.yaml
    

    Again, for macOS users: if this is the first time running otelcol-contrib, you may get a popup message as before:

    Malicious Software Warning

    but this time you will be able to open it by clicking Open. After this, macOS will no longer prompt you with these warnings.

  7. Confirm the collector starts appropriately. You should see output similar to:

    2023-02-09T10:37:53.095-0600	info	service/telemetry.go:90	Setting up own telemetry...
    2023-02-09T10:37:53.095-0600	info	service/telemetry.go:116	Serving Prometheus metrics	{"address": ":8888", "level": "Basic"}
    2023-02-09T10:37:53.095-0600	info	exporter/exporter.go:286	Development component. May change in the future.	{"kind": "exporter", "data_type": "logs", "name": "logging"}
    2023-02-09T10:37:53.096-0600	info	service/service.go:140	Starting otelcol-contrib...	{"Version": "0.71.0", "NumCPU": 10}
    2023-02-09T10:37:53.096-0600	info	extensions/extensions.go:41	Starting extensions...
    2023-02-09T10:37:53.096-0600	warn	internal/warning.go:51	Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks	{"kind": "receiver", "name": "otlp", "data_type": "logs", "documentation": "https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"}
    2023-02-09T10:37:53.096-0600	info	otlpreceiver@v0.71.0/otlp.go:94	Starting GRPC server	{"kind": "receiver", "name": "otlp", "data_type": "logs", "endpoint": "0.0.0.0:4317"}
    2023-02-09T10:37:53.096-0600	warn	internal/warning.go:51	Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks	{"kind": "receiver", "name": "otlp", "data_type": "logs", "documentation": "https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"}
    2023-02-09T10:37:53.096-0600	info	otlpreceiver@v0.71.0/otlp.go:112	Starting HTTP server	{"kind": "receiver", "name": "otlp", "data_type": "logs", "endpoint": "0.0.0.0:4318"}
    2023-02-09T10:37:53.096-0600	info	service/service.go:157	Everything is ready. Begin running and processing data.
    

Test the Collector

In this section, we’ll test out the OTEL Collector to confirm it’s working as expected. To get started, we’ll create a sample log entry in JSON format.

  1. Create a file named samplelog.json and add this to the file:

    {
      "resourceLogs": [
        {
          "resource": {},
          "scopeLogs": [
            {
              "scope": {},
              "logRecords": [
                {
                  "observedTimeUnixNano": "1664830800000000000",
                  "body": {
                    "stringValue": "This is the sample log message."
                  },
                  "attributes": [
                    {
                      "key": "log.file.name",
                      "value": {
                          "stringValue": "access_log"
                      }
                    }
                  ],
                  "traceId": "",
                  "spanId": ""
                }
              ]
            }
          ]
        }
      ]
    }
    
  2. With the OTEL Collector running in a separate terminal, run this curl command from a new terminal:

    curl -vi http://localhost:4318/v1/logs -H "Content-Type: application/json" -d @samplelog.json
    
    *   Trying 127.0.0.1:4318...
    * Connected to localhost (127.0.0.1) port 4318 (#0)
    > POST /v1/logs HTTP/1.1
    > Host: localhost:4318
    > User-Agent: curl/7.85.0
    > Accept: */*
    > Content-Type: application/json
    > Content-Length: 615
    >
    * Mark bundle as not supporting multiuse
      < HTTP/1.1 200 OK
      HTTP/1.1 200 OK
      < Content-Type: application/json
      Content-Type: application/json
      < Date: Thu, 09 Feb 2023 16:35:39 GMT
      Date: Thu, 09 Feb 2023 16:35:39 GMT
      < Content-Length: 21
      Content-Length: 21
    
    <
    * Connection #0 to host localhost left intact
      {"partialSuccess":{}}
    
  3. Now take a look at the output of the OTEL Collector. There should be a new entry that looks similar to this:

    2023-02-09T10:37:55.411-0600	info	LogsExporter	{"kind": "exporter", "data_type": "logs", "name": "logging", "#logs": 1}
    2023-02-09T10:37:55.411-0600	info	ResourceLog #0
    Resource SchemaURL:
    ScopeLogs #0
    ScopeLogs SchemaURL:
    InstrumentationScope
    LogRecord #0
    ObservedTimestamp: 2022-10-03 21:00:00 +0000 UTC
    Timestamp: 1970-01-01 00:00:00 +0000 UTC
    SeverityText:
    SeverityNumber: Unspecified(0)
    Body: Str(This is the sample log message.)
    Attributes:
         -> log.file.name: Str(access_log)
    Trace ID:
    Span ID:
    Flags: 0
            {"kind": "exporter", "data_type": "logs", "name": "logging"}
    
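The manual test above can also be scripted. The following Python sketch (an illustration, not part of the workshop tooling) builds the same OTLP/JSON payload as samplelog.json and includes a helper to POST it to the collector's OTLP/HTTP logs endpoint; the URL assumes the collector is listening on localhost:4318 as configured earlier.

```python
import json
import urllib.request

def build_sample_log(message: str, file_name: str = "access_log",
                     observed_ns: int = 1664830800000000000) -> dict:
    """Build a minimal OTLP/JSON logs payload matching samplelog.json."""
    return {
        "resourceLogs": [{
            "resource": {},
            "scopeLogs": [{
                "scope": {},
                "logRecords": [{
                    "observedTimeUnixNano": str(observed_ns),
                    "body": {"stringValue": message},
                    "attributes": [{
                        "key": "log.file.name",
                        "value": {"stringValue": file_name},
                    }],
                    "traceId": "",
                    "spanId": "",
                }],
            }],
        }],
    }

def post_logs(payload: dict, url: str = "http://localhost:4318/v1/logs") -> bytes:
    """POST the payload to the collector's OTLP/HTTP logs endpoint."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()

payload = build_sample_log("This is the sample log message.")
print(json.dumps(payload)[:80], "...")
```

Calling post_logs(payload) while the collector is running should produce the same kind of LogsExporter output shown in step 3.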

Verify Connection to Mezmo

  1. Sign into your Mezmo account at https://app.mezmo.com.

  2. When you first sign into Mezmo, you will land on the Everything view. At the bottom of the page, in the Search field, enter app:OpenTelemetryExporter as the search string:

    App Search

    and press the Enter key. The results of the curl command (step 2 above) should appear in the search results on Mezmo:

    App Search

3 - Spring PetClinic Application

Introduce the PetClinic application, a Java application that will serve as the test bed for this workshop.

For this exercise, we will use the Spring PetClinic application. This is a very popular sample Java application built with the Spring Framework (Spring Boot).

Get Started

  1. To get started, clone the PetClinic repository starting from our home directory:

    cd $HOME 
    git clone https://github.com/spring-projects/spring-petclinic
    
  2. Change into the spring-petclinic directory:

    cd spring-petclinic
    
  3. Start a MySQL database for Pet Clinic to use:

    docker run -d -e MYSQL_USER=petclinic -e MYSQL_PASSWORD=petclinic -e MYSQL_ROOT_PASSWORD=root -e MYSQL_DATABASE=petclinic -p 3306:3306 docker.io/mysql:5.7.8
    
  4. Next, run the maven command to compile/build/package Pet Clinic:

    ./mvnw package -Dmaven.test.skip=true
    
  5. Once the compilation is complete, you can run the application with the following command:

    java -jar target/spring-petclinic-*.jar --spring.profiles.active=mysql
    
  6. You can verify that the application is running by visiting http://localhost:8080. Click around, generate errors, add visits, etc.

Enable Structured Logging

The PetClinic app produces standard Log4j-formatted log entries. While easily human-readable, these entries are much harder to parse by machine due to inconsistencies in the log format. Fortunately, it's easy to switch from human-readable logs to structured logs in JSON format.

  1. Edit pom.xml and insert the following dependencies at the end of the <dependencies>...</dependencies> section (around lines 108-109):

       <dependency>
         <groupId>org.springframework.boot</groupId>
         <artifactId>spring-boot-devtools</artifactId>
         <optional>true</optional>
       </dependency>
    
       <!-- logback -->
       <dependency>
         <groupId>ch.qos.logback.contrib</groupId>
         <artifactId>logback-json-classic</artifactId>
         <version>0.1.5</version>
       </dependency>
    
       <dependency>
         <groupId>ch.qos.logback.contrib</groupId>
         <artifactId>logback-jackson</artifactId>
         <version>0.1.5</version>
       </dependency>
    
       <dependency>
         <groupId>com.fasterxml.jackson.core</groupId>
         <artifactId>jackson-databind</artifactId>
         <version>2.14.0-rc1</version>
       </dependency>
    
       <dependency>
         <groupId>com.fasterxml.jackson.core</groupId>
         <artifactId>jackson-core</artifactId>
         <version>2.13.4</version>
       </dependency>
       <!-- end of logback -->
     </dependencies>
    

    A git diff will look like this:

    git diff
    
    diff --git a/pom.xml b/pom.xml
    index d29355c..eeec4dc 100644
    --- a/pom.xml
    +++ b/pom.xml
    @@ -106,6 +106,32 @@
           <artifactId>spring-boot-devtools</artifactId>
           <optional>true</optional>
         </dependency>
    +
    +    <!-- logback -->
    +    <dependency>
    +      <groupId>ch.qos.logback.contrib</groupId>
    +      <artifactId>logback-json-classic</artifactId>
    +      <version>0.1.5</version>
    +    </dependency>
    +
    +    <dependency>
    +      <groupId>ch.qos.logback.contrib</groupId>
    +      <artifactId>logback-jackson</artifactId>
    +      <version>0.1.5</version>
    +    </dependency>
    +
    +    <dependency>
    +      <groupId>com.fasterxml.jackson.core</groupId>
    +      <artifactId>jackson-databind</artifactId>
    +      <version>2.14.0-rc1</version>
    +    </dependency>
    +
    +    <dependency>
    +      <groupId>com.fasterxml.jackson.core</groupId>
    +      <artifactId>jackson-core</artifactId>
    +      <version>2.13.4</version>
    +    </dependency>
    +    <!-- end of logback -->
       </dependencies>
    
       <build>
    
  2. The structured logging uses the Logback framework, so let’s configure it to output as we need. Create a new file src/main/resources/logback.xml and set the contents as:

    <?xml version="1.0" encoding="UTF-8"?>
    <configuration scan="true">
        <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
            <encoder>
                <pattern>%date{MMM dd HH:mm:ss} %-5level petclinic[1]: %msg %n</pattern>
            </encoder>
        </appender>
    
        <appender name="FileLogger" class="ch.qos.logback.core.FileAppender">
            <file>/tmp/petclinic.json</file>
            <append>false</append>
            <immediateFlush>true</immediateFlush>
    
            <layout class="ch.qos.logback.contrib.json.classic.JsonLayout">
                <jsonFormatter class="ch.qos.logback.contrib.jackson.JacksonJsonFormatter">
                    <prettyPrint>false</prettyPrint>
                </jsonFormatter>
                <timestampFormat>yyyy-MM-dd' 'HH:mm:ss</timestampFormat>
                <appendLineSeparator>true</appendLineSeparator>
            </layout>
        </appender>
    
        <root level="DEBUG">
            <appender-ref ref="FileLogger"/>
            <appender-ref ref="STDOUT"/>
        </root>
    </configuration>
    
  3. We can now repackage the PetClinic application with:

    ./mvnw package -Dmaven.test.skip=true
    
  4. Rerun the application with the following command:

    java -jar target/spring-petclinic-*.jar --spring.profiles.active=mysql
    

    After initialization, the remaining output should be JSON format:

               |\      _,,,--,,_
              /,`.-'`'   ._  \-;;,_
    _______ __|,4-  ) )_   .;.(__`'-'__     ___ __    _ ___ _______
    |       | '---''(_/._)-'(_\_)   |   |   |   |  |  | |   |       |
    |    _  |    ___|_     _|       |   |   |   |   |_| |   |       | __ _ _
    |   |_| |   |___  |   | |       |   |   |   |       |   |       | \ \ \ \
    |    ___|    ___| |   | |      _|   |___|   |  _    |   |      _|  \ \ \ \
    |   |   |   |___  |   | |     |_|       |   | | |   |   |     |_    ) ) ) )
    |___|   |_______| |___| |_______|_______|___|_|  |__|___|_______|  / / / /
    ==================================================================/_/_/_/
    
    :: Built with Spring Boot :: 2.7.3
    
    2022-11-08 16:33:41.664  INFO 12708 --- [           main] o.s.s.petclinic.PetClinicApplication     : Starting PetClinicApplication v2.7.3 using Java 14.0.2 on mbp1 with PID 12708 (/Users/bmeyer/spring-petclinic/target/spring-petclinic-2.7.3.jar started by bmeyer in /Users/bmeyer/spring-petclinic)
    2022-11-08 16:33:41.670  INFO 12708 --- [           main] o.s.s.petclinic.PetClinicApplication     : The following 1 profile is active: "mysql"
    2022-11-08 16:33:44.722  INFO 12708 --- [           main] .s.d.r.c.RepositoryConfigurationDelegate : Bootstrapping Spring Data JPA repositories in DEFAULT mode.
    2022-11-08 16:33:44.863  INFO 12708 --- [           main] .s.d.r.c.RepositoryConfigurationDelegate : Finished Spring Data repository scanning in 122 ms. Found 2 JPA repository interfaces.
    2022-11-08 16:33:46.914  INFO 12708 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat initialized with port(s): 8080 (http)
    2022-11-08 16:33:46.935  INFO 12708 --- [           main] o.apache.catalina.core.StandardService   : Starting service [Tomcat]
    2022-11-08 16:33:46.935  INFO 12708 --- [           main] org.apache.catalina.core.StandardEngine  : Starting Servlet engine: [Apache Tomcat/9.0.65]
    2022-11-08 16:33:47.107  INFO 12708 --- [           main] o.a.c.c.C.[Tomcat].[localhost].[/]       : Initializing Spring embedded WebApplicationContext
    2022-11-08 16:33:47.108  INFO 12708 --- [           main] w.s.c.ServletWebServerApplicationContext : Root WebApplicationContext: initialization completed in 5313 ms
    2022-11-08 16:33:48.447  INFO 12708 --- [           main] org.ehcache.core.EhcacheManager          : Cache 'vets' created in EhcacheManager.
    2022-11-08 16:33:48.485  INFO 12708 --- [           main] org.ehcache.jsr107.Eh107CacheManager     : Registering Ehcache MBean javax.cache:type=CacheStatistics,CacheManager=urn.X-ehcache.jsr107-default-config,Cache=vets
    2022-11-08 16:33:48.566  INFO 12708 --- [           main] com.zaxxer.hikari.HikariDataSource       : HikariPool-1 - Starting...
    2022-11-08 16:33:49.015  INFO 12708 --- [           main] com.zaxxer.hikari.HikariDataSource       : HikariPool-1 - Start completed.
    2022-11-08 16:33:49.404  INFO 12708 --- [           main] o.hibernate.jpa.internal.util.LogHelper  : HHH000204: Processing PersistenceUnitInfo [name: default]
    2022-11-08 16:33:49.537  INFO 12708 --- [           main] org.hibernate.Version                    : HHH000412: Hibernate ORM core version 5.6.10.Final
    2022-11-08 16:33:49.889  INFO 12708 --- [           main] o.hibernate.annotations.common.Version   : HCANN000001: Hibernate Commons Annotations {5.1.2.Final}
    2022-11-08 16:33:50.182  INFO 12708 --- [           main] org.hibernate.dialect.Dialect            : HHH000400: Using dialect: org.hibernate.dialect.MySQL57Dialect
    2022-11-08 16:33:51.781  INFO 12708 --- [           main] o.h.e.t.j.p.i.JtaPlatformInitiator       : HHH000490: Using JtaPlatform implementation: [org.hibernate.engine.transaction.jta.platform.internal.NoJtaPlatform]
    2022-11-08 16:33:51.801  INFO 12708 --- [           main] j.LocalContainerEntityManagerFactoryBean : Initialized JPA EntityManagerFactory for persistence unit 'default'
    2022-11-08 16:33:54.914  INFO 12708 --- [           main] o.s.b.a.e.web.EndpointLinksResolver      : Exposing 13 endpoint(s) beneath base path '/actuator'
    2022-11-08 16:33:55.032  INFO 12708 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat started on port(s): 8080 (http) with context path ''
    2022-11-08 16:33:55.055  INFO 12708 --- [           main] o.s.s.petclinic.PetClinicApplication     : Started PetClinicApplication in 14.627 seconds (JVM running for 15.515)
  5. Our Logback configuration includes an appender that writes the logs to the file /tmp/petclinic.json. Verify logs are being written to it, as these will be used in the next section:

    more /tmp/petclinic.json
    
    {"timestamp":"2022-11-08 16:48:10","level":"INFO","thread":"main","logger":"org.springframework.samples.petclinic.PetClinicApplication","message":"Starting PetClinicApplication v2.7.3 using Java 14.0.2 on mbp1 with PID 13883 (/Users/bmeyer/spring-petclinic/target/spring-petclinic-2.7.3.jar started by bmeyer in /Users/bmeyer/spring-petclinic)","context":"default"}
    {"timestamp":"2022-11-08 16:48:10","level":"DEBUG","thread":"background-preinit","logger":"org.jboss.logging","message":"Logging Provider: org.jboss.logging.Log4j2LoggerProvider","context":"default"}
    {"timestamp":"2022-11-08 16:48:10","level":"INFO","thread":"background-preinit","logger":"org.hibernate.validator.internal.util.Version","message":"HV000001: Hibernate Validator 6.2.4.Final","context":"default"}
    {"timestamp":"2022-11-08 16:48:10","level":"INFO","thread":"main","logger":"org.springframework.samples.petclinic.PetClinicApplication","message":"The following 1 profile is active: \"mysql\"","context":"default"}
    {"timestamp":"2022-11-08 16:48:10","level":"DEBUG","thread":"background-preinit","logger":"org.hibernate.validator.internal.xml.config.ValidationXmlParser","message":"Trying to load META-INF/validation.xml for XML based Validator configuration.","context":"default"}
    ...
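Because each line of /tmp/petclinic.json is a self-contained JSON object, the file is straightforward to post-process. As a quick illustration, this Python sketch (using inline stand-ins for real log lines) tallies entries by level:

```python
import json
from collections import Counter

# Stand-in for lines read from /tmp/petclinic.json
sample_lines = [
    '{"timestamp":"2022-11-08 16:48:10","level":"INFO","thread":"main","logger":"a","message":"m","context":"default"}',
    '{"timestamp":"2022-11-08 16:48:10","level":"DEBUG","thread":"bg","logger":"b","message":"m","context":"default"}',
    '{"timestamp":"2022-11-08 16:48:11","level":"DEBUG","thread":"bg","logger":"b","message":"m","context":"default"}',
]

def count_levels(lines):
    """Parse each JSON log line and tally how many entries exist per level."""
    return Counter(json.loads(line)["level"] for line in lines if line.strip())

print(count_levels(sample_lines))
```

Running this against the real file gives a quick sense of how much of the output is DEBUG noise, which motivates the filtering we do later.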

4 - Log Analysis

With the ability to collect the logs from the PetClinic app, we can browse the logs on Mezmo.

Connect OpenTelemetry to the PetClinic logs

In this section, we will update the OpenTelemetry Collector’s configuration to read the logs output by the PetClinic app and forward them on to Mezmo.

To get started, we’ll need to modify the receivers section of the OpenTelemetry configuration to add a filelog receiver. This receiver will be responsible for reading the /tmp/petclinic.json file into the running collector.

  1. Edit $HOME/otelcol/config.yaml and add a filelog receiver to the configuration under the receivers section:

    #######################################
    receivers:
      filelog:
        include:
          - /tmp/petclinic.json
        include_file_name: false
        start_at: end
        operators:
          - type: json_parser
            timestamp:
              parse_from: attributes.timestamp
              layout_type: gotime
              layout: '2006-01-02 15:04:05'
            severity:
              parse_from: attributes.level
    
  2. With the new receiver defined, we need to add it to the logs pipeline. Under the service → pipelines → logs section, add filelog to the list of receivers:

    service:
      pipelines:
        logs:
          receivers: [ otlp, filelog ]
          exporters: [ logging, mezmo ]
    
  3. Save the configuration.

  4. Restart the collector:

    ./otelcol-contrib --config config.yaml
    
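To illustrate what the json_parser operator in the filelog receiver is doing, here is a hypothetical Python equivalent (not the collector's actual implementation): the Go reference layout '2006-01-02 15:04:05' corresponds to strptime's '%Y-%m-%d %H:%M:%S', and the level attribute becomes the record's severity.

```python
import json
from datetime import datetime

# strptime equivalent of the gotime reference layout '2006-01-02 15:04:05'
LAYOUT = "%Y-%m-%d %H:%M:%S"

def parse_petclinic_line(raw: str) -> dict:
    """Mimic the json_parser operator: JSON-parse the line, then promote
    the `timestamp` and `level` attributes to record-level fields."""
    attrs = json.loads(raw)
    return {
        "timestamp": datetime.strptime(attrs["timestamp"], LAYOUT),
        "severity": attrs["level"],
        "attributes": attrs,
    }

record = parse_petclinic_line(
    '{"timestamp":"2022-11-08 16:48:10","level":"INFO","message":"started"}'
)
print(record["timestamp"], record["severity"])
```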

Browse PetClinic logs in Mezmo

  1. Restart the PetClinic app:

    java -jar target/spring-petclinic-*.jar --spring.profiles.active=mysql
    

    The console logs will appear locally as:

    ...
    Oct 24 15:06:35 [main] DEBUG _.s.w.s.H.Mappings:
               o.s.b.a.w.s.e.BasicErrorController:
               { [/error]}: error(HttpServletRequest)
               { [/error], produces [text/html]}: errorHtml(HttpServletRequest,HttpServletResponse)
    Oct 24 15:06:35 [main] DEBUG _.s.w.s.H.Mappings: 'beanNameHandlerMapping' {}
    Oct 24 15:06:35 [main] DEBUG _.s.w.s.H.Mappings: 'resourceHandlerMapping' {/webjars/**=ResourceHttpRequestHandler [classpath [META-INF/resources/webjars/]], /**=ResourceHttpRequestHandler [classpath [META-INF/resources/], classpath [resources/], classpath [static/], classpath [public/], ServletContext [/]]}
    Oct 24 15:06:36 [main] INFO  o.s.b.a.e.w.EndpointLinksResolver: Exposing 13 endpoint(s) beneath base path '/actuator'
    Oct 24 15:06:36 [main] INFO  o.a.c.h.Http11NioProtocol: Starting ProtocolHandler ["http-nio-8080"]
    Oct 24 15:06:36 [main] INFO  o.s.b.w.e.t.TomcatWebServer: Tomcat started on port(s): 8080 (http) with context path ''
    Oct 24 15:06:36 [main] INFO  o.s.s.p.PetClinicApplication: Started PetClinicApplication in 15.448 seconds (JVM running for 17.292)
    Oct 24 15:06:59 [HikariPool-1 housekeeper] DEBUG c.z.h.p.HikariPool: HikariPool-1 - Pool stats (total=10, active=0, idle=10, waiting=0)
    Oct 24 15:06:59 [HikariPool-1 housekeeper] DEBUG c.z.h.p.HikariPool: HikariPool-1 - Fill pool skipped, pool is at sufficient level.
    
  2. Navigate to Mezmo at https://app.mezmo.com. In the dashboard, click the Views button and select Everything. You should see the same log entries appear at the bottom of the dashboard:

  3. Each log entry can be expanded to reveal additional details and metadata by clicking its disclosure icon:

    You’ll notice that the additional metadata output by the PetClinic app, such as context, level, logger, and thread, appears in the _meta portion of the log entry. We can query this information by including a field in the search string. For example, to find all entries where the logger is HikariPool, we could enter this search string:

    _meta.logger:com.zaxxer.hikari.pool.HikariPool
    

    which gives us a filtered view that includes only the entries we wish to see:

  4. Having a filtered view that isolates only the logs coming from our PetClinic app is very convenient. Let’s save this View so it can easily be recalled in the future. At the top of the log view, click the Unsaved View drop down and select Save as new view:

    Save As New View
  5. Name the new view PetClinic App, and leave the Category and Alert fields blank for now.

    Create New View

    Click Save View.

  6. The new View is saved under the Uncategorized group of views on the list:

    PetClinic App View

    Now, anytime we want to view only the logs from the PetClinic app, we can click on this view to see the filtered set of log entries.

5 - Pipelining

Introduce the concept of an Observability Pipeline and show how it can free developer resources and allow SRE teams to filter the log information coming from PetClinic, making the application easier to manage and operate.

With the PetClinic App view created, one thing becomes immediately apparent: the PetClinic app is very chatty, outputting DEBUG statements about its pool stats every 30 seconds. While it would be simple enough to edit the app's own logging configuration, that would require changes to the config file, and there may be other DEBUG information we do want to see, just not the HikariPool output.

For the purposes of this exercise, we will assume the PetClinic app is a third-party app that we don't have control over, but whose DEBUG output we still want to remove because it is cluttering up our Log Analysis. For that, we can use a Pipeline that looks for those entries and drops them before passing the non-DEBUG entries on to Log Analysis.

Build a Pipeline

  1. To get started, click the Pipeline icon in the top-left corner of the dashboard:

    Pipeline

    then click on New Pipeline:

    New Pipeline
  2. Name the new pipeline PetClinic preprocess:

    Pipeline Name

    Click Save.

  3. Next we’ll add a Source that will receive log data from the OpenTelemetry Collector we configured earlier. Click Sources → Add.

  4. We will receive the OpenTelemetry logs via HTTP. In the list of available sources, type http in the filter, which should highlight the HTTP source:

    Sources - HTTP

    Click on the HTTP source.

  5. Configure the following:

    • Title as OTEL Ingest
    • add a meaningful Description such as Receive logs from OTEL
    • leave the Decoding Method as json

    Configured - HTTP

    Click Save.

  6. With the OTEL Ingest endpoint created, we need to edit its configuration to create a new access key. Click on the OTEL Ingest endpoint and then click the Create new key button.

    HTTP - Create new key
  7. For the Title, enter Ingest Key and click Create.

  8. Be sure to copy and save the value of the new key as well as the URL to this specific endpoint as we will need to use them in a later step:

    HTTP - Configured key

    In this example:

    • the value of Ingest Key is +19opdnwjWmDUD302J2jsT9xCF87Ibu0rk2t95jC/ps= and
    • the URL is https://pipeline.mezmo.com/v1/b745ce28-546e-11ed-a64b-d233826e7531.

    Click Update.

  9. Our pipeline is starting to take shape as:

    PetClinic Ingest Pipeline - Step 1
  10. The log data received from OTEL follows the format specified in the Send Log Lines API. An example JSON payload with two log entries is sent as:

{
  "lines": [
    {
      "timestamp": 1666275427712,
      "line": "\u003c135\u003eOct 20 09:17:02 mbp1 HikariPool-1 - Pool stats (total=10, active=0, idle=10, waiting=0)",
      "app": "",
      "level": "info",
      "meta": {}
    },
    {
      "timestamp": 1666275427712,
      "line": "\u003c135\u003eOct 20 09:17:02 mbp1 HikariPool-1 - Fill pool skipped, pool is at sufficient level.",
      "app": "",
      "level": "info",
      "meta": {}
    }
  ]
}

We want to process each log entry individually, so we will use the Unroll processor. This processor converts a JSON array into individual JSON objects that appear at the output of the processor.
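The effect of the Unroll processor on a payload like the one above can be sketched in Python (a hypothetical model, not Mezmo's implementation):

```python
def unroll(event: dict, field: str = "lines") -> list:
    """Split one event whose `field` holds an array into one event per
    element, mirroring what the Unroll processor does with `.lines`."""
    return [{field: item} for item in event.get(field, [])]

# A batch event shaped like the Send Log Lines payload above
batch = {
    "lines": [
        {"timestamp": 1666275427712, "line": "Pool stats (total=10, active=0)", "level": "info", "meta": {}},
        {"timestamp": 1666275427712, "line": "Fill pool skipped.", "level": "info", "meta": {}},
    ]
}

events = unroll(batch)
print(len(events))  # one output event per log entry
```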

Click Processors → Add. In the Processor list, filter on unroll and select the Unroll Processor:

Add Unroll Processor

Click on the Unroll processor.

  11. Configure the following:

    • Title as Unroll Logs
    • add a meaningful Description such as Convert logs array to individual logs for processing
    • set the Field value to .lines

    Configured - Unroll

    Click Save.

    Our pipeline now appears as:

    Unconnected HTTP & Unroll
  12. Hover your mouse over the right edge of the HTTP source we configured (OTEL Ingest). An attach anchor will appear:

Click and drag from the HTTP source to the Unroll processor. The Unroll processor will be highlighted:

With the Unroll processor highlighted, release the mouse click and the HTTP source will now send its output to the Unroll processor.

The pipeline should now appear as:

PetClinic Ingest Pipeline - Step 2
  13. With the logs converted to individual entries, we can accomplish what we set out to do: remove the DEBUG entries. To do this, we'll add a Filter processor.

Click Processors → Add. In the Processor list, filter on filter and select the Filter Processor:

Click on the Filter processor.

  14. Configure the following:

    • Title as Discard DEBUG Msgs
    • add a meaningful Description such as Allow non-DEBUG messages to pass
    • set the Field value to .lines.level
    • set the Operator to not_equal
    • set the Value to DEBUG (the value is case-sensitive)

    Configured - Filter

    Click Save.
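
Conceptually, the Filter processor applies a predicate to each event: with Field .lines.level, Operator not_equal, and Value DEBUG, only events whose level is not exactly DEBUG pass through. A hypothetical Python sketch of that predicate:

```python
def keep_event(event: dict) -> bool:
    """Pass the event through only when .lines.level is not equal to
    "DEBUG". The comparison is case-sensitive, as the processor's is."""
    return event.get("lines", {}).get("level") != "DEBUG"

events = [
    {"lines": {"level": "INFO", "line": "Tomcat started on port(s): 8080"}},
    {"lines": {"level": "DEBUG", "line": "HikariPool-1 - Pool stats"}},
]

passed = [e for e in events if keep_event(e)]
print([e["lines"]["level"] for e in passed])  # only the non-DEBUG event remains
```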

  15. Connect the output of the Unroll processor to the input of the Filter processor, the same way you connected the HTTP source to the Unroll processor. Our pipeline now appears as:

    PetClinic Ingest Pipeline - Step 3
  16. To prepare to send our log entries from the pipeline to Log Analysis, we must first convert the JSON format to a string. For this, we use the Stringify processor. Click Processors → Add. In the Processor list, filter on stringify and select the Stringify Processor:

    Add Stringify Processor

    Click on the Stringify processor.

  7. Configure the following:

    • Title as Stringify
    • add a meaningful Description such as Convert JSON to text

    Configured - Stringify

    Click Save.

  8. Connect the output from the Filter processor to the input of the Stringify processor using the same click-and-drag method. Our pipeline now appears as:

    PetClinic Ingest Pipeline - Step 4
  9. Finally, we are ready to take the output of the Stringify processor and send it out of our pipeline and over to Log Analysis. To do so, we’ll add a Destination. The destination will require an ingest key as part of its configuration. To obtain your ingest key, click Settings → Organization → API Keys.

    We can use the existing Ingestion Key by clicking the clipboard icon to copy it:

    We will paste this value into the Destination that will be configured next.

  10. Click Destinations → Add. In the Destinations list, filter on log analysis and select the Mezmo Log Analysis destination:

    Add Log Analysis

    Click on the Mezmo Log Analysis destination.

  11. Configure the following:

    • Title as Send to LA
    • add a meaningful Description such as Send logs to Log Analysis
    • leave End-to-end Acknowledgement as checked
    • set the Mezmo Host value to logs.mezmo.com
    • set the Hostname to petclinic-pipeline
    • paste the Ingestion Key we copied earlier into the Ingestion Key field

    Configured - Log Analysis

    Click Save.

  12. Lastly, connect the output of the Stringify processor to the Send to LA destination. The final pipeline should appear as:

    Completed Pipeline
  13. With the pipeline finalized, the last step is to deploy it so it is active. Click the Deploy pipeline button:

    Deploy Pipeline

    When the deployment completes, a blue checkmark will appear next to the pipeline name indicating the pipeline is now active:

    Pipeline Deployed
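
Logically, the pipeline we just deployed behaves like the small function chain below (a simplified Python sketch of the processor semantics; the event shape is assumed):

```python
import json

def unroll(event, field="lines"):
    # Unroll: one event per array element, kept under the same field
    return [{field: record} for record in event[field]]

def keep(event):
    # Filter: pass events whose .lines.level is not equal to DEBUG
    return event["lines"]["level"] != "DEBUG"

def stringify(event):
    # Stringify: convert the JSON event to a string payload
    return json.dumps(event)

batch = {"lines": [
    {"level": "DEBUG", "message": "query details"},
    {"level": "INFO", "message": "request served"},
]}

out = [stringify(e) for e in unroll(batch) if keep(e)]
for line in out:
    print(line)  # only the INFO entry survives
```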

Reconfigure OTEL

You may recall that when we installed and configured the OpenTelemetry Collector, we set it up to send the logs to the Log Analysis endpoint. We now want to reconfigure the collector to send logs to our pipeline endpoint instead. This will start the flow of logs through the pipeline and the processors we’ve configured above.

  1. Stop the OTEL Collector if it’s running.

  2. Edit the $HOME/otelcol/config.yaml file.

    • Change the value of ingest_url to the URL we saved from Step 8 in the previous section.
    • Change the value of ingest_key to the Ingest Key value we saved from Step 8 in the previous section.

    The mezmo section will look similar to this:

    #######################################
    exporters:
      mezmo:
        ingest_url: "https://pipeline.mezmo.com/v1/b745ce28-546e-11ed-a64b-d233826e7531"
        ingest_key: "+19opdnwjWmDUD302J2jsT9xCF87Ibu0rk2t95jC/ps="
    

    Save the changes and exit.

  3. Restart the collector with:

    ./otelcol-contrib --config config.yaml
    

Testing

At this point, any new log messages output by the PetClinic app will get picked up by the collector and sent to the pipeline. The pipeline should filter out any messages with level=DEBUG set and forward on the rest to Log Analysis.
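
If you want to exercise the pipeline endpoint independently of the collector, you can hand-craft a test request against it. This is a sketch only: the `authorization` header name and the payload shape are assumptions, so check the Mezmo pipeline documentation for the exact contract before relying on it.

```python
import json
import urllib.request

# Placeholder values: substitute the ingest URL and key saved earlier.
INGEST_URL = "https://pipeline.mezmo.com/v1/<your-pipeline-id>"
INGEST_KEY = "<your-ingest-key>"

def build_request(url, key, record):
    """Build (but do not send) a JSON POST against the pipeline endpoint.

    The 'authorization' header name is an assumption; consult the
    Mezmo pipeline documentation for the exact authentication scheme.
    """
    body = json.dumps({"lines": [record]}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={"content-type": "application/json",
                 "authorization": key},
        method="POST",
    )

req = build_request(INGEST_URL, INGEST_KEY,
                    {"level": "INFO", "message": "pipeline smoke test"})
print(req.get_method(), req.full_url)
# To actually send it: urllib.request.urlopen(req)
```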

  1. Head over to the PetClinic App View we created earlier. No new DEBUG messages should be showing up now.

  2. One easy way to generate INFO messages in the PetClinic app is to restart it. Both shutting down and starting up the app generate quite a few INFO and DEBUG entries. Shut down the PetClinic app, and you should see only the INFO logs show up in the View:

    PetClinic Shutdown Logs

    Likewise, starting the PetClinic app produces similar INFO-only logs:

    java -jar target/spring-petclinic-*.jar --spring.profiles.active=mysql
    
    PetClinic Startup Logs

6 - Troubleshooting

Discuss techniques for troubleshooting setups, the flow of data and observing data at various stages in a pipeline.

This section will highlight some techniques for troubleshooting and collecting insight at various steps of the pipeline as data flows through the system.

Webhook.site

Webhook.site is a free service that accepts HTTP requests and displays the full HTTP payload so you can verify its contents. By creating a Destination in the pipeline and connecting the output of a Processor to it, you can view the data on Webhook.site as it appeared at that stage of the pipeline.

  1. Visit https://webhook.site/ in a browser.

  2. Copy your unique URL to the clipboard:

    Webhook.site URL
  3. Open the PetClinic Preprocess pipeline and click Destinations → Add.

  4. Select the HTTP destination.

  5. Configure the following:

    • Title as To Webhook.site
    • add a meaningful Description such as Send output to Webhook.site
    • leave End-to-end Acknowledgement as checked
    • set the URI to the value you copied from the Webhook.site page
    • set the Encoding to json
    • leave Compression and Authentication → Strategy as none

    Configured - Webhook.site

    Click Save.

  6. With the Webhook.site destination configured, we can select the output from any Source or Processor and send it to the destination. For this example, drag a connection from the output of the Discard DEBUG Msgs filter to the To Webhook.site destination. The pipeline will now look like this:

    Pipeline with Webhook.site

    This will send a copy of the output from the Discard DEBUG Msgs processor to Webhook.site where we can inspect the contents of the payload for debugging purposes:

    Pipeline with Webhook.site

7 - Alerting

No need to stare at live-tail logs all day; use Mezmo’s Alerting capabilities to watch for conditions and notify you when something needs your attention.

The PetClinic app has a unique menu item named ERROR that, when clicked, throws a Java Exception that shows up in the log output with level=ERROR. Because this log level is not DEBUG, it will flow through the entire pipeline and end up in Log Analysis. From there, we can add an Alert that will watch for these log entries and respond appropriately. In this example, we’ll send an email message to a recipient.
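
Conceptually, the alert we are about to build is a simple trigger condition over the View’s matching logs. This Python sketch mirrors the search string and the "Immediately after 1 Line" threshold configured below in this section; the log record shape is illustrative:

```python
def matches_view(log):
    # Mirrors the search: _meta.service_name:PetClinic _meta.level:ERROR
    meta = log.get("_meta", {})
    return meta.get("service_name") == "PetClinic" and meta.get("level") == "ERROR"

def should_alert(logs, threshold=1):
    # "Immediately after 1 Line": fire as soon as one matching line arrives
    return sum(1 for log in logs if matches_view(log)) >= threshold

stream = [
    {"_meta": {"service_name": "PetClinic", "level": "INFO"}},
    {"_meta": {"service_name": "PetClinic", "level": "ERROR"}},
]
print(should_alert(stream))  # True: one ERROR line is enough
```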

  1. To get started, click the ERROR menu item in the PetClinic app to generate an error message:

    PetClinic - Error

    In the console of the Java app, you should see the error message it generates:

    Nov 04 10:14:02 [http-nio-8080-exec-3] ERROR o.a.c.c.C.[.[.[.[dispatcherServlet]: Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is java.lang.RuntimeException: Expected: controller used to showcase what happens when an exception is thrown] with root cause
    java.lang.RuntimeException: Expected: controller used to showcase what happens when an exception is thrown
    at org.springframework.samples.petclinic.system.CrashController.triggerException(CrashController.java:33)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:564)
    at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:205)
    at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:150)
    ...

    And the corresponding entry in Mezmo appears as:

    PetClinic - Error
  2. Sign into Mezmo at https://app.mezmo.com.

  3. We are interested in any log that comes from the PetClinic app and has a level=ERROR. As such, search for these log entries using this search string in the search field:

    _meta.service_name:PetClinic _meta.level:ERROR
    
    PetClinic Error Search
  4. Let’s save this search as a new View. Click on the Unsaved View drop down and select Save as new view.

  5. Set the name to PetClinic Errors and select the PETCLINIC Category we created in the Log Analysis section.

    Create new view - PetClinic Errors

    Click Save View.

  6. With the new View created, we can attach an alert to it by clicking the PetClinic Errors dropdown and selecting Attach an alert:

    Create new view - PetClinic Errors
  7. In the Alert dialog, select View-specific alert then select Email.

  8. In the expanded Alert dialog:

    • Select a Recipient for the email
    • Uncheck the At the end of 30 seconds option
    • Check the Immediately after 1 Line option
    • Leave the rest of the defaults as-is

    Click Save Alert.

  9. Head back over to the PetClinic app and click the ERROR menu item again. This will output a new ERROR entry, which will trigger the email alert. Check your inbox for the email notification:

    Alert Email

    You’ll notice you have the option to mute this notification for various durations directly from the email.

    There are many additional notification types that can be attached to this alert, including Slack and PagerDuty, as well as invoking a Webhook to take a corrective action if desired.
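
    As a sketch of that Webhook option, a minimal receiver that could take a corrective action when the alert fires might look like the following (the port and payload handling are assumptions; Mezmo’s actual webhook payload schema may differ):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import json

class AlertHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        # A corrective action would go here, e.g. restarting the PetClinic app.
        print("alert received:", payload)
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the demo output quiet

def make_server(port=8000):
    # Port 8000 is arbitrary; expose this URL to Mezmo as the webhook target.
    return HTTPServer(("", port), AlertHandler)

# To run it: make_server().serve_forever()
```

    Run the server, expose its URL, and attach it to the alert as a Webhook notification target.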