Run the OpenTelemetry Collector in Kubernetes for Front-End Tracing

TL;DR: Run the OpenTelemetry Collector Helm chart with --values=values.yaml pointing to the values.yaml in this gist. Use kubectl get services to find the collector’s URL.

The rest of this blog post walks through the whole process of setting this up. This will teach you the why and how of each step, so that you can adjust it to your needs. There’s also a troubleshooting section for when stuff doesn’t work.

Warning: this is a LOT. My intention is to break it up into smaller, better parts and post it separately. That’ll be slow. So far, here’s How to send a test span to a collector. Meanwhile, here’s everything.

Front-End Tracing needs a collector

You want to see some OpenTelemetry from your client? You’ll need client libraries, a collector, and a backend. This post walks through one way to set up a collector in Kubernetes. This one works for sending traces from a web application to Honeycomb.

You could configure OpenTelemetry in your application to send traces directly from the browser to Honeycomb. Why not do this? 

For one, you’d expose your Honeycomb API key to the world. In a proof-of-concept, that might be fine.

For two, the easiest way to transmit traces is to send JSON over HTTP, and Honeycomb doesn’t currently support this. Honeycomb accepts traces over gRPC or over HTTP with Protobuf, both of which are more efficient and better supported.

Your very own OpenTelemetry collector will keep your API key private. It will happily receive JSON over HTTP and convert that to more efficient formats for Honeycomb.

A collector can run in Kubernetes

If you already run stuff in Kubernetes, then the collector can run there too. Kubernetes is good at running stuff.

Maybe you already run the OpenTelemetry collector to gather back-end traces and metrics. I suggest using a different instance to collect front-end traces. Opening your existing collector to traffic from the internet raises risk. If someone spams it with garbage, it’ll run out of memory and your own traces will be lost. Best to keep them separate, and keep each of them as simple as possible. Have them use distinct API keys while you’re at it, so that you can disable the front-end one independently.

This front-end collector functions as a web service, so it makes sense as a Deployment in Kubernetes.

Helm can deploy a collector to Kubernetes

To run an app in Kubernetes, there’s all kinds of things to define, lots of YAML. A Helm chart defines all those things, generating the YAML that Kubernetes needs. Helm is like brew on a Mac, or the Windows store on my PC, except more configurable. Configurable, with more YAML.

OpenTelemetry publishes a Helm chart for deploying the collector to Kubernetes. Its README nicely describes all the defaults, which include Jaeger, Zipkin, metrics, and logs. I want my collector to accept only traces over HTTP and send them to Honeycomb.

Let’s do this

This post walks through the configuration process incrementally, with troubleshooting tips. If you want your collector to work a little differently, this post will still help you. If you’re like “just give me the config that works!” then see the TL;DR.

Prerequisites

Before starting this process, I have:

  • a Kubernetes cluster (mine is in EKS), and all the permissions I need in it (I’m admin 😒)
  • a bash prompt with…
    • kubectl installed and configured to operate on that cluster
    • helm installed
  • curl (or Postman, something to send a test trace)
  • a Honeycomb account and an API key
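Here’s a quick, optional way to confirm the command-line pieces before starting (standard kubectl, helm, and curl checks; your context and node names will differ):

kubectl config current-context   # should name the cluster you intend to use
kubectl get nodes                # confirms kubectl can reach that cluster
helm version
curl --version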

Step 1: Run the Helm chart to install the collector

Following instructions in the OpenTelemetry Collector Helm chart, add the chart repository (once):

helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts

and then specify a name and a chart to install:

helm install collectron open-telemetry/opentelemetry-collector

Here, “collectron” is the name I’ve given this collector deployment. You may prefer a more sophisticated name. It does have to be a “lowercase RFC 1123 subdomain,” so stick with lowercase letters, numbers, hyphens, and dots. It didn’t let me name it “collecTRON.”

Also, it didn’t work. Here’s the error I saw:

Error: execution error at (opentelemetry-collector/templates/NOTES.txt:14:3): [ERROR] 'mode' must be set. See https://github.com/open-telemetry/opentelemetry-helm-charts/blob/main/charts/opentelemetry-collector/UPGRADING.md for instructions.

We need to specify some options. We want to do that anyway, so let’s go.

Step 2: Iterate until something works

To configure the installation, create a yaml file. I called mine values.yaml. The meaning of this yaml is specific to the particular Helm chart. See its full values file for what you can override.

First, set the mode value that it asked for. The full values file says that valid values are “daemonset” and “deployment.” A Kubernetes daemonset would make sense for a back-end collector gathering traces from other pods, but we’re setting up a collector to listen for traces from the client, over the internet. A Kubernetes deployment makes more sense.

Put in the file: 
mode: deployment

You can try to install again, but it won’t let you reuse the name. Instead, iterate with helm upgrade. Pass our yaml to --values.

Upgrade the helm installation

helm upgrade collectron open-telemetry/opentelemetry-collector --values values.yaml

When this works, the output is something like:

Release "collectron" has been upgraded. Happy Helming!
NAME: collectron
LAST DEPLOYED: Fri Jul 8 13:16:07 2022
NAMESPACE: default
STATUS: deployed
REVISION: 19
TEST SUITE: None
NOTES:

Check that the collector is running

We expect Kubernetes to run a pod with a name that starts with the installation name, collectron. (The Helm chart appends “opentelemetry-collector” if your name doesn’t already contain this.)

Run kubectl get pods. I see this line:
collectron-opentelemetry-collector-766b88bbf8-gr482 1/1 Running 0 2m18

  • Check that there is exactly one of them.
  • Check the last column to see whether this one started up after your last helm upgrade. (Troubleshooting: My pod didn’t restart after the upgrade.)
  • Check that the status is “Running.” (Troubleshooting: My pod stays in PENDING status forever, and My pod status is CrashLoopBackoff.)

Look at the collector’s logs

The full name of the pod lets you request its logs. Copy that from the output of kubectl get pods and then pass it to kubectl logs:

kubectl logs collectron-opentelemetry-collector-766b88bbf8-gr482

Here’s a one-liner that you can repeat after the full name of the pod changes. If you have additional opentelemetry-collector pods, substitute your deployment’s full name in.

kubectl get pods -o name | grep opentelemetry-collector | sed 's#pod/##' | xargs kubectl logs

Hurray, logs! Now we have a feedback loop.

Look at the container ports

The Helm chart also sets up the ports on the collector’s container. See their numbers and names:

kubectl get pods -o name | grep opentelemetry-collector | sed 's#pod/##' | xargs kubectl get pod -o jsonpath='{range .spec.containers[].ports[*]}{.containerPort}{"\t"}{.name}{"\n"}{end}'

Summary: Iterating on configuration (we will link to this a lot)

Change values.yaml and save the file.
Run helm upgrade.
Check that exactly one collector pod is running.
Tail its log.

helm upgrade collectron open-telemetry/opentelemetry-collector --values values.yaml
kubectl get pods
kubectl get pods -o name | grep opentelemetry-collector | sed 's#pod/##' | xargs kubectl logs -f

Step 4: Turn off what we don’t want

In this first run, the logs contain a bunch of stuff about starting up receivers, and then a lot of MetricsExporter output.
I don’t want all that! I don’t want any metrics, and I only want one receiver and one exporter. Time for the next iteration.

The OpenTelemetry collector reads its configuration from a file. The Helm chart creates that file based on its template, merged with the config section in our values.yaml.
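If you’re curious what the chart will actually generate before deploying it, helm template renders everything locally, including the collector’s config. This is just a way to inspect the merged result, not a required step:

helm template collectron open-telemetry/opentelemetry-collector --values values.yaml | less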

Our collector configuration goes under config in values.yaml. The first thing to do is override the chart’s defaults. The docs provide part of the example I want, which disables logs and metrics and receives only traces, using the OTLP standard.

config:
  receivers:
    jaeger: null
    prometheus: null
    zipkin: null
  service:
    pipelines:
      traces:
        receivers:
          - otlp
      metrics: null
      logs: null

ports:
  otlp:
    enabled: false
  otlp-http:
    enabled: true
    containerPort: 4318
    servicePort: 4318
    hostPort: 4318
    protocol: TCP  
  jaeger-compact:
    enabled: false
  jaeger-thrift:
    enabled: false
  jaeger-grpc:
    enabled: false
  zipkin:
    enabled: false
  metrics:
    enabled: false

I’ve added a section about ports. We want to disable all the ports that our collector will not use, so that Kubernetes won’t open them on the container. In real life, it took hours of pain to figure out that this is necessary, and I’m not gonna walk you through that part.

Deploy the new config by Iterating on configuration.

The log is shorter than last time, with nothing about metrics. It still has the key line that I care about: 
2022-07-07T21:14:16.598Z info otlpreceiver/otlp.go:88 Starting HTTP server on endpoint 0.0.0.0:4318 {"kind": "receiver", "name": "otlp"}
The collector is listening on 4318, the standard port for traces over HTTP.
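If you’d like to verify that port from your own machine before exposing anything to the internet, kubectl port-forward can tunnel straight to the pod. A sketch, assuming your pod’s full name matches the one from kubectl get pods (the curl is the same request Step 6 builds up to, so expect a 200 with an empty JSON body):

kubectl port-forward collectron-opentelemetry-collector-766b88bbf8-gr482 4318:4318
# in another terminal:
curl -i http://localhost:4318/v1/traces -X POST -H "Content-Type: application/json" -d '{}'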

Step 5: Expose the collector to the world

The collector is listening for traces, but only from inside the cluster. It has a Kubernetes service; you can find it in the list with kubectl get services. Its type is ClusterIP. We want more! We want an external IP address and URL.

In values.yaml, tell the Helm chart to create a service with a LoadBalancer:

service:
  type: LoadBalancer

Deploy the new config by Iterating on configuration.

Now look for updated service information:

kubectl get services

Expect your collector’s name, with a type of LoadBalancer. Mine looks like this: 

collectron-opentelemetry-collector LoadBalancer 10.100.172.59 a0d6fba93229443528a02f5a8a1414c7-1962312631.us-east-1.elb.amazonaws.com 4318:32561/TCP 3h49m

The External-IP field contains a URL. Let’s try it! Cut and paste your URL. I’m gonna shorten mine for exposition.
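If you’d rather script it than copy from the table, jsonpath can pull the hostname out of the service (assuming your service is named like mine; on EKS the load balancer shows up as a hostname, while some other clouds populate an IP instead):

kubectl get service collectron-opentelemetry-collector -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'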

curl -i http://blahblah.us-east-1.elb.amazonaws.com:4318

This should get you a 404. 

Be sure to use protocol http and port 4318.

(Troubleshooting: The collector doesn’t respond at its URL; the connection hangs.)

Getting https to work is a whole different adventure, not in scope for this post. Yes, lack of https will cause problems in some clients. No, http is not acceptable for production. But it might be enough for today.

Step 6: Send it something (anything)

Let’s work that curl command up to a valid trace. If you already have a setup for sending traces, use that instead, and skip to Step 8. If getting new errors over and over doesn’t make you happy, skip to Step 7.

Hitting the right URL and port, with -i to show the results:

curl -i http://blahblah.us-east-1.elb.amazonaws.com:4318

Leads to 404, Page not found:

HTTP/1.1 404 Not Found
Content-Type: text/plain; charset=utf-8
Vary: Origin
X-Content-Type-Options: nosniff
Date: Tue, 12 Jul 2022 18:17:57 GMT
Content-Length: 19

404 page not found

That’s because the URL path should end with /v1/traces:

curl -i http://blahblah.us-east-1.elb.amazonaws.com:4318/v1/traces

This gets us a new error. Small victories!

HTTP/1.1 405 Method Not Allowed
Content-Type: text/plain
Vary: Origin
Date: Tue, 12 Jul 2022 18:19:19 GMT
Content-Length: 41

405 method not allowed, supported: [POST]

405, Method not allowed, because it receives traces in a POST. Send a POST.

curl -i http://blahblah.us-east-1.elb.amazonaws.com:4318/v1/traces -X POST

Hurray, a new error!

HTTP/1.1 415 Unsupported Media Type
Content-Type: text/plain
Vary: Origin
Date: Tue, 12 Jul 2022 18:21:08 GMT
Content-Length: 81

415 unsupported media type, supported: [application/json, application/x-protobuf]

415, Unsupported media type. Let’s tell it we’re sending JSON, using a Content-Type header.

curl -i http://blahblah.us-east-1.elb.amazonaws.com:4318/v1/traces -X POST -H "Content-Type: application/json"

Another new error! Yes!

HTTP/1.1 400 Bad Request
Content-Type: application/json
Vary: Origin
Date: Tue, 12 Jul 2022 18:23:14 GMT
Content-Length: 26

{"code":3,"message":"EOF"}

400 Bad request, with EOF for End of File. It wants us to send it some data. Put a minimal JSON object in the request body with -d '{}'.

curl -i http://blahblah.us-east-1.elb.amazonaws.com:4318/v1/traces -X POST -H "Content-Type: application/json" -d '{}'

Victory! We get 200 OK!

HTTP/1.1 200 OK
Content-Type: application/json
Vary: Origin
Date: Tue, 12 Jul 2022 18:24:47 GMT
Content-Length: 2

{}

In the collector logs, you’ll see… nothing. Well, we didn’t really send it anything, so that’s fair.

What if we send it something nonempty? Try -d '{"name": "jess was here"}'

curl -i http://blahblah.us-east-1.elb.amazonaws.com:4318/v1/traces -X POST -H "Content-Type: application/json" -d '{"name": "jess was here"}'
HTTP/1.1 400 Bad Request
Content-Type: application/json
Vary: Origin
Date: Tue, 12 Jul 2022 18:31:09 GMT
Content-Length: 77

{"code":3,"message":"unknown field \"name\" in v1.ExportTraceServiceRequest"}

Hmm, a 400 Bad Request with “unknown field.” Fair. Time to send it legitimate trace data.

Step 7: Send a span for testing

The collector is listening for traces. Traces are made of spans, so let’s send it one span.

Here’s a valid message in OTLP format that the collector can accept. Put this in a file called span.json

{
  "resourceSpans": [
    {
      "resource": {
        "attributes": [
          {
            "key": "service.name",
            "value": {
              "stringValue": "test-with-curl"
            }
          }
        ]
      },
      "instrumentationLibrarySpans": [
        {
          "instrumentationLibrary": {
            "name": "instrumentatron"
          },
          "spans": [
            {
              "traceId": "71699b6fe85982c7c8995ea3d9c95df2",
              "spanId": "3c191d03fa8be065",
              "name": "spanitron",
              "kind": 3,
              "droppedAttributesCount": 0,
              "events": [],
              "droppedEventsCount": 0,
              "status": {
                "code": 1
              }
            }
          ]
        }
      ]
    }
  ]
}

This is pretty much the minimum. (Thanks Lightstep for the example.)

It looks like a lot for one span. See, first it groups spans by resource (what produced this telemetry?), which carries the service.name. Then it groups them by instrumentation library (whose code produced this telemetry?), and its name winds up in library.name.

Note that the service.name here will become the dataset name in Honeycomb, in the next step.

Save it as span.json and send it to your collector:

curl -i http://blahblah.us-east-1.elb.amazonaws.com:4318/v1/traces -X POST -H "Content-Type: application/json" -d @span.json

The collector should be happy, and say very little:

HTTP/1.1 200 OK
Content-Type: application/json
Vary: Origin
Date: Tue, 12 Jul 2022 18:44:50 GMT
Content-Length: 2

{}

The collector logs also say nothing. It doesn’t report normal functionality. If you’d like to see what it receives, see Troubleshooting: Did the collector receive my spans?

Step 8: Send data to Honeycomb

The collector is now receiving traces, but it doesn’t have anything to do with them. We disabled all the exporters. Let’s teach it to send these traces to Honeycomb.

The Honeycomb docs have details, but here’s the part we need: Add an OTLP exporter, which sends traces over gRPC. Point it to the Honeycomb endpoint api.honeycomb.io:443 and include a header x-honeycomb-team with your API key. Then add that exporter to the traces pipeline.

Here, I’ve named the exporter “otlp/honeycomb.” The part before the slash tells the collector which kind of exporter to use (the OTLP one is included in the collector); the part after the slash is a distinguishing name.

The exporters section and the exporters list in the traces pipeline are new:

config:
  exporters:
    otlp/honeycomb:
      endpoint: api.honeycomb.io:443
      headers:
        "x-honeycomb-team": <your honeycomb API key here>

  receivers:
    jaeger: null
    prometheus: null
    zipkin: null
  service:
    pipelines:
      traces:
        receivers:
          - otlp
        exporters:
          - otlp/honeycomb
      metrics: null
      logs: null

Deploy the new config by Iterating on configuration.

Test it by sending a span. Check the collector log for errors. (Troubleshooting: Honeycomb returns an error about a missing dataset header.)

Step 9: See your span in Honeycomb

Finding a particular test span in Honeycomb might look different depending on your setup. This should take about ten seconds (after you’re logged in). The video below includes troubleshooting for other cases. If you don’t want a video, skip ahead to the text description below.
VIDEO:

If this is working, skip to the next step.

See your span in Honeycomb (common case):

Log in to Honeycomb.
Click New Query, the magnifying glass.
Check your dataset: usually it’s the same as service.name in the span. The example uses “test-with-curl.” Change the dropdown selection if it shows something different.
Click Run Query.

See a grid of Raw Data. The most recent spans are at the top. Is yours there?

If that didn’t work, try (Troubleshooting: I can’t find my event in Honeycomb.)

Step 10: Put the API Key in a Secret (optional)

The collector is working now. There are plenty of tweaks possible, but only one of them is screaming at me in urgency.

With my Honeycomb API Key in values.yaml, I can’t even commit that file to git. Not OK! I put a lot of work into this and I want to save it, without worrying about exposing secrets.

If this isn’t bothering you, skip to the next step.

The collector should be getting the API key from an environment variable, and that environment variable should be populated by Kubernetes from the value in a secret. Let’s arrange that.

First, replace the API key with an environment variable reference, "${HONEYCOMB_API_KEY}"

config:
  exporters:
    otlp/honeycomb:
      endpoint: api.honeycomb.io:443
      headers:
        "x-honeycomb-team": "${HONEYCOMB_API_KEY}"

Second, get Kubernetes to populate that environment variable. The Helm chart supports this in a section called extraEnvs. Add this to values.yaml at the top level:

extraEnvs:
  - name: HONEYCOMB_API_KEY
    valueFrom:
      secretKeyRef:
        name: honeycomb-api-key-for-frontend-collector
        key: api-key

Here, you might choose a different name for your secret. Mine is explicit: “honeycomb-api-key-for-frontend-collector”.

Third, create the secret in kubernetes. From the command line, that’s:

kubectl create secret generic honeycomb-api-key-for-frontend-collector --from-literal=api-key=YOUR_API_KEY_HERE
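If you’d rather keep this declarative, here’s a sketch of an equivalent Secret manifest; the stringData field lets Kubernetes handle the base64 encoding for you. Keep this file out of git too, of course.

apiVersion: v1
kind: Secret
metadata:
  name: honeycomb-api-key-for-frontend-collector
type: Opaque
stringData:
  api-key: YOUR_API_KEY_HERE

Apply it with kubectl apply -f <filename>.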

Check that the secret exists with kubectl get secrets. (You might be surprised by the number of secrets in that list; helm creates one every time you upgrade the collector installation. Whatever, helm.)

Your secret might look like this: `honeycomb-api-key-for-frontend-collector Opaque 1 4m12s`

Deploy the new config by Iterating on configuration.

Before you send the test span again, change the trace ID inside it. Increment the last digit to make it different. When Honeycomb receives the same trace ID and span ID twice, it doesn’t know how to display that.

 "traceId": "71699b6fe85982c7c8995ea3d9c95df3",

Check that the new span arrived in Honeycomb (see “See your span in Honeycomb (common case)” above).

If you change your mind about which API key to use, see Troubleshooting: I want to change the API Key in my secret.

Good job, your YAML is safe again.

Step 11: Enable CORS

This step is technically optional, but you’ll need it to receive spans from a browser app. 

CORS is a browser protocol that tries to prevent unfriendly websites from hitting your backend. Before the browser sends a POST with trace data, it will send an OPTIONS request to ask permission. By default, the collector will respond like “Oh, no, I do not want any data except from my own domain of blahblah.us-east-1.elb.amazonaws.com:4318,” which is not useful at all.

To change this, pass CORS options to the OTLP receiver in the collector config. This makes the collector accept POSTs from anywhere:

config:
  receivers:
    otlp:
      protocols:
        http:
          cors:
            allowed_origins:
              - "*"

    jaeger: null
    prometheus: null
    zipkin: null

That doesn’t work for every front end. In the Honeycomb UI, our page content asks the browser to disregard OPTIONS responses for * (any site). Our site wants explicit permission. 

Here’s an example configuration that is more specific about where traces can come from:

config:
  receivers:
    otlp:
      protocols:
        http:
          cors:
            allowed_origins:
              - "http://localhost:8080"
              - "https://ui.honeycomb.io"

    jaeger: null
    prometheus: null
    zipkin: null
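You can check the CORS setup without a browser by sending a preflight OPTIONS request with curl; if the origin is allowed, the response should include an access-control-allow-origin header (hostname shortened as before):

curl -i http://blahblah.us-east-1.elb.amazonaws.com:4318/v1/traces -X OPTIONS -H "Origin: http://localhost:8080" -H "Access-Control-Request-Method: POST" -H "Access-Control-Request-Headers: content-type"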

I hope this helps.

Step 12: Get a little more specific (optional)

Here are a couple other tweaks that I recommend for values.yaml.

Specify the collector version. By default, you’ll get whatever the helm chart was most recently updated to use. It’s better to specify a tag, so that you know which one you’re getting. Pick the most recent one from the opentelemetry-collector Docker repository. Add it to your values.yaml like this:

image:
  tag: 0.55.0

Note that you do not want the “latest” tag. I’ve heard that the project maintainers opt not to move that with new releases.

Specify the processors in the pipeline. It’s explicit this way; you see the receiver, processors, and exporter. Relying on the chart’s defaults probably merges to the same effect, but YAML merging can be fragile. I prefer this:

    pipelines:
      traces:
        receivers:
          - otlp
        processors: [memory_limiter, batch]
        exporters:
          - otlp/honeycomb
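The chart’s default collector config already defines memory_limiter and batch, so naming them in the pipeline is enough. If you ever define them yourself, the shape is roughly this (the numbers are illustrative, not tuned recommendations):

config:
  processors:
    batch: {}
    memory_limiter:
      check_interval: 1s
      limit_percentage: 80
      spike_limit_percentage: 25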

Conclusion

Now we have an OpenTelemetry collector running in k8s, accepting OTLP traces over HTTP/JSON, and sending them to Honeycomb over gRPC.

Try sending traces to it from your client. Honeycomb has docs about how to do this from the browser. Also check OpenTelemetry.io, where there is a client for Swift on iOS and Java on Android, among many others.

Troubleshooting:

Did the collector receive my spans?

By default, the collector doesn’t log about normal operations. We can change that. (docs)

In the config section of values.yaml, add a telemetry section under service (meaning, the collector’s own telemetry):

config:
  receivers:
    jaeger: null
    prometheus: null
    zipkin: null
  service:
    telemetry:
      logs:
        level: "debug"
    pipelines:
      traces:
        receivers:
          - otlp
      metrics: null
      logs: null
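One caveat: the TracesExporter output below comes from the collector’s logging exporter, which sits in the chart’s default traces pipeline. If you already narrowed the exporters down to otlp/honeycomb in Step 8, a sketch like this adds the logging exporter back alongside it while you debug (the loglevel setting matches collector versions around 0.55; double-check against yours):

config:
  exporters:
    logging:
      loglevel: debug
  service:
    pipelines:
      traces:
        exporters:
          - otlp/honeycomb
          - logging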

Deploy the new config by Iterating on configuration. When you get a terminal tailing the logs (with kubectl logs -f), leave that open.

Next, send a test span.

The collector logs print something like this:

2022-07-07T22:21:40.042Z    INFO    loggingexporter/logging_exporter.go:43    TracesExporter    {"#spans": 1}
2022-07-07T22:21:40.042Z DEBUG loggingexporter/logging_exporter.go:52 ResourceSpans #0
Resource SchemaURL:
Resource labels:
-> service.name: STRING(test-with-curl)
ScopeSpans #0
ScopeSpans SchemaURL:
InstrumentationScope instrumentatron
Span #0
Trace ID : 71699b6fe85982c7c8995ea3d9c95df2
Parent ID :
ID : 3c191d03fa8be065
Name : spanitron
Kind : SPAN_KIND_CLIENT
Start time : 1970-01-01 00:00:00 +0000 UTC
End time : 1970-01-01 00:00:00 +0000 UTC
Status code : STATUS_CODE_OK
Status message :

Now you can tell more of what’s going on.

My pod didn’t restart after the upgrade.

If your upgrade did not modify the collector config, then maybe it didn’t need to restart the pod. For instance, changing the service type to LoadBalancer doesn’t need a pod restart.

For everything else: check the output of helm upgrade. Maybe there is an error message.

My pod stays in PENDING status forever.

Try:
kubectl describe pod <pod name>
This prints a lot more information, including why the pod is still pending.
In my case, the output included

Warning FailedScheduling 16s (x105 over 105m) default-scheduler 0/2 nodes are available: 2 Insufficient cpu, 2 Insufficient memory

All my nodes were full, so I added another one. Poof! Pod started!

My pod status is CrashLoopBackoff.

Something’s going wrong. Use kubectl logs to find out what.

Honeycomb returns an error about a missing dataset header.

Ah, you may be sending to a Classic environment. I recommend that you create a new environment, and then Honeycomb will create a dataset automatically. If you want to send to the old classic one, you’ll need an “x-honeycomb-dataset” header. Set that to a name of a dataset, like “my-favorite-k8s”.
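If you do need that header, it goes next to the team header in the exporter config; a sketch, reusing the exporter from Step 8:

config:
  exporters:
    otlp/honeycomb:
      endpoint: api.honeycomb.io:443
      headers:
        "x-honeycomb-team": "${HONEYCOMB_API_KEY}"
        "x-honeycomb-dataset": "my-favorite-k8s"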

The collector doesn’t respond at its URL; the connection hangs.

Is it the collector, or is it the load balancer?

Log in to the AWS console and check the health of your Elastic Load Balancer. Mine kept having no healthy instances, because the collector wasn’t responding to health checks (it said), because it was trying the wrong port, because I hadn’t disabled all the ports it wasn’t using.

Here’s how you can test the collector from inside the cluster.

Here’s a spell to open bash inside the cluster:

kubectl run test-pod --rm --restart=Never -i --tty --image ubuntu:20.04 bash

Then, you’ll need curl: apt update && apt install -y curl

Now you can try hitting the collector. But where is it? Ah, some k8s magic is here for you. Type env to list environment variables. 
COLLECTRON_OPENTELEMETRY_COLLECTOR_PORT_4318_TCP_ADDR contains an IP address!

Now try this: 

curl -i $COLLECTRON_OPENTELEMETRY_COLLECTOR_PORT_4318_TCP_ADDR:4318

You should get a 404 back, because that’s the wrong endpoint. It’s also the wrong method (we need POST), and it will want some data. Try this:

curl -i $COLLECTRON_OPENTELEMETRY_COLLECTOR_PORT_4318_TCP_ADDR:4318/v1/traces -X POST -H "Content-Type: application/json" -d '{}'

That will get you a 200 if the collector is working. Then you can be sure it’s a problem with ingress.
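Another option from inside that test pod is the service’s DNS name instead of the injected environment variable (this assumes the default namespace and my service name):

curl -i http://collectron-opentelemetry-collector.default.svc.cluster.local:4318/v1/traces -X POST -H "Content-Type: application/json" -d '{}'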

I can’t find my event in Honeycomb.

  • Check the collector logs for clues. For instance, with a bad API key, mine says 2022-07-08T16:33:35.114Z error exporterhelper/queued_retry.go:183 Exporting failed. The error is not retryable. Dropping data.{"kind": "exporter", "name": "otlp/honeycomb", "error": "Permanent error: rpc error: code = Unauthenticated desc = missing 'x-honeycomb-team' header", "dropped_items": 1}
  • Check that you’re looking in the environment that matches your API key. The environment selector is at the top left, just under the Honeycomb logo. Click it and choose “Manage Environments” to get a list of environments and the option to view their API keys. Then check those against the one you’re using.
  • In Honeycomb, if Recent Events doesn’t have your span, try looking at raw data. Click “New Query” on the left bar, then hit “Run Query.” (It’s a blank query.) That will take you to Raw Data. Is your event there? You can use browser search to find its trace ID.
  • If you’re trying to look at traces, watch out. If (like me) you sent the same span multiple times through curl, it sends the same trace_id over and over. That’s not how real traces work. Stick with Raw Data for this test.

I don’t remember what API Key I put in my secret.

Here’s a spell for you:

kubectl get secret honeycomb-api-key-for-frontend-collector -o jsonpath="{.data.api-key}" | base64 -d

I don’t remember what Honeycomb team this API key sends to.

I made an app for that: https://honeycomb-whoami.glitch.me

This calls the Honeycomb API to find out what team and environment that API key points to.

I want to change the API Key in my secret.

The easiest way to do this is: delete the secret, recreate the secret, and then restart the collector pod.

Delete the secret:
kubectl delete secret honeycomb-api-key-for-frontend-collector

Recreate the secret (my API key is in the APIKEY environment variable):
kubectl create secret generic honeycomb-api-key-for-frontend-collector --from-literal=api-key=$APIKEY

Next, find the collector’s pod name using kubectl get pods
and then delete the pod:

kubectl delete pod collectron-opentelemetry-collector-whatever-your-pod-name-is

Kubernetes will automatically restart the pod. See “Check that the collector is running” above.