# http4k
n
I have just started a very simple project to learn http4k. Is there any built-in logging? I ask because the app works fine when run from my PC but fails when running in a container. Is there a log somewhere?
d
There is absolutely zero logging in the http4k codebase. If you want to see if your HTTP handlers are running then you can use a debug filter to show the exact HTTP traffic.
```kotlin
import org.http4k.core.*
import org.http4k.filter.DebuggingFilters

fun main() {
    val app = { req: Request -> Response(Status.OK) }
    // prints the full request and response (including bodies) to stdout
    val decorated = DebuggingFilters.PrintRequestAndResponse().then(app)
    val response = decorated(Request(Method.GET, ""))
}
```
If you want to put logging in your server layer, then you can do that in whatever way is appropriate for your container.
More generally, we espouse the use of events over log frameworks: https://www.http4k.org/guide/howto/structure_your_logs_with_events/
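As a minimal sketch of what that events approach can look like (assuming the `org.http4k.events` types from http4k-core and the http4k-format-jackson module; the `IncomingRequest` event and its fields are illustrative, not taken from the thread):

```kotlin
import org.http4k.core.Method.GET
import org.http4k.core.Request
import org.http4k.core.Response
import org.http4k.core.Status.OK
import org.http4k.events.Event
import org.http4k.events.Events
import org.http4k.format.Jackson

// a strongly-typed event instead of a free-text log line
data class IncomingRequest(val uri: String, val status: Int) : Event

fun main() {
    // the simplest possible Events implementation: render each event as JSON on stdout
    val events: Events = { event: Event -> println(Jackson.asFormatString(event)) }

    val app = { req: Request ->
        Response(OK).also { events(IncomingRequest(req.uri.toString(), it.status.code)) }
    }

    app(Request(GET, "/hello"))
}
```

Each event comes out as a single JSON line on stdout, which a log collector can ingest without any parsing tricks.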
n
I see ... thank you very much David
d
np 🙂
a
Is the lack of an slf4j logging filter due to intention or apathy? (read: do you want one?)
d
There are no logging frameworks in the library on purpose. Because they are all evil and bad. 😉
a
Lol. Got it
d
We enjoyed not having to patch log4shell when it came out, and we look forward to not patching the next one 😉. TBH, we don't use logging frameworks at all if we can avoid it. `println` is your friend! 🙃
p
I don’t want to derail the question from Nicola here, but I gotta ask: how do you log with http4k? Is it structured logging as a JSON string to STDOUT, which you then interpret in e.g. Kibana? How do you handle different structures in different services? E.g. you might have a column “event type” in one service, but maybe not in the other. Edit: to rephrase it: what do you use to interpret those logs? Just the idea of being able to simply filter for “OrderCreatedEvent” is great, but the log dashboard has to interpret it somehow.
a
I have a bag of tricks in my utils lib to add SLF4J summary and error logging filters. To propagate trace ids, I have filters that integrate with MDC, and my logger is configured to write the MDC to each line. I need my logger to write to a file so that my EC2 daemon can ship them to CloudWatch Logs. From there, it's a simple matter of searching for patterns. We have so much throughput that we can't afford to send debug logs to CloudWatch regularly, so all we regularly send is a summary of the request and response code, plus any errors.
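A rough sketch of the kind of MDC filter described above (the `X-Trace-Id` header name and `traceId` MDC key are assumptions for illustration, not part of http4k or the utils lib mentioned):

```kotlin
import org.http4k.core.Filter
import org.http4k.core.Request
import org.slf4j.MDC
import java.util.UUID

// copy a trace id from an incoming header into SLF4J's MDC, so a pattern
// layout like %X{traceId} can print it on every log line for that request
val AddTraceIdToMdc = Filter { next ->
    { request: Request ->
        val traceId = request.header("X-Trace-Id") ?: UUID.randomUUID().toString()
        MDC.put("traceId", traceId)
        try {
            next(request)
        } finally {
            MDC.remove("traceId")   // don't leak ids across reused threads
        }
    }
}
```

It composes onto a handler with `then`, just like the debugging filter earlier in the thread.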
j
@Philipp Mayer I can't speak for Dave, but this is where coordination across service teams comes into play. If everyone agrees that the basic structure of events is the same, then dashboards become easy. You can make this part of the contract that a platform offers in order to take on a service / allow it to be run. Once everyone knows that the event name is "eventName", the event time is "eventTime", and severity, service, host etc. are all in common places in the message, building common dashboards becomes easier/possible. Then you standardise on particular events that all services must use for the same thing: an incoming request must emit IncomingRequest, server failed to start... same... and then ensure that business events across different services that mean the same thing are expressed the same way. You can call it enterprise architecture, or just refactoring so that everyone does less work 🙂
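As an illustration of such a contract (the envelope and field names here are purely hypothetical, a sketch of a cross-team agreement rather than anything http4k prescribes):

```kotlin
import java.time.Instant

// a shared envelope that every service agrees to emit; the agreed field names
// ("eventName", "eventTime", ...) are what make common dashboards possible
data class LogEvent(
    val eventName: String,                          // e.g. "IncomingRequest", "OrderCreated"
    val eventTime: Instant,
    val severity: String,
    val service: String,
    val host: String,
    val details: Map<String, Any?> = emptyMap()     // event-specific payload
)
```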
p
Thanks for the input @James Richardson! That’s of course the ideal scenario 🙂
j
I've seen this a few times... companies seem to think "oh great, just connect up our log4j logs to Datadog, or whatever, and all will be crystal clear". You need to be thinking about these logs clearly to derive actionable info, otherwise you're just shipping out a big mess and hoping. Sure, the search will be better, but what a waste! Changing to a common format, structured logging, using types, and relating logs to "business or infrastructure things that happened" will let you 100x the reward of your log collector, and probably/almost certainly save you some GDPR mess too... No more
`log.error("I am here")  // only error logs in prod`
Not a criticism of you, of course! This needs action across an org or area to be meaningful, and often it's very hard to get buy-in.
p
The thing is: I agree with all of this and would love to see it, but change in a big legacy system with ivory tower architects is rather hard. 🙂 Hence starting small in one part of the company and then showing the results is easier in my experience. That’s why I was asking how to configure the log collector on a per-service basis
j
You may have already tried, but there are a few levers that can be useful:
1) cost - unconstrained log output can be hugely expensive - you usually pay per byte.
2) cost of visibility - taking text output and reverse-engineering what happened from it, in dashboards, is hard and brittle. So much easier when log events correlate with actual real-world events. No need for crazy regex everywhere.
3) visibility cost - using different things everywhere means that getting an overall picture can be impossible, or unreasonably expensive to implement.
4) GDPR cost - unconstrained logging, relying on the toString of an object, means names, addresses, emails, IP addresses and more can be accidentally shipped to your log provider. Maybe you need to tell the regulator. Removing oopses from indexes is annoying, and costly.
... I could go on!
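To make point 4 concrete, a small hypothetical illustration (plain Kotlin, not values4k or http4k code): if a PII-carrying value masks itself in toString(), an accidentally interpolated log line can't leak it.

```kotlin
// wrap PII in a type whose toString() never exposes the raw value,
// so "$customer" in a log line stays safe; the raw value is only via .value
data class EmailAddress(val value: String) {
    override fun toString() = "EmailAddress(****)"
}

data class Customer(val id: Long, val email: EmailAddress)

fun main() {
    val customer = Customer(42, EmailAddress("nicola@example.com"))
    // prints: created: Customer(id=42, email=EmailAddress(****))
    println("created: $customer")
}
```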
p
Thanks for the thorough insights! I kind of pieced all of that together on my own (already looked at values4k e.g.), but great to have it summed up. Really looking forward to the talk.
m
No log levels at all. If you need these to see what is going on then you have kind of failed 🙂
What's the problem with log levels?