AWS Lambda automatically monitors Lambda functions on your behalf, reporting metrics through Amazon CloudWatch. To help you troubleshoot failures in a function, Lambda logs all requests handled by your function and also automatically stores logs generated by your code through Amazon CloudWatch Logs.
You can insert logging statements into your code to help you validate that your code is working as expected. Lambda automatically integrates with CloudWatch Logs and pushes all logs from your code to the CloudWatch Logs group associated with the Lambda function, which is named /aws/lambda/<function name>.
AWS: Logging to CloudWatch (Python)
Your Lambda function can contain logging statements. AWS Lambda writes these logs to CloudWatch. If you use the Lambda console to invoke your Lambda function, the console displays the same logs.
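As a minimal sketch, anything written through Python's standard logging module inside the handler ends up in the function's CloudWatch Logs stream. The event shape and return value below are illustrative, not part of any particular API:

```python
import json
import logging

# In the Lambda Python runtime, the root logger is preconfigured to
# forward records to CloudWatch Logs; we only need to set a level.
logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    # Each of these statements appears in the CloudWatch Logs group
    # /aws/lambda/<function name> (and in the console's log output).
    logger.info("Received event: %s", json.dumps(event))
    logger.warning("Something worth flagging happened")
    return {"statusCode": 200}
```

Invoking the function from the Lambda console shows the same log lines directly in the console output.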
Cool Stuff in PostgreSQL 10: Auto-logging
We started off by creating a logging infrastructure, then arranging for a single table to use it.
Rather than repeat that work for each table, let’s use a relatively (though not completely) new feature:
EVENT TRIGGER
. The idea here is to fire a trigger on CREATE TABLE
and see to it that the new table gets logged. We’ll write the event trigger first, even though in reality we’d need to load the function it calls before creating the trigger.
CREATE EVENT TRIGGER add_logger
ON ddl_command_end
WHEN tag IN ('create table')
EXECUTE PROCEDURE add_logger();
COMMENT ON EVENT TRIGGER add_logger IS 'Ensure that each table which is not a log gets logged';
The magic happens inside
add_logger()
, but it’s magic we’ve already seen. First, we get the new table’s name and schema using pg_event_trigger_ddl_commands()
, filtering out tables which are themselves log tables. The test here is crude and string-based, but we could easily switch to a schema-based one.
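To make the steps above concrete, here is a sketch of what add_logger() might look like. It assumes the earlier logging infrastructure provided a generic trigger function (called log_generic() here, a hypothetical name) and that log tables follow a "_log" suffix naming convention; both assumptions stand in for whatever the original setup used:

```sql
CREATE OR REPLACE FUNCTION add_logger()
RETURNS event_trigger
LANGUAGE plpgsql
AS $$
DECLARE
    r RECORD;
BEGIN
    -- pg_event_trigger_ddl_commands() exposes each object the
    -- DDL command touched, including its schema-qualified name.
    FOR r IN SELECT * FROM pg_event_trigger_ddl_commands()
    LOOP
        -- Crude, string-based test: skip tables that are already logs.
        IF r.object_identity !~ '_log$' THEN
            EXECUTE format(
                'CREATE TRIGGER %I
                 AFTER INSERT OR UPDATE OR DELETE ON %s
                 FOR EACH ROW EXECUTE PROCEDURE log_generic()',
                'logger_' || r.objid::text,
                r.object_identity);
        END IF;
    END LOOP;
END;
$$;
```

Because the event trigger fires at ddl_command_end, the table already exists when the function runs, so creating a row-level trigger on it is safe at that point.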