{ "version": "https://jsonfeed.org/version/1", "title": "Matt's Blog of Doom", "description": "", "home_page_url": "https://mattmofdoom.com", "feed_url": "https://mattmofdoom.com/feed.json", "user_comment": "", "author": { "name": "MattMofDoom" }, "items": [ { "id": "https://mattmofdoom.com/apps-and-library-updates-2023-edition/", "url": "https://mattmofdoom.com/apps-and-library-updates-2023-edition/", "title": "Apps and library updates (2023 Edition)", "summary": "I've been slack in maintenance and haven't had new feature ideas as I've been somewhat abstracted from development in my current role. Nonetheless, I have gone through and updated dependencies for all apps and libraries that make sense to update. The goal here is to keep things current, although I don't have much opportunity to test things like my various Seq apps. Nonetheless, all the Lurgle libraries are updated, and therefore apps that depend on them are also in turn updated. Here's the big long list of updated apps, below, with a couple of external apps not included that have…", "content_html": "
I've been slack about maintenance and haven't had new feature ideas, as I've been somewhat abstracted from development in my current role. Nonetheless, I have gone through and updated dependencies for all apps and libraries where updating makes sense. The goal here is to keep things current, although I don't have much opportunity to test things like my various Seq apps. All the Lurgle libraries are updated, and apps that depend on them are in turn updated.
\nHere's the big long list of updated apps below; a couple of external apps aren't included, but have had their existing pull requests updated. Note that you should definitely test the Seq apps before using them in production: I have updated their dependencies and let them run through the CI/CD pipeline, but I simply have not tested them in any version of Seq. The apps that run as services outside of Seq are almost certainly fine to use - this caveat applies only to apps hosted within Seq itself.
\nThe Lurgle libraries continue to maintain existing compatibility, which means that for some framework versions, dependencies will stay at older versions to allow their continued use. I don't have Visual Studio 2022, so I have not added support for .NET 6.0 and beyond.
\nExcept for fixes in the dev build of Seq.Client.EventLog resulting from a pull request, there are no new features, although I have updated code where dependencies changed their conventions in the interim.
\nApp | Description
---|---
Seq.App.EventTimeout | Event Timeout for Seq
Seq.App.EventThreshold | Event Threshold for Seq
Seq.App.EventSchedule | Event Schedule for Seq
Seq.App.OpsGenieHeartbeat | OpsGenie Heartbeat for Seq
Lurgle.Logging | Standardised Serilog implementation with extra goodies!
Lurgle.Alerting | Standardised FluentEmail implementation with extra goodies!
Lurgle.Transfer | Standardised SSH.NET, FluentFTP, and SMBLibrary implementation with extra goodies!
Lurgle.Dates | Standardised common date library for date parsing, expressions, and tokens!
Seq.Client.Reporter | Seq Reporter - Email scheduled reports using queries from your Seq structured logs
Seq.Client.EventLog (Dev build) | Enhancement to Seq.Client.EventLog that dynamically processes Windows event logs and sends them to Seq with all Windows event log properties as properties within structured events
Seq.App.EmailPlus-Enhanced | Likely orphaned as a result of the official app being updated, but maintained for anyone continuing to use it
Seq.Client.WindowsLogins | Look for successful interactive console and RDP logins and send them to Seq for alerting
Mailbox Reporter | Windows service that collects email metadata from configured on-premise Exchange mailboxes and loads them to a SQL database for reporting purposes
NLB Manager | Windows service that automatically manages Windows network load balancing (NLB) based on whether a service is started or stopped
", "author": { "name": "MattMofDoom" }, "tags": [ "Windows Logins", "Updates", "Seq.App.EmailPlus-Enhanced", "Seq", "Reporter", "OpsGenie", "NLB Manager", "Mailbox Reporter", "Lurgle.Transfer", "Lurgle.Logging", "Lurgle.Dates", "Lurgle.Alerting", "Lurgle", "Heartbeat", "EventX Trilogy", "EventLog", "Event Timeout", "Event Threshold", "Event Schedule", "C#", "Apps" ], "date_published": "2023-07-04T12:47:54-07:00", "date_modified": "2023-07-04T13:01:38-07:00" }, { "id": "https://mattmofdoom.com/something-something-adventure-update/", "url": "https://mattmofdoom.com/something-something-adventure-update/", "title": "Something something adventure update", "summary": "I've been very quiet on the blog front for some time, and I admit my updates to software have been lacking. I've been on this adventure for a year now, and it's occupied a lot of my attention. I'd hate, however, to leave this blog neglected and unloved indefinitely! So an update ... I love this adventure. I've been really enjoying Seattle, my team is awesome, and in a year I've progressed from landing with essentially nothing to buying my second car and own home, thanks to very carefully building credit score and taking opportunities as they arose! The adventure…", "content_html": "
I've been very quiet on the blog front for some time, and I admit my updates to software have been lacking. I've been on this adventure for a year now, and it's occupied a lot of my attention. I'd hate, however, to leave this blog neglected and unloved indefinitely!
So an update ... I love this adventure. I've been really enjoying Seattle, my team is awesome, and in a year I've progressed from landing with essentially nothing to buying my second car and own home, thanks to very carefully building credit score and taking opportunities as they arose!
The adventure is not without its challenges. I didn't note it before, but I broke my leg early in the rush to get here, and it never healed properly. Hobbling around Seattle on foot was painful at best, and when my wife visited for the first time, it was a deciding factor in buying the first car, although it simply wasn't big enough to accommodate my family when they are here. As to the leg, well, I have a cane to walk and a car to drive, and it's likely as healed as it will ever get. There was no option for surgery, although perhaps I can revisit that down the track. I'm incredibly privileged to have outstanding healthcare here, and although I'm conscious that that's not uniformly the case across the American health system, I certainly find my overall access to healthcare is far better than in Australia.
Being away from my family is difficult, but they have visited progressively over the year - which is cool, it's been a chance to open their eyes to the possibilities that lie beyond staying in the same place for your entire life! My wife and two of our children will finally relocate later this year, and I can't wait! Time zones are the most difficult thing to navigate, and it means I often only have a few hours a day to connect with my wife and kids... but we do manage.
Being at Amazon has been an incredible privilege so far, and I have a great sense of pride in the very visible and meaningful impact that my efforts have. There have been highs and lows to navigate and lead my team through, but overall I think I'm doing okay.
I do have a few updates to make to my libraries and apps - various dependencies and at least one pull request to merge - which I hope to get to soon. I hope anyone reading this will understand that it's taken a bit of a backseat as I navigated a new life in a new country with a new job at a whole new scale!
Note: Opinions are my own. I make no representation on behalf of any other entity.
\nUpdate: In due fairness to Unisys, and as mentioned in a reply from them, I emailed notifications@unisys.com rather than notification@unisys.com, hence the bouncing email that I originally referred to. My mistake there. I have proactively corrected the record below, of my own volition.
\nSo data breaches seem to be all the rage nowadays. They happen. I've stood by without comment on a number, including the Medibank breach that my data seemingly may have been wrapped up in. I've largely respected Medibank's effort towards transparency in that. I do not consider myself to be an expert in cyber security, nor in data breaches, and I'm certainly not in the habit of calling out former employers as a rule. There are still people working at my former employer who I'd like to work with again. But the rule, in this case, seems to need an exception.
\nUnisys this week disclosed (via email) to former associates (ex-employees) that their data has been affected by a breach, by virtue of being improperly stored in a location accessible to the public. This data was confirmed stolen, according to the email. And here is where my problem comes in.
\nThe email is a case study in crisis comms, run through PR and legal for the purpose of watering down and de-emphasising the message. \"Your data may have been breached\" is certainly the most important message. Following on from that ... a statement about how it does not create any material risk, and a distinct and nearly complete lack of next steps or action???
\nSo - let me assure you that Unisys internal security training does not make any distinction between full name, personal email, personal phone numbers, or \"more sensitive\" personally identifiable information (PII). All of it is bad, and there absolutely is no basis for Unisys to make a determination of what risk it poses to the people exposed. I'm still fuming on that.
\nMore seriously, \"contact us if you want to find out if your data was disclosed\" is the underlying call to action for the email and attached FAQ. Completely unacceptable. So I replied with what seemed like some reasonable steps that certainly seem to model responses I've seen such as Medibank's comms, as well as some good practices that could help to restore some trust.
\nInitially I mistakenly asserted that notification@unisys.com was bouncing. That was incorrect; I had been messaging notifications@unisys.com without noticing. My bad, and Unisys have also added notifications@unisys.com as an email alias - but I remain in disagreement that former associates should need to email to find out whether - and what - their data was breached.
\nThere's a clear obligation towards the various jurisdictions in which Unisys operate their business, to their customers, and to the people whose data was exposed - regardless of how \"minor\" Unisys consider this to be. I still maintain that the reasonable steps in my original reply should be taken to restore some level of trust.
\nDisclaimer: The email reproduced below contains information that was sent to external recipients (including myself) by Unisys, who I am no longer associated with. The confidentiality of this information relates to the disclosure of personal data - very likely my own, based on context - which should reasonably be considered appropriate for the recipient(s) to disclose and express opinions about. I do not have any further information or details on the breach beyond what has been communicated. It is unlikely that the disclaimer at the end of this email trail applies to my post, but I would clarify that this post is entirely personal opinion.
\n(Note - Incorrect replies re bounce removed, in fairness to Unisys)
\nFrom: Matt
Sent: Thursday, 3 November 2022 13:42
To: ~Mat Newfield, Chief Information Officer <MatNewfieldCIO@unisys.com>; notifications@unisys.com <notifications@unisys.com>
Subject: Re: Incident Notice
Hi,
\nAs you should be well aware, data such as full name, personal phone, and personal email are PII and absolutely do create material risk of identity theft and fraud by virtue of being personally identifiable information. At best, it is disingenuous to suggest otherwise, and \"you should be vigilant against phishing\" is not enough. Equally, it is not on associates to contact you to find out; you must notify affected parties and disclose the specific data that was contained in their record. \"We do not believe\" is filled with good intentions, but it's not up to you to decide if this could harm someone.
\nSo - since you have not set out clear next steps, allow me to strongly suggest some:
\nPlease note that I deeply understand that data breaches can happen, but I am concerned with your approach and am reaching out to offer you the opportunity to correct it.
\nThanks,
\nMatt
\nFrom: ~Mat Newfield, Chief Information Officer
Sent: Thursday, November 03, 2022 10:33
Subject: Incident Notice
\n
It's been a while since I've added functionality to Lurgle libraries, but I played around with adding AWS Cloudwatch support to Lurgle.Logging. This affords further opportunities to use Lurgle.Logging as a common logging library with all its 'baked in' benefits. To do this, I leveraged the 'official' AWS.Logger.Serilog sink. Puzzlingly, it looks like this might not support use via proxy servers, but otherwise I'm quite happy with it.
\nAs with all of the logging targets, there are two ways to readily configure this:
\n1. Using app.config values to enable the Aws sink and then configuring the relevant values. As a rule, LogAwsLogGroup and LogAwsRegion are mandatory.
\n\n<add key=\"LogType\" value=\"Console,File,Aws\" />
<add key=\"LogLevelAws\" value=\"Verbose\"/>
<!--== AWS Cloudwatch configuration-->
<add key=\"LogAwsProfile\" value = \"\" />
<add key=\"LogAwsProfileLocation\" value=\"\"/>
<!-- Only use AWS Key and Secret for testing - best practice to use the AWS Profile -->
<add key=\"LogAwsKey\" value=\"\" />
<add key=\"LogAwsSecret\" value=\"\" />
<!-- Required to direct logs to the correct log group and region -->
<add key=\"LogAwsLogGroup\" value=\"Blah\" />
<add key=\"LogAwsRegion\" value=\"us-east-1\" />
<add key=\"LogAwsCreateLogGroup\" value=\"true\" />
<!-- Optional stream prefix and suffix-->
<add key=\"LogAwsStreamPrefix\" value=\"\"/>
<add key=\"LogAwsStreamSuffix\" value=\"\"/>
2. Using a constructor
\nLogging.SetConfig(new LoggingConfig(Logging.Config, logAwsLogGroup: \"Blah\", logAwsRegion: \"us-east-1\", logAwsCreateLogGroup: true));
\nAs a rule of thumb, LogAwsLogGroup and LogAwsRegion should be treated as mandatory.
\nYou should use the LogAwsProfile method of providing credentials per Configure AWS credentials - AWS SDK for .NET (amazon.com), rather than the LogAwsKey/LogAwsSecret keys. These configuration keys exist only for testing purposes - don't use them in production.
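\nFor reference, the shared credentials file that a profile points at (by default the credentials file under the .aws folder in your user profile) looks like the following - the profile name and values here are placeholders, not anything from my setup:
[default]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx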
\nFor output, I diverged from the compact JSON format commonly used with Serilog, and used a rendered JSON format that results in output like this:
\n(Screenshot: rendered JSON log output in Cloudwatch)
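\nFor illustration only (this is an invented event, not the screenshot's contents), a rendered JSON event is shaped roughly like this, with the rendered message sitting alongside the structured properties:
{ \"Timestamp\": \"2022-09-19T08:35:57.123-07:00\", \"Level\": \"Information\", \"MessageTemplate\": \"Processed {Count} items\", \"RenderedMessage\": \"Processed 42 items\", \"Properties\": { \"Count\": 42, \"MethodName\": \"Main\", \"LineNumber\": 10 } }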
This preserves the structured properties that are output by Lurgle.Logging, but accommodates the lack of message template coverage in Cloudwatch.
\nThis means Cloudwatch can return the properties in Logs Insights, for example:
\n(Screenshot: Logs Insights query returning the structured properties)
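\nAs a sketch of what this enables (the field names assume the rendered JSON layout shown above), a Logs Insights query against these events might look like:
fields @timestamp, Level, RenderedMessage, Properties.MethodName
| filter Level = \"Error\"
| sort @timestamp desc
| limit 20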
These properties should then carry through to any functionality that can leverage them.
\nThis is a fairly straightforward addition that seems to work quite happily. We can iterate further on this if it proves necessary - but I'm certainly happy to have this up and running.
\nThis is available on Nuget now!
\n\n", "author": { "name": "MattMofDoom" }, "tags": [ "Updates", "Serilog", "Lurgle.Logging", "Lurgle", "Cloudwatch", "C#", "Apps", "AWS" ], "date_published": "2022-09-19T08:35:57-07:00", "date_modified": "2022-09-19T08:53:35-07:00" }, { "id": "https://mattmofdoom.com/lurgletransfer-updated-to-address-underlying-cve-other-updates/", "url": "https://mattmofdoom.com/lurgletransfer-updated-to-address-underlying-cve-other-updates/", "title": "Lurgle.Transfer updated to address underlying CVE, other updates", "summary": "I haven't had a lot of time lately to work on the various apps and libraries I've created or contributed to, but a vulnerability in SSH.NET needed attention in Lurgle.Transfer. This vulnerability was noted here, and required an update to dependencies. That's done now, with all other dependencies updated. The newest version of the FluentFTP library required some code changes as a result of references being renamed. As a rule I like to keep my dependencies up to date, and that's one reason why Lurgle.Alerting forks some underlying FluentEmail renderers and the MailKit sender. That allows us to keep up…", "content_html": "
I haven't had a lot of time lately to work on the various apps and libraries I've created or contributed to, but a vulnerability in SSH.NET needed attention in Lurgle.Transfer. This vulnerability was noted here, and required an update to dependencies. That's done now, with all other dependencies updated. The newest version of the FluentFTP library required some code changes as a result of references being renamed.
\nAs a rule I like to keep my dependencies up to date, and that's one reason why Lurgle.Alerting forks some underlying FluentEmail renderers and the MailKit sender - it lets those components stay current too.
\nI spent a bit of time this weekend going through most of the libraries, Seq apps, and client apps & services to make sure they were up to date with the latest versions of my own and third-party libraries. No other vulnerabilities were seen, but it's not a bad practice to be up to date.
\nYou can check the Apps page to see if a newer version is available, but of course as most of the apps are either libraries or Seq apps, Nuget will readily expose the latest versions.
\n", "author": { "name": "MattMofDoom" }, "tags": [ "Updates", "Seq.App.EmailPlus-Enhanced", "Seq", "SSH.NET", "Reports", "Reporter", "OpsGenie", "NLB Manager", "NLB", "Lurgle.Transfer", "Lurgle.Logging", "Lurgle.Dates", "Lurgle.Alerting", "Heartbeat", "FluentFTP", "EventX Trilogy", "EventLog", "Event Timeout", "Event Threshold", "Event Schedule", "Apps" ], "date_published": "2022-08-14T16:46:45-07:00", "date_modified": "2022-08-14T16:46:45-07:00" }, { "id": "https://mattmofdoom.com/an-unexpected-journey/", "url": "https://mattmofdoom.com/an-unexpected-journey/", "title": "An Unexpected Journey", "summary": "I've been quiet on the blog front lately, because an unexpected but exciting adventure landed in my lap in late January. I was approached by Amazon for a role, and I freely admit I initially thought the email was spam or phishing! After realising it was very much real, I jumped on the opportunity with gusto. Through the month of February, I went through the interview process for a role that was initially Sydney based. This entailed a couple of screening interviews, along with the Amazon Loop interview, also known as the onsite interview. That is a series of interviews…", "content_html": "
I've been quiet on the blog front lately, because an unexpected but exciting adventure landed in my lap in late January. I was approached by Amazon for a role, and I freely admit I initially thought the email was spam or phishing! After realising it was very much real, I jumped on the opportunity with gusto.
\nThrough the month of February, I went through the interview process for a role that was initially Sydney based. This entailed a couple of screening interviews, along with the Amazon Loop interview, also known as the onsite interview. That is a series of interviews over one or two days, which is now performed as an online video call.
\nI suppose I didn't really have the ego to think that I would be the type of person that Amazon might want, so at each stage of the interview I waited for the inevitable rejection email - which never happened. I kept going through to the next stage! Frankly, I really enjoyed the Loop, I liked the people and really enjoyed the conversations and the challenges posed. I won't say that I felt that I did great on every aspect - it's an interview and nerves can get in the way. I will say that once I realised that I needed to get out of my own way, I generally felt better about how I did.
\nTo my surprise, I was told that I had an 'inclined to offer' result. I did not, however, get offered the Sydney role. Being unused to the Amazon process, I somewhat misunderstood this outcome until it was explained that I did, in fact, have a job at Amazon - I just didn't get the team that I'd interviewed for because they'd gone with another successful candidate, and we only needed to find the team that I'd like to work with.
\nIt took surprisingly little time to go through other roles and locations, and we landed on a role that was open in Seattle. I had a catchup with the hiring manager scheduled, and it was only part way through that discussion that I realised that it wasn't me convincing him on joining the team - it was him convincing me! I really didn't have any trouble in being convinced - it was an exciting role in a cool part of Amazon that I decided I'd really like to work in.
\nAlthough it meant relocating to Seattle by myself for some time, with my family staying in Australia - with a number of our children completing studies, moving into tertiary education, and employment - it was an amazing opportunity and incredible adventure that I jumped into with both feet.
\nRelocation has its challenges, and so through March and April I navigated these and jumped through many hoops. The final challenge was locking in my visa to allow me to get to the USA, and given the backlog and difficulty in securing Australian appointments, I was fortunate to be able to secure an appointment in Singapore, with a little help from a friend along the way.
\nAside from a small hiccup with Covid, I was able to get to the USA in May and dive into my role. I'm loving Seattle, the job, and the team. There's a lot to learn and get to grips with, but fundamentally I've landed in a role that engages me and that I'm excited to get out of bed for!
\nIt certainly means my attention has been somewhat diverted from my open source efforts, but I don't intend to abandon it. Bear with me 😁
", "author": { "name": "MattMofDoom" }, "tags": [ "MattMofDoom", "Adventure" ], "date_published": "2022-06-19T14:05:52-07:00", "date_modified": "2022-11-06T15:52:18-08:00" }, { "id": "https://mattmofdoom.com/seqappemailplus-enhanced-available/", "url": "https://mattmofdoom.com/seqappemailplus-enhanced-available/", "title": "Seq.App.EmailPlus-Enhanced available!", "summary": "Email+ (aka HTML Email) I have had some pull requests open for a while for Seq.App.EmailPlus, which are quite large in scope and degree of change. This is because they are transformational in nature - adding fallback mailhosts, delivery using DNS resolution, fine-grained TLS control, enhanced logging, and adding a number of new envelope options - CC, BCC, ReplyTo, and Priority. It also allows specifying a plain text alternative body, which is useful for mail recipients who (for whatever reason) can't display HTML. I previously incorporated much of this into Lurgle.Alerting, because it makes this a full featured mail library…", "content_html": "I have had some pull requests open for a while for Seq.App.EmailPlus, which are quite large in scope and degree of change. This is because they are transformational in nature - adding fallback mailhosts, delivery using DNS resolution, fine-grained TLS control, enhanced logging, and adding a number of new envelope options - CC, BCC, ReplyTo, and Priority. It also allows specifying a plain text alternative body, which is useful for mail recipients who (for whatever reason) can't display HTML.
\nI previously incorporated much of this into Lurgle.Alerting, because it makes this a full featured mail library that can provide full features without needing to code them for individual apps. This was also a useful way to test the functionality outside of Seq.App.EmailPlus, to ensure that everything worked \"as advertised\".
\nThe pull requests are taking some time for the team to get to - understandably so, given their size. I finally hit production use cases for the enhancements though - we needed reliability in mail delivery (using the fallback mail hosts), priority, a reply to address, and as a nice-to-have, using the CC field.
\nThis pushed me to make a dev build available via Nuget, as \"HTML Email Enhanced Edition\".
\n\n
Features and Enhancements:
\n- Fallback mail hosts
- Delivery using DNS resolution
- Fine-grained TLS control
- Enhanced logging
- New envelope options: CC, BCC, ReplyTo, and Priority
- Optional plain text alternative body
Let's dig into this a bit further.
\nThe first set of enhancements for Email+ revolved around SMTP delivery - allowing for fallback mail hosts in the event that the primary SMTP host is unavailable, and allowing for delivery using DNS resolution of remote mailhosts (looking up the MX record for destination domains).
\nThe DNS delivery option also acts as a final fallback to the mail host if both are configured.
\nFollowing on from this, I integrated the ability to specify CC, BCC, and ReplyTo fields. These are useful to have available - I had a specific case where ReplyTo was essential - and I've made them support Handlebars syntax like the To field already did.
\nI also enhanced the TLS configuration. Did you know Seq supports dropdown lists if a field is an enum? I didn't, until Nick Blumhardt pointed it out ... but it made this an absolute cinch. I can simply list all options available from the underlying MailKit library - which allows resolving a known issue with the in-dev builds of Email+ that don't like self-signed or invalid certificates, because you can always specify None or another suitable behaviour.
\n\nThe next addition was the option for an alternate plain text template. This is a really straightforward addition. It's generally considered that well structured HTML emails will include an alternate plain text body. This makes it easy to do, by supporting Handlebars in the same way as the normal HTML body. If it's not configured - it's not used.
\n\nFinally, I got a bit funky with adding email priority. Here was an opportunity to allow a limited degree of interoperability with other apps, by optionally allowing a mapping between a structured property and the email priority, with a fallback/default priority if that mapping fails. This is a similar Key=Value comma-delimited string as has been implemented with apps such as Seq.App.OpsGenie and Seq.App.Atlassian.Jira, along with the EventX Trilogy apps.
\nObviously you can still specify just a static Priority (High, Normal, or Low) in the Email Priority or Property Mapping field. It will default to Normal if not configured.
\nBut if you want to get fancy - here's a sample of mapping @Level to email priority!
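\nAs an illustration only (the original screenshot hasn't survived here, so the exact field syntax may differ), a mapping in the Key=Value comma-delimited format could pair @Level values with priorities like so, with anything unmatched falling back to the default priority:
Error=High,Warning=Normal,Information=Low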
\n\nWho could forget the logging enhancement! It puzzled me that Email+ didn't actually log the fact that it sent an email, and I suspect in at least some cases that it may not actually log failures - I've only observed that in production where some emails simply didn't send at all, and I had no way to trace back to what happened. This should always be logged, and better yet - with good structured properties that tell you exactly what's going on! Failed to send after multiple tries? We'll keep all those errors. What was the email envelope and body? Got you covered.
\nThis builds in the ability to monitor and alert on emails from Seq - something I've found incredibly valuable with all the other capabilities I've built in our implementation. You should see some of my cool dashboards!
The main purpose for publishing this was both to allow testing of the features while the Email+ pull requests are pending review, and to solve a real-world problem that I was facing. If you'd like to test the HTML Email Enhanced Edition, feel free - you can install it to Seq using the Nuget package ID Seq.App.EmailPlus-Enhanced, and it will happily run alongside an existing Seq.App.EmailPlus install! And, of course - happy to get any feedback!
", "author": { "name": "MattMofDoom" }, "tags": [ "TLS", "Structured logging", "Seq.App.EmailPlus-Enhanced", "Seq", "MailKit", "Handlebars", "Email", "DNS", "C#", "Apps" ], "date_published": "2022-02-22T15:59:46-08:00", "date_modified": "2022-02-22T16:32:14-08:00" }, { "id": "https://mattmofdoom.com/lurglelogging-now-supports-splunk/", "url": "https://mattmofdoom.com/lurglelogging-now-supports-splunk/", "title": "Lurgle.Logging now supports Splunk!", "summary": "Lurgle.Logging until now has supported just the File, Windows Event Log, Seq, and Console log types. It was always the intent to extend this to other log types to support the overall intent of Lurgle - accelerating and enhancing structured logging in your projects by leveraging the excellent work of the Serilog community. I hadn't got around to adding Splunk, because Seq fulfilled our needs for structured logging. I love Seq, but if you already have Splunk, you can readily send structured logs here via a HTTP Event Collector configured for Json. If you had both, you could send to…", "content_html": "
Lurgle.Logging until now has supported just the File, Windows Event Log, Seq, and Console log types. It was always the intent to extend this to other log types to support the overall intent of Lurgle - accelerating and enhancing structured logging in your projects by leveraging the excellent work of the Serilog community.
\nI hadn't got around to adding Splunk, because Seq fulfilled our needs for structured logging. I love Seq, but if you already have Splunk, you can readily send structured logs there via an HTTP Event Collector configured for JSON. If you had both, you could send to both simultaneously!
\nThis is a straightforward addition. All it really took was setting up a Splunk instance on my machine for testing, and a small amount of code.
\nMaking use of Splunk as a log type is just as easy. If you're using app.config, you can simply specify:
\n\n<add key=\"LogSplunkHost\" value=\"https://splunk.domain.com/services/collector\" />
<add key=\"LogSplunkToken\" value=\"yourtokenhere\" />
and of course via a constructor:
\nLogging.SetConfig(new LoggingConfig(Logging.Config, logSplunkHost: \"https://splunk.domain.com/services/collector\", logSplunkToken: \"yourtokenhere\"));
\nAs with the other log types, there is a LogLevelSplunk setting that allows you to set the minimum log level for the Splunk sink.
\nI have set the Splunk sink to use the same proxy settings as Seq - so even if you don't use Seq, if you need to set proxy settings for your application to reach Splunk, use the LogSeqUseProxy, LogSeqProxyServer, etc. values.
\nThis is available on Nuget now!
I've recently worked to update dependencies in almost all of my apps, so there are quite a few new versions to check out. Many of these depend on one or more of the Lurgle libraries, so it's logical that when I update the libraries, I also update the apps.
\nAfter posting this, Lurgle.Logging was also updated with a quick enhancement that adds a new configuration, IncludeSourceFilePath. When this boolean config is set to False, the automatically captured source file path is filtered down to just the filename.
This can be set in the app.config or via the constructor, and will default to True if not set.
This is particularly useful if you want to include the SourceFile property in file logging!
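\nA quick sketch of both configuration styles, mirroring the patterns from earlier posts (the constructor parameter name here is my assumption, matching the config key):
<add key=\"IncludeSourceFilePath\" value=\"false\" />
Logging.SetConfig(new LoggingConfig(Logging.Config, includeSourceFilePath: false));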
In the case of Lurgle.Transfer - I have added SMB1, SMB2, and SMB3 as TransferMode options, using the cross-platform SMBLibrary for client access to Windows/Samba shares. This means that Windows integrated authentication isn't available (at least at this stage) - but it also means that this should work for non-Windows implementations.
\nThe addition of SMB to Lurgle.Transfer means that there is more capability around chaining multiple transfers. For example - download some files from a Windows share, download other files from SFTP, and then upload all files via FTP.
\nAs this is abstracted behind the FileTransfer.Connect(), FileTransfer.ListFiles, FileTransfer.DownloadFile(s), and FileTransfer.SendFile(s) calls, you can reliably process SFTP, FTP, or SMB transfers with the same calls.
\nTransferMode.Smb2 and TransferMode.Smb3 are functionally the same when calling SMBLibrary; both are provided to reflect that SMB3 shares are totally supported as well.
\nThe library supports the legacy NetBIOS over TCP (port 139) and the more common SMB over IP (port 445) transports, controlled using the existing port configuration. For the purpose of the Lurgle.Transfer implementation, an invalid port initially defaulted to NetBIOS over TCP, but after further consideration, I've just pushed an update that makes it default to SMB over IP.
\nA sample implementation for an upload to \\\\SERVERNAME\\Upload\\smbtest using the constructor config method would be:
\n\nvar destinations = new Dictionary<string, TransferDestination>
{
{\"Test\",
new TransferDestination(name: \"Test\", transferType: TransferType.Upload,
transferMode: TransferMode.Smb3, destination: \"Test\", authMode: TransferAuth.Password, server: \"SERVERNAME\", port: 445, userName: \"FtpTest\", password: \"Password1\",
remotePath: \"upload\\smbtest\", sourcePath: \"C:\\\\Transfer\\\\Upload\", doArchive: true, archivePath: \"C:\\\\Transfer\\\\Archive\", archiveDays: 30)}
};
Transfers.SetConfig(new TransferConfig(transferDestinations: destinations));
Transfers.Init();
var ftransfer = new FileTransfer(destinations[\"Test\"]);
var files = Files.CompressFiles(ftransfer.TransferConfig, CompressType.Gzip);
ftransfer.Connect();
ftransfer.SendFiles(files.DestFiles, true, true);
ftransfer.Disconnect();
Files.DeleteCompressedFiles(ftransfer.TransferConfig, CompressType.Gzip, files.DestFiles);
Files.CleanArchivedFiles(ftransfer.TransferConfig);
Files.ArchiveFiles(ftransfer.TransferConfig);
As with all other transfers, the result of each operation can be captured and examined for monitoring, alerting, and logging.
\nFor Lurgle.Alerting, I examined the upstream dependencies from FluentEmail, and found that some were quite out of date, with pending pull requests needed to permit dependency updates where the code required changes for Liquid templates.
\nTo bring all upstream dependencies up to date, I created a fork of FluentEmail and merged the required pull requests so that I could create a coherent build with up to date dependencies. The intent isn't to publish the separate fork, though. As I had previously done with FluentEmail.Handlebars to permit supporting other framework versions, I integrated the code for the Razor and Liquid renderers and the MailKit sender directly into Lurgle.Alerting.
\nThe intent isn't to diverge from the FluentEmail codebase, so I expect to revert to using it in future once the necessary changes are merged. I plan to send a fresh pull request with the dependency updates to FluentEmail for review.
\nThis does mean, however, that Lurgle.Alerting is now able to leverage the current releases of upstream dependencies such as RazorLight, Fluid, and MailKit.
\nAs we rolled around to February, I couldn't help but notice that Lurgle.Dates had a weakness if a specific day (eg. 31) was passed in a shorter month to the Dates.GetUtcDaysOfMonth or Dates.GetDaysOfMonth methods. The private GetDayType method would attempt to parse an invalid date and throw an exception, rather than simply returning no match.
\nThis is typically an edge case since you can just pass \"last\" as a date expression if you want the last day of the current month included.
\nThis now behaves as expected - if you're using a day expression with a specific day (eg. 31) in February, GetDaysOfMonth will return without this invalid day.
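\nFor the curious, the guard amounts to the standard .NET days-in-month check - a minimal sketch of the idea (not Lurgle.Dates' actual code):
private static bool IsValidDayOfMonth(int year, int month, int day)
{
    // DateTime.DaysInMonth(2023, 2) == 28, so a configured day of 31 is
    // skipped for February rather than parsed into an invalid date.
    return day >= 1 && day <= DateTime.DaysInMonth(year, month);
}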
\nIt was while performing dependency updates for the three EventX apps that I noted the weakness in Lurgle.Dates, specifically via the unit tests for Event Schedule. This meant, of course, that an update to all three would be needed to incorporate both the dependency updates and consideration around the changed behaviour in Lurgle.Dates.
\nEssentially - the Seq config for Event Timeout, Event Threshold, and Event Schedule will permit you to pass 31 as an \"include\" day, but because the 31st of February is invalid, this would previously have resulted in an application error. Following the Lurgle.Dates change, if the only configured include day was 31, then in months where the 31st was invalid the apps would have assumed \"ALL\" days should execute, rather than \"NONE\".
\nI changed the behaviour so that it will now detect that a value was configured in the Include Days of Month setting, but that no days of month were returned. If this should happen, the EventX apps will now calculate a day of month that falls in the past, so that the expected behaviour occurs (eg. the schedule will not execute within the current month).
\nThe apps page has all of my apps, most of which have had simple dependency updates, and the Seq apps will automatically show up as having updates in Seq - but for the apps called out in this post, here are some handy links!
\n\n
- Lurgle.Transfer
- Lurgle.Alerting
- Lurgle.Dates
- Lurgle.Logging
- Seq.App.EventTimeout
- Seq.App.EventThreshold
- Seq.App.EventSchedule
I mentioned some time back that I had developed an internal app for critical file transfers. It's quite a useful application that has developed over time to support most of our critical file transfers. The functionality that it uses has been predominantly based around SFTP transfers using SSH.NET, and it has a number of capabilities, including:
\nThis internal application used a library that I had created to handle a majority of these functions. That library was well suited to become a 'common' transfer library for use in our other applications.
\nWhen I started work on the necessary conversion to create Lurgle.Transfer, I also reviewed older code that had been the predecessor of the transfer app. This code included a capability of configuring either FTP (using FluentFTP) or SFTP connections. Although SFTP using SSH is far preferable to FTP, it's true that there are times that FTP is the lowest common denominator and you need it. An example might be downloading from a third party FTP site and then uploading via SFTP over SSH.
\nHence - apps using Lurgle.Transfer can be configured to use either FTP or SFTP connections. The library abstracts the transfer methods so that the same calls work for both FTP and SFTP, and return coherent results in either case.
\nAs the PDF conversion functionality uses a licensed library, and is really outside the scope of requirements for Lurgle.Transfer, I haven't included this in the conversion.
\nLurgle.Transfer uses a prescriptive approach for configuration using the TransferConfig and TransferDestination classes, and as with other Lurgle libraries, can be configured using constructors or via app.config.
\nTransferConfig is a simple class that contains only a few config key-value pairs.
\n\n<add key=\"AppName\" value=\"Test\" />
<!-- Global paths that can be overridden on a per-destination basis-->
<add key=\"SourcePath\" value=\"C:\\Transfer\\Upload\" />
<add key=\"DestPath\" value=\"C:\\Transfer\\Download\" />
<add key=\"DoArchive\" value=\"false\" />
<add key=\"ArchivePath\" value=\"C:\\Transfer\\Archive\" />
<add key=\"ArchiveDays\" value=\"30\" />
<!-- Provides the key names for each destination, and the order to execute the transfers-->
<add key=\"TransferDestinations\" value=\"SftpUpload,SftpDownload,FtpUpload,FtpDownload\" />
AppName is universal across the Lurgle libraries. If using app.config, AppName will be read in for Lurgle.Logging, Lurgle.Alerting, and Lurgle.Transfer - or automatically determined from the executable.
\n\n
SourcePath, DestPath, DoArchive, ArchivePath, and ArchiveDays are all configurations that can be set here, as global values, and optionally overridden at a per-destination level.
\n\n
TransferDestinations has two functions. The first is to tell Lurgle.Transfer what TransferDestination configurations look for, and the second is to allow configuration of what order to execute them in. The format is a comma-delimited string, and if using app.config, each value should correlate to a group of keyvalue pairs that are prefixed with the same name.
\nFor example, a simple constructor-based configuration of an SFTP upload might look like:
\n\nDictionary<string, TransferDestination> destinations = new Dictionary<string, TransferDestination>
{
{\"Test\",
new TransferDestination(name: \"Test\", transferType: TransferType.Upload,
transferMode: TransferMode.Sftp, destination: \"Test\", authMode: TransferAuth.Password,
bufferSize: 262144, server: \"127.0.0.1\", port: 22, userName: \"TestUser\", password: \"TestPassword\",
remotePath: \"/upload\", sourcePath: \"C:\\\\Test\")}
};
Transfers.SetConfig(new TransferConfig(transferDestinations: destinations));
Transfers.Init();
while an app.config upload configuration might look like:
\n\n<add key=\"SftpUploadName\" value=\"Upload via SFTP\" />
<add key=\"SftpUploadTransferType\" value=\"Upload\" />
<add key=\"SftpUploadTransferMode\" value=\"SFTP\" />
<add key=\"SftpUploadAuthMode\" value=\"Password\" />
<add key=\"SftpUploadBufferSize\" value=\"262144\" />
<add key=\"SftpUploadServer\" value=\"192.168.1.181\" />
<add key=\"SftpUploadPort\" value=\"22\" />
<add key=\"SftpUploadUsePassive\" value=\"false\" />
<add key=\"SftpUploadRemotePath\" value=\"upload\" />
<add key=\"SftpUploadSourcePath\" value=\"C:\\Transfer\\Upload\"/>
<add key=\"SftpUploadUserName\" value=\"FtpTest\" />
<add key=\"SftpUploadPassword\" value=\"Password1\" />
<add key=\"SftpUploadRetryCount\" value=\"3\" />
<add key=\"SftpUploadRetryDelay\" value=\"10\" />
You will note that the app.config upload configuration has a prefix to the properties, and this is because the TransferDestination configuration is used to configure multiple destinations as a comma-delimited string. This is shown in the constructor-based configuration, where we have configured that the TransferDestination that is named Test will be used.
\nYou can specify as many TransferDestinations as you would like.
\nThe full set of properties available for TransferDestination is extensive. Here's an extract from the LurgleTest app that was created to provide a sample implementation.
\n\n<!-- Destination configurations-->
<add key=\"FtpUploadName\" value=\"Upload via FTP\" />
<add key=\"FtpUploadTransferType\" value=\"Upload\" />
<add key=\"FtpUploadTransferMode\" value=\"FTP\" />
<add key=\"FtpUploadAuthMode\" value=\"Password\" />
<!-- Used with Certificate or Both authmodes-->
<!--<add key=\"FtpUploadCertPath\" value=\"\"/>-->
<add key=\"FtpUploadBufferSize\" value=\"262144\" />
<add key=\"FtpUploadServer\" value=\"192.168.1.181\" />
<add key=\"FtpUploadPort\" value=\"21\" />
<!-- This will only affect FTP transfer modes -->
<add key=\"FtpUploadUsePassive\" value=\"false\" />
<add key=\"FtpUploadRemotePath\" value=\"upload\" />
<!-- Per destination overrides for Source, Dest, and Archive Path-->
<add key=\"FtpUploadSourcePath\" value=\"C:\\Transfer\\Upload\" />
<!-- Used for download-->
<!--<add key=\"FtpUploadDestPath\" value=\"C:\\Transfer\\Download\"/>-->
<!-- Unused for LurgleTest. These work with the Files.ArchiveFiles method-->
<!--<add key=\"DoArchive\" value=\"False\"/>
<add key=\"ArchivePath\" value=\"C:\\Transfer\\Archive\"/>
<add key=\"ArchiveDays\" value=\"30\"/>-->
<add key=\"FtpUploadUserName\" value=\"FtpTest\" />
<add key=\"FtpUploadPassword\" value=\"Password1\" />
<add key=\"FtpUploadRetryCount\" value=\"3\" />
<add key=\"FtpUploadRetryDelay\" value=\"10\" />
<!-- Only useful for debugging-->
<!--<add key=\"FtpUploadRetryTest\" value=\"false\" />
<add key=\"FtpUploadRetryFailAll\" value=\"false\" />
<add key=\"FtpUploadRetryFailConnect\" value=\"false\" />-->
<!--Enable and configure proxy-->
<!--<add key=\"FtpUploadUseProxy\" value=\"false\" />
<add key=\"FtpUploadProxyType\" value=\"Http\" />
<add key=\"FtpUploadProxyServer\" value=\"\" />
<add key=\"FtpUploadProxyPort\" value=\"8081\" />
<add key=\"FtpUploadProxyUser\" value=\"\" />
<add key=\"FtpUploadProxyPassword\" value=\"\" />-->
<!-- Unused for LurgleTest. These work with the Files.CompressFiles method -->
<!--<add key=\"FtpUploadCompressType\" value=\"gzip\" />
<add key=\"FtpUploadZipPrefix\" value=\"\" />-->
<!-- Unused for LurgleTest. These allow for an alert to be defined on success/failure, which can be used in conjunction with Lurgle.Alerting-->
<!--<add key=\"FtpUploadMailTo\" value=\"bob@builder.com\" />
<add key=\"FtpUploadMailToError\" value=\"bob@builder.com\" />
<add key=\"FtpUploadMailIfError\" value=\"true\" />
<add key=\"FtpUploadMailIfSuccess\" value=\"false\" />-->
<!-- Used with download. 0 means any age files, greater than 0 provides a limit to the age of files-->
<!--<add key=\"FtpUploadDownloadDays\" value=\"1\" />-->
These keys relate to functionality in the in-house transfer app, and are retained for compatibility.
\nWhile LurgleTest in the Lurgle.Transfer repository provides a sample implementation, a simpler example that uses all of the functionality would be:
\n\nvar destinations = new Dictionary<string, TransferDestination>
{
{\"Test\",
new TransferDestination(name: \"Test\", transferType: TransferType.Upload,
transferMode: TransferMode.Ftp, destination: \"Test\", authMode: TransferAuth.Password,
bufferSize: 262144, server: \"127.0.0.1\", port: 21, userName: \"FtpTest\", password: \"Password1\",
remotePath: \"upload\", sourcePath: \"C:\\\\Transfer\\\\Upload\", doArchive: true, archivePath: \"C:\\\\Transfer\\\\Archive\", archiveDays: 30)}
};
Transfers.SetConfig(new TransferConfig(transferDestinations: destinations));
Transfers.Init();
var ftransfer = new FileTransfer(destinations[\"Test\"]);
var files = Files.CompressFiles(ftransfer.TransferConfig, CompressType.Gzip);
ftransfer.Connect();
ftransfer.SendFiles(files.DestFiles, true, true);
ftransfer.Disconnect();
Files.DeleteCompressedFiles(ftransfer.TransferConfig, CompressType.Gzip, files.DestFiles);
Files.CleanArchivedFiles(ftransfer.TransferConfig);
Files.ArchiveFiles(ftransfer.TransferConfig);
Of course, in this example we aren't paying attention to the result of anything, but it's worth noting that we provide functions for compression and cleanup of the compressed files, as well as archival of files and cleanup based on age of archived files.
\nThere is a lot of power and capability in Lurgle.Transfer, and the methods provide detailed results that can be used in your code logic and in logging and alerting.
\nDownload Lurgle.Transfer now, and check out the other Lurgles!
", "author": { "name": "MattMofDoom" }, "tags": [ "ZIP", "Upload", "Structured logging", "SSH.NET", "SSH", "SFTP", "Lurgle.Transfer", "Lurgle", "GZIP", "FluentFTP", "File Transfer", "FTP", "Download", "Destinations", "Compression", "Cleanup", "C#", "Archive" ], "date_published": "2022-01-21T17:00:00-08:00", "date_modified": "2022-01-22T16:52:00-08:00" }, { "id": "https://mattmofdoom.com/lurglealerting-v131-multiple-mail-hosts-dns-delivery-and-fine-grained-tls-options/", "url": "https://mattmofdoom.com/lurglealerting-v131-multiple-mail-hosts-dns-delivery-and-fine-grained-tls-options/", "title": "Lurgle.Alerting v1.3.1 - Multiple mail hosts, DNS delivery, and fine grained TLS options!", "summary": "I've recently been contributing some code to the Seq Email+ (Seq.App.HtmlEmail) repo to allow for enhanced capabilities, such as delivering email using DNS (by querying the MX record for domains) and using fallback mail hosts, along with adding To/CC/BCC and other options. These additions will, if merged, make for a more powerful Seq app. In the meantime, I've circled back to add capabilities to Lurgle.Alerting, to allow my FluentEmail implementation to use multiple mailhosts and DNS delivery, and also have fine-grained control over TLS (when using MailKit), using what I've learned by my contributions to Email+. The result is Lurgle.Alerting…", "content_html": "
I've recently been contributing some code to the Seq Email+ (Seq.App.HtmlEmail) repo to allow for enhanced capabilities, such as delivering email using DNS (by querying the MX record for domains) and using fallback mail hosts, along with adding To/CC/BCC and other options. These additions will, if merged, make for a more powerful Seq app.
\nIn the meantime, I've circled back to add capabilities to Lurgle.Alerting, to allow my FluentEmail implementation to use multiple mailhosts and DNS delivery, and also have fine-grained control over TLS (when using MailKit), using what I've learned by my contributions to Email+.
\nThe result is Lurgle.Alerting v1.3.1. This has a couple of breaking changes to the constructor if you use that to build your config, by virtue of adding the two new options.
\nThese are predominantly config-based additions:
\n\n<add key=\"MailHost\" value=\"mailhost1.domain.com,mailhost2.domain.com\"/>
<add key=\"MailUseDns\" value=\"true\"/>
<add key=\"MailTlsOptions\" value=\"Auto\" />
or in the constructor:
\nAlerting.SetConfig(new AlertConfig(Alerting.Config, mailHost: \"mailhost1.domain.com,mailhost2.domain.com\", mailUseDns: true, mailTlsOptions: TlsOptions.Auto));
\nIn short - the additions for MailHost mean that you can specify multiple hosts as a comma-delimited string. If delivery to one host fails, the next will be attempted, and so on. This means that you have inbuilt mail fallback capabilities.
\nIf MailHost is empty and MailUseDns is true, Lurgle.Alerting will deliver using DNS, by looking up the MX records for each To, CC, and BCC email address. It will attempt to deliver using each MX record found for each domain.
\nIf MailHost has entries and MailUseDns is true, then if all mail host deliveries fail, Lurgle.Alerting will then attempt delivery via DNS, for the ultimate in fallback capabilities!
\nI have moved Lurgle.Alerting from using FluentEmail.Core.Models.SendResponse to a MailResult class, which preserves the Successful, ErrorMessages, and MessageId properties. This should preserve compatibility in existing code, but now reflects the last delivery attempt. Lurgle now also provides MailResult.DeliveryType and MailResult.MailHost, which reflect the last delivery attempt's type and the mail server attempted.
\npublic class MailResult
{
/// <summary>
/// Overall Success / Failure
/// </summary>
public bool Successful { get; set; }
/// <summary>
/// Last send's delivery type
/// </summary>
public DeliveryType DeliveryType { get; set; }
/// <summary>
/// Last send's mail host
/// </summary>
public string MailHost { get; set; }
/// <summary>
/// Last send's ErrorMessages for backward compatibility
/// </summary>
public IList<string> ErrorMessages { get; set; }
/// <summary>
/// Last send's MessageId for backward compatibility
/// </summary>
public string MessageId { get; set; }
/// <summary>
/// List of all attempts
/// </summary>
public List<DeliveryAttempt> DeliveryAttempts { get; set; }
}
You can explore all delivery attempts using the MailResult.DeliveryAttempts list, which contains the result of each delivery attempt!
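\nFor example, assuming a MailResult is already in hand from a send (a sketch - the send call itself isn't shown here):
foreach (var attempt in mailResult.DeliveryAttempts)
{
    // Each attempt records the delivery type (mailhost, DNS, or a fallback),
    // the host attempted, and the underlying FluentEmail SendResponse.
    Console.WriteLine($\"{attempt.DeliveryType} via {attempt.MailHost}: Success={attempt.Result?.Successful}\");
}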
\nThe DeliveryType enum is as follows:
\n\npublic enum DeliveryType
{
/// <summary>
/// Delivery via mailhost
/// </summary>
MailHost,
/// <summary>
/// Delivery via mailhost fallback
/// </summary>
MailFallback,
/// <summary>
/// Delivery via DNS
/// </summary>
Dns,
/// <summary>
/// Delivery via DNS fallback
/// </summary>
DnsFallback,
/// <summary>
/// Delivery via Mailhost DNS fallback
/// </summary>
HostDnsFallback,
/// <summary>
/// N/A
/// </summary>
None = -1
}
and the DeliveryAttempt class in MailResult.DeliveryAttempts is:
\n\npublic class DeliveryAttempt
{
/// <summary>
/// Type of delivery
/// </summary>
public DeliveryType DeliveryType { get; set; }
/// <summary>
/// Host attempted
/// </summary>
public string MailHost { get; set; }
/// <summary>
/// Send response
/// </summary>
public SendResponse Result { get; set; }
}
MailTlsOptions offers finer-grained control than the MailUseTls boolean's \"on or off\" functionality. It uses a TlsOptions enum which inherits the same values as MailKit's SecureSocketOptions enum. This allows TlsOptions to be referenced within code, without needing to import the MailKit.Security reference.
\npublic enum TlsOptions
{
/// <summary>
/// None
/// </summary>
None = SecureSocketOptions.None,
/// <summary>
/// Auto
/// </summary>
Auto = SecureSocketOptions.Auto,
/// <summary>
/// Implicit TLS
/// </summary>
SslOnConnect = SecureSocketOptions.SslOnConnect,
/// <summary>
/// Explicit TLS
/// </summary>
StartTls = SecureSocketOptions.StartTls,
/// <summary>
/// Optional TLS
/// </summary>
StartTlsWhenAvailable = SecureSocketOptions.StartTlsWhenAvailable
}
This will only work with MailKit, and setting MailTlsOptions will override MailUseTls. Lurgle.Alerting will also ensure that the correct behaviour is used if you configure TCP port 465 for SMTP:
\nswitch (MailTlsOptions)
{
case null when MailUseTls && MailPort == 465: //Implicit TLS
case TlsOptions.None when MailPort == 465:
case TlsOptions.Auto when MailPort == 465:
case TlsOptions.StartTlsWhenAvailable when MailPort == 465:
MailTlsOptions = TlsOptions.SslOnConnect;
break;
case null when MailUseTls:
MailTlsOptions = TlsOptions.StartTls; //Explicit TLS
break;
case null:
MailTlsOptions = TlsOptions.Auto;
break;
}
This enforces implicit TLS in this scenario.
\nLurgle.Alerting is now even more powerful, and this builds on the effort to make common log, alert, and even date libraries using the Lurgle name. Download Lurgle.Alerting now, and check out the other Lurgles!
\n
", "author": { "name": "MattMofDoom" }, "tags": [ "TLS", "Razor", "MailKit", "Lurgle.Alerting", "Lurgle", "Liquid", "Handlebars", "FluentEmail", "Email", "DNS", "C#", "Apps" ], "date_published": "2021-11-13T19:03:15-08:00", "date_modified": "2022-01-22T16:29:37-08:00" }, { "id": "https://mattmofdoom.com/updating-seqclienteventlog-for-dynamic-properties-and-more/", "url": "https://mattmofdoom.com/updating-seqclienteventlog-for-dynamic-properties-and-more/", "title": "Updating Seq.Client.EventLog for dynamic properties and more!", "summary": "I have had a fair bit of mileage from Seq.Client.EventLog. It's a great little service that was quite reliable for the simple usages that I initially had. When it came to monitoring for user logins, my first port of call was here - I wound up needing to fork the code into what became Seq.Client.WindowsLogins, which in turn exposed some shortcomings in the way that EventLog.EntryWritten works (we hates it). The net effect was that I wound up with an extremely robust service that consistently and reliably detected new interactive logins, although I wound up decoupling the Seq Client for…", "content_html": "
I have had a fair bit of mileage from Seq.Client.EventLog. It's a great little service that was quite reliable for the simple usages that I initially had. When it came to monitoring for user logins, my first port of call was here - I wound up needing to fork the code into what became Seq.Client.WindowsLogins, which in turn exposed some shortcomings in the way that EventLog.EntryWritten works (we hates it). The net effect was that I wound up with an extremely robust service that consistently and reliably detected new interactive logins, although I wound up decoupling the Seq Client for Windows Logins from a fork of the EventLog client, simply because it had diverged so far.
\nI don't like taking without giving back if I can possibly help it though, and there were some interesting technical questions in my head. Exposing all the properties of a login event led me to contemplating whether I could realistically and dynamically expose the properties of any Windows event.
\nOne weekend, some musing out loud on Twitter became the basis for a code bash ... first post below, but it was quite a lengthy thread as I expressed my ideas out loud while coding😊
\n\n
\n\nSeq.Client.WindowsLogins has diverged a really long way from Seq.Client.EventLog. I've changed it from a fork to its own repository, while still acknowledging its DNA heritage. https://t.co/mmuRgbF2Gr
\n— MattM (of Doom) (@MattMOfDoom) October 8, 2021
\n\n
\n\n
And what eventuated was a fresh fork of Seq.Client.EventLog that took all the existing functionality, updated the code with the reliable EventLogWatcher.EventRecordWritten, and ported core functionality from Seq.Client.WindowsLogin. Then I innovated by using the Seq.Client.EventLog design to add new features.
\nHere's a summary from the pull request to merge this to the original codebase:
\nAnd here's the sample screenshot that I provided:
\n(Screenshot: a structured Seq event built dynamically from Windows event log properties)
What you're looking at above is an extremely well structured event that is dynamically built from the Windows event log properties, along with the native properties applied by Lurgle.Logging such as MethodName, LineNumber, etc. The entire message is templatable, and you can configure multiple listeners (even against the same log).
\nFor a final flourish - I explicitly added the Seq.Client.WindowsLogins functionality into Seq.Client.EventLog, so that one service can really fit all! I will probably retire the Seq Client for Windows Logins once this pull request is merged into the parent codebase.
\nThis is really powerful, and services like this start lifting Seq into more SIEM-like functionality. I think it's really exciting, and an excellent way to expose more applications and functions to a powerful application logging server.
\nWhile this PR is pending merge, I have made dev builds available, with the latest linked below. The original is, of course, still available for download from Connor O'Shea's repo!
\n\n
Seq.Client.EventLog (Dev build) | \n
---|
I've already written up a couple of posts about using Seq.Input.MSSQL, but I wanted to share one more. If you use termservers (Remote Desktop Session Host) in your environment, integrated with a Remote Desktop Connection Broker, you can do something pretty funky - logging new logins and disconnects to Seq!
\nThis, of course, needs a view for connections, including calculations to turn database fields into valid timestamps ...
\n\nUSE [RDConnectionBroker]
GO
/****** Object: View [dbo].[TermServerConnects] Script Date: 21/10/2021 8:48:10 AM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE VIEW [dbo].[TermServerConnects] AS SELECT @@servername as ServerName
,target.Name AS TermServer
,target.Fqdn AS TermServerFqdn
,target.Netbios AS TermServerNetBios
,targetip.IpAddress AS TermServerIp
,pool.Alias AS Pool
,pool.DisplayName AS PoolDisplayName
,[UserId]
,[UserName]
,[UserDomain]
,[SessionId]
,dateadd(hh, datediff(hh, getutcdate(), getdate()), DATEADD(nanosecond,CreateTime % 600000000,
DATEADD(minute,CreateTime / 600000000, cast('16010101' as datetime2(7))))) AS CreateTime
,CASE WHEN [DisconnectTime] = 0 THEN NULL Else dateadd(hh, datediff(hh, getutcdate(), getdate()), DATEADD(nanosecond,DisconnectTime % 600000000,
DATEADD(minute,DisconnectTime / 600000000, cast('16010101' as datetime2(7))))) END AS DisconnectTime
,[InitialProgram]
,[ProtocolType]
,session.State
,[ResolutionWidth]
,[ResolutionHeight]
,[ColorDepth]
,'{UserDomain}\\{UserName} connected to {TermServer}' AS Message
FROM [RDConnectionBroker].[rds].[Session] session
LEFT JOIN [RDConnectionBroker].[rds].[Target] target ON session.TargetId = target.Id
LEFT JOIN [RDConnectionBroker].[rds].[TargetIp] targetip ON session.TargetId = targetip.TargetId
LEFT JOIN [RDConnectionBroker].[rds].[Pool] pool ON target.PoolId = pool.Id
WHERE Session.State <> 4
ORDER BY CreateTime DESC OFFSET 0 ROWS
GO
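\nAs a side note, the slightly scary CreateTime/DisconnectTime arithmetic converts the broker's stored values - which appear to be 100-nanosecond intervals since 1601-01-01 UTC, given the 600,000,000 units per minute in the divisors - into local time. A rough C# equivalent, under that assumption, would be:
// Assumes the broker stores times as 100ns ticks since 1601-01-01 UTC
static DateTime? BrokerTimeToLocal(long brokerTime)
{
    if (brokerTime == 0) return null; // 0 = not set (eg. no disconnect yet)
    var utc = new DateTime(1601, 1, 1, 0, 0, 0, DateTimeKind.Utc).AddTicks(brokerTime);
    return utc.ToLocalTime(); // DateTime ticks are also 100ns units
}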
and a view for disconnections.
\n\n
\nUSE [RDConnectionBroker]
GO
/****** Object: View [dbo].[TermServerDisconnects] Script Date: 21/10/2021 9:14:05 AM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE VIEW [dbo].[TermServerDisconnects] AS SELECT @@servername as ServerName
,target.Name AS TermServer
,target.Fqdn AS TermServerFqdn
,target.Netbios AS TermServerNetBios
,targetip.IpAddress AS TermServerIp
,pool.Alias AS Pool
,pool.DisplayName AS PoolDisplayName
,[UserId]
,[UserName]
,[UserDomain]
,[SessionId]
,dateadd(hh, datediff(hh, getutcdate(), getdate()), DATEADD(nanosecond,CreateTime % 600000000,
DATEADD(minute,CreateTime / 600000000, cast('16010101' as datetime2(7))))) AS CreateTime
,dateadd(hh, datediff(hh, getutcdate(), getdate()), DATEADD(nanosecond,DisconnectTime % 600000000,
DATEADD(minute,DisconnectTime / 600000000, cast('16010101' as datetime2(7))))) AS DisconnectTime
,[InitialProgram]
,[ProtocolType]
,session.State
,[ResolutionWidth]
,[ResolutionHeight]
,[ColorDepth]
,'{UserDomain}\\{UserName} disconnected from {TermServer}' AS Message
FROM [RDConnectionBroker].[rds].[Session] session
LEFT JOIN [RDConnectionBroker].[rds].[Target] target ON session.TargetId = target.Id
LEFT JOIN [RDConnectionBroker].[rds].[TargetIp] targetip ON session.TargetId = targetip.TargetId
LEFT JOIN [RDConnectionBroker].[rds].[Pool] pool ON target.PoolId = pool.Id
WHERE session.State = 4
ORDER BY DisconnectTime DESC OFFSET 0 ROWS
GO
And then all that's required is a Seq.Input.MSSQL instance to read in each view, eg.
\nProperty | \nValue | \n
---|---|
Title | \nTermServerConnects | \n
Refresh every x seconds | \n60 | \n
Server instance name | \nSERVERNAME | \n
Initial catalog | \nRDConnectionBroker | \n
Trusted Connection | \nEnabled | \n
Table or view name | \nTermServerConnects | \n
Column name of TimeStamp | \nCreateTime | \n
Seconds delay | \n60 | \n
Column name of Message | \nMessage | \n
Include following columns as property | \n ServerName,TermServer,TermServerFqdn,TermServerNetBios,TermServerIp, \nPool,PoolDisplayName,UserId,UserName,UserDomain,SessionId,CreateTime,DisconnectTime, \nInitialProgram,ProtocolType,State,ResolutionWidth,ResolutionHeight,ColorDepth,Message \n | \n
Log application name as property | \nTermServerConnects | \n
Event Level | \n2 | \n
Valid local time period | \n\n |
And an equivalent app for TermServerDisconnects. We need 2 instances, because we want different messages for connects and disconnects. This will give you essentially all the useful structured properties related to each connection - sadly, not the client IP which I'd have liked, but enough to be able to build fancy dashboards and alerts. We actually use this to detect a mass user disconnection, which would indicate a networking issue.
\nAll you need to expose your new logs is a signal or two, which could be as simple as a signal looking for \"Application = 'TermServerConnects' OR Application = 'TermServerDisconnects'\".
\nIt's worth noting here that the Remote Desktop Connection Broker has a quirk - it only logs sessions to the database once per minute, so this is where the \"Seconds Delay\" setting comes in for Seq.Input.MSSQL. By setting both \"Refresh every X seconds\" and \"Seconds Delay\" to 60, we allow time for new connections and disconnections to be logged in their respective input.
\nSeconds Delay started as an inbuilt 1 second delay to address an issue with timestamps that didn't measure in milliseconds (the Job Agent logs from my last post), but I made it configurable for usages like this. I recently updated Seq.Input.MSSQL to allow a longer maximum delay (up to 24 hours!) because, as it turns out, there are scenarios where a delay of an hour or more might be needed to ensure that new logs are picked up - apps \"lying\" about the timestamp, inserting rows to the database well after the time they record.
\nThe kind of power that we've built into Seq.Input.MSSQL is awesome, and this is just one simple example of usage. I've only barely scraped the surface of its capabilities - mapping properties to database values that can feed into alerting apps is something that's usage dependent, but amazing when you can do it!
\nWe recently added the ability to specify SQL connect timeout, query timeout, and even set encrypted connections. Check it out in your own Seq instance!
\n", "author": { "name": "MattMofDoom" }, "tags": [ "Terminal Server", "TermServer", "Seq", "SQL", "Remote Desktop", "MSSQL", "Input", "C#", "Apps" ], "date_published": "2021-10-20T15:36:31-07:00", "date_modified": "2022-01-22T16:27:09-08:00" }, { "id": "https://mattmofdoom.com/using-seqinputmssql-to-read-sql-job-agent-tables-as-logs/", "url": "https://mattmofdoom.com/using-seqinputmssql-to-read-sql-job-agent-tables-as-logs/", "title": "Using Seq.Input.MSSQL to read SQL Job Agent tables as logs!", "summary": "Following on from my last post about using Seq.Input.MSSQL to read Endpoint Protection table entries as Seq logs, I wanted to share something that has really made a huge difference for us - turning the SQL Agent job tables into Seq logs! This was actually the first full MSSQL Input integration that we went live with, and it turned out to be the most useful by far. When it comes to the SQL Agent jobs, you can't simply point Seq.Input.MSSQL at a table and have it come out with something that is readily translatable to a Seq log with structured…", "content_html": "
Following on from my last post about using Seq.Input.MSSQL to read Endpoint Protection table entries as Seq logs, I wanted to share something that has really made a huge difference for us - turning the SQL Agent job tables into Seq logs!
\nThis was actually the first full MSSQL Input integration that we went live with, and it turned out to be the most useful by far. When it comes to the SQL Agent jobs, you can't simply point Seq.Input.MSSQL at a table and have it come out with something that is readily translatable to a Seq log with structured logging - but you can make a view to do the necessary conversions.
\nWhen I originally did this piece - with the help of a whole bunch of Googling - I found that there was one particular flaw - Seq.Input.MSSQL assumed that a timestamp had milliseconds, but the Job Agent tables do not record milliseconds. The MSSQL Input always tracks the last time that a query was run, so that it doesn't do full table scans every time. In this case, that means that it was possible to 'miss' logs because the StatusDateTime was always at 0 milliseconds - but the SQL input had already gone past that time.
\nThat was readily resolved by adding an inbuilt delay of 1 second to the input, which worked well. Later I made this configurable, because I also hit a different case where logs were added to a table only once per minute - which led to a similar type of problem that was resolvable by configuring a longer delay.
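\nConceptually, the fix looks like the below - a minimal sketch of a watermark-plus-delay polling window, not the actual Seq.Input.MSSQL source:
// The input tracks the last timestamp seen, and only queries up to (now - delay),
// leaving room for rows with truncated or late-written timestamps to arrive
// before the watermark moves past them
static (DateTime from, DateTime to) NextPollingWindow(DateTime lastSeen, TimeSpan delay)
{
    var cutoff = DateTime.Now - delay; // eg. 1 second for the Job Agent tables
    return (lastSeen, cutoff);         // query rows where timestamp > from AND <= to
}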
\nThe view below does all the conversions to a timestamp and provides essentially everything useful that is available from the sysjobs, sysjobhistory, and sysjobsteps tables.
\n\nuse msdb
GO
CREATE VIEW AgentJobs AS
SELECT
sjh.instance_id,
@@servername as ServerName
, DATEADD(SECOND,(run_duration/10000 * 60 * 60) + (run_duration/100%100 * 60) + (run_duration%100 ),dbo.agent_datetime(sjh.run_date, sjh.run_time)) StatusDateTime
, dbo.agent_datetime(sjh.run_date, sjh.run_time) RunDateTime
, STUFF(STUFF(RIGHT('00000' + CAST(run_duration AS VARCHAR(6)),6),3,0,':'),6,0,':') Duration
, (run_duration/10000 * 60 * 60) + (run_duration/100%100 * 60) + (run_duration%100 ) DurationSecs
, sj.name JobName
, sjh.step_id StepId
, ISNULL(sjs.step_name, 'Job Status') StepName
, CASE sjh.run_status
WHEN 0 THEN 'Failed'
WHEN 1 THEN 'Succeeded'
WHEN 2 THEN 'Retry'
WHEN 3 THEN 'Canceled'
WHEN 4 THEN 'In Progress'
END RunStatus
, sjh.message AS StepMessage
FROM dbo.sysjobs sj
INNER JOIN dbo.sysjobhistory sjh ON sj.job_id = sjh.job_id
LEFT OUTER JOIN dbo.sysjobsteps sjs ON sjh.job_id = sjs.job_id AND sjh.step_id = sjs.step_id
WHERE DATEADD(SECOND,(run_duration/10000 * 60 * 60) + (run_duration/100%100 * 60) + (run_duration%100 ),dbo.agent_datetime(sjh.run_date, sjh.run_time)) > dateadd(dd,-1, convert(date,getdate()))
GO
CREATE LOGIN [DOMAIN\\SEQSERVER$] FROM WINDOWS
GO
CREATE USER [DOMAIN\\SEQSERVER$] FOR LOGIN [DOMAIN\\SEQSERVER$]
GO
GRANT SELECT ON dbo.AgentJobs to [DOMAIN\\SEQSERVER$]
GO
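\nFor reference, the run_duration handling in the view is just unpacking SQL Agent's HHMMSS-encoded integer; the C# equivalent of the same conversion is:
// sysjobhistory.run_duration is an integer encoded as HHMMSS,
// eg. 13542 = 1 hour, 35 minutes, 42 seconds
static int RunDurationToSeconds(int runDuration) =>
    runDuration / 10000 * 3600 +   // hours
    runDuration / 100 % 100 * 60 + // minutes
    runDuration % 100;             // seconds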
And then all that's required is a Seq.Input.MSSQL instance to read in the view:
\nProperty | \nValue | \n
---|---|
Title | \nSERVERNAME | \n
Refresh every x seconds | \n5 | \n
Server instance name | \nSERVERNAME | \n
Initial catalog | \nmsdb | \n
Trusted Connection | \nEnabled | \n
Table or view name | \nAgentJobs | \n
Column name of TimeStamp | \nStatusDateTime | \n
Column name of Message | \nStepMessage | \n
Include following columns as property | \n\n ServerName,StatusDateTime,RunDateTime,Duration, \nDurationSecs,JobName,StepId,StepName,RunStatus,StepMessage \n | \n
Log application name as property | \nAgentJobs | \n
Event Level | \n0 | \n
Valid local time period | \n\n |
That's it. Simple as that. And then all you need to expose your new logs is a signal, which could be as simple as a signal looking for \"Application = 'AgentJobs'\".
\nMSSQL Input will only log the Application property when it's logging a result from your query. We have an \"Application Property Name\" config which allows you to change the name of the property - I often use AppName in my Seq apps. Up to v1.2.0, if you didn't configure this, it would not log a property, but I've just put a pull request in that will change this to always log an Application property, defaulting to the title of your instance.
\nIf you want to alert on errors, you might want to get even more specific with your signal. I found that a signal like the following is useful, since it allows excluding jobs that you don't want to alert, and also filters out jobs that only have a GUID as a name.
\n\n\n
\n
And from here, you can configure a Seq.App.Opsgenie or Seq.App.Atlassian.Jira instance to watch the signal and raise the appropriate alert or ticket, similar to my illustration for the last post on this.
\nI hope people find this useful - the SQL Agent job logs are a goldmine of information about your scheduled jobs, and getting them into Seq can be a game changer for monitoring and alerting!
\n", "author": { "name": "MattMofDoom" }, "tags": [ "Structured logging", "Seq", "SQL", "OpsGenie", "MSSQL", "Jira", "Input", "C#", "Apps" ], "date_published": "2021-10-05T00:05:00-07:00", "date_modified": "2022-01-22T16:26:12-08:00" }, { "id": "https://mattmofdoom.com/dst-update-for-eventx-trilogy-for-seq/", "url": "https://mattmofdoom.com/dst-update-for-eventx-trilogy-for-seq/", "title": "DST update for EventX Trilogy for Seq now available!", "summary": "While I was investigating a case where Event Schedule for Seq had duplicate multi-log events occurring, I noticed a Daylight Savings Time issue with the unit tests, which was evident due to an upcoming DST changeover here in Australia. It was readily apparent, of course, that with their shared DNA, the Event Timeout, Event Threshold, and Event Schedule apps would all have the same issue. There was a mismatch between the evaluation of dates with the difference in time resulting from the DST change. \"Luckily\", my NBN connection was down pending a visit from NBN Co, and with relatively poor…", "content_html": "
While I was investigating a case where Event Schedule for Seq had duplicate multi-log events occurring, I noticed a Daylight Savings Time issue with the unit tests, which was evident due to an upcoming DST changeover here in Australia.
\nIt was readily apparent, of course, that with their shared DNA, the Event Timeout, Event Threshold, and Event Schedule apps would all have the same issue. The evaluation of dates didn't account for the time difference introduced by the DST changeover.
\n\"Luckily\", my NBN connection was down pending a visit from NBN Co, and with relatively poor mobile data speeds in my area, I was able to spend some time on the issue to ensure that the upcoming DST changeover would correctly evaluate times.
\nAs part of the fix, I added a new Dates.ParseInterval and Dates.UtcParseInterval to Lurgle.Dates. These aren't anything too special - they're essentially just a DateTime.ParseExact that simplifies the overall UtcRollover processing.
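\nTo give a flavour of what that means - illustrative only, since the real Lurgle.Dates methods have their own signatures, and the format string here is an assumption:
using System;
using System.Globalization;

// Parse a time-of-day string to a DateTime; the UTC variant treats the input as
// universal time, which is what the rollover calculations need
static DateTime UtcParseInterval(string time) =>
    DateTime.ParseExact(time, \"H:mm:ss\", CultureInfo.InvariantCulture,
        DateTimeStyles.AssumeUniversal | DateTimeStyles.AdjustToUniversal);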
\nThe aforementioned duplicate multi-log issue turned out to be a result of logging the events taking longer than the 1-second interval between evaluations, and was readily addressed by adding a flag that is set while logging is in progress.
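\nThe fix amounts to a simple re-entrancy guard - a sketch of the idea, not the app's exact code:
private volatile bool _isLogging;

private void OnEvaluationTick()
{
    if (_isLogging) return; // previous pass took longer than 1 second; skip rather than duplicate
    _isLogging = true;
    try
    {
        LogScheduledEvents(); // hypothetical method standing in for the real logging work
    }
    finally
    {
        _isLogging = false;
    }
}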
\nEvent Threshold has also now been updated to include the Handlebars implementation that was previously added to Event Timeout and Event Schedule. This was done fairly quickly as an adjunct part of the bug fixing process, to bring the codebase up to a similar level, but should be functional without regressions. The Handlebars values that can be used for Event Threshold are similar to Event Timeout, and are as follows:
\nvar payload = (IDictionary<string, object>)ToDynamic(new Dictionary<string, object>
{
{ \"AppName\", config.AppName },
{ \"TimeNow\", DateTime.Now.ToLongTimeString() },
{ \"DateNowLong\", DateTime.Now.ToLongDateString() },
{ \"DateNowShort\", DateTime.Now.ToShortDateString() },
{ \"DateTimeNow\", DateTime.Now.ToString(\"F\") },
{ \"StartTime\", counters.StartTime.ToString(\"F\") },
{ \"EndTime\", counters.EndTime.ToString(\"F\") },
{ \"Threshold\", config.ThresholdInterval.TotalSeconds },
{ \"ThresholdMins\", config.ThresholdInterval.TotalMinutes.ToString(\"N2\") },
{ \"ThresholdHours\", config.ThresholdInterval.TotalHours.ToString(\"N2\") },
{ \"RepeatThreshold\", config.RepeatThreshold },
{ \"SuppressTime\", config.SuppressionTime.TotalSeconds },
{ \"SuppressTimeMins\", config.SuppressionTime.TotalMinutes.ToString(\"N2\") },
{ \"SuppressTimeHours\", config.SuppressionTime.TotalHours.ToString(\"N2\") },
{ \"RepeatSuppressTime\", config.SuppressionTime.TotalSeconds },
{ \"RepeatSuppressTimeMins\", config.RepeatThresholdSuppress.TotalMinutes.ToString(\"N2\") },
{ \"RepeatSuppressTimeHours\", config.RepeatThresholdSuppress.TotalHours.ToString(\"N2\") },
{ \"Tags\", string.Join(\",\", config.Tags) },
{ \"Responders\", config.Responders ?? \"\" },
{ \"Priority\", config.Priority ?? \"\" },
{ \"ProjectKey\", config.ProjectKey ?? \"\" },
{ \"DueDate\", config.DueDate ?? \"\" },
{ \"InitialTimeEstimate\", config.InitialTimeEstimate ?? \"\" },
{ \"RemainingTimeEstimate\", config.RemainingTimeEstimate ?? \"\" }
});
\n
The resulting updates should be live now!
\nNote - Current builds of Seq have a bug with the Nuget v2 API affecting update of Event Timeout. The workaround is to either manually update Event Timeout to the current build via the Manage page for the app, or change your Nuget feed settings.
\n\n
Seq.App.EventTimeout | \n|
---|---|
Seq.App.EventThreshold | \n|
Seq.App.EventSchedule | \n|
Lurgle.Dates | \n
Among a bunch of stars in the Seq ecosystem, Seq.Input.MSSQL has to be one of the most ambitious and coolest. This Seq input app allows you to turn just about anything with a timestamp in a SQL database into Seq logs - which in turn can power your monitoring and alerting, exposing data and events that can't be acquired any other way.
\nI've spent a fair bit of time with this app, and honestly - it underpins a huge proportion of our critical SLAs. Early on, we worked out a scheme to allow SQL Agent jobs to be ingested to Seq using the MSSQL input, and that means we can alert on scheduled job failures from some of our busiest processing nodes. It did mean that I needed to roll up my sleeves and amend the code to allow handling timestamps that didn't include milliseconds, along with addressing a small bug for new instances of the app. This was actually the first Seq app from another author that I contributed to, so it holds a special place in my heart for that reason too.
\nMy recent efforts in fostering interoperability in the Seq ecosystem put Seq.Input.MSSQL in a unique position. This was an app that was responsible for drawing data from a SQL database, and turning it into logs. That provided the opportunity for the MSSQL input to become an absolute powerhouse for mapping properties like Tags, Priority, Responder, and Jira-compatible properties like Project Key, Initial Estimate, Remaining Estimate, and Due Date. There was also an open issue/enhancement request for mapping the Seq event level to database values!
\nThe opportunity that I talk about is that we can draw these properties from the database, and explicitly map the \"expected values\" to property values that can work with other applications. For example, I've integrated the Microsoft Endpoint Protection (previously known as System Center Endpoint Protection) database into Seq using the MSSQL input, and this gives me the opportunity to use the 'Severity' column in my view as a mapping for both the Event Level in Seq, and the Priority value for Jira ... and if I wanted, I could extend this to simultaneously map OpsGenie compatible priorities for a multi-purpose input!
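\nThe mapping configs themselves are just comma-separated Value=Result pairs - a hypothetical helper showing the shape of that parsing:
using System;
using System.Collections.Generic;
using System.Linq;

// eg. \"Severe=Error,High=Error,Medium=Warning,Low=Warning\" for event levels,
// or \"Severe=Highest,High=High,Medium=Medium,Low=Low\" for Jira priorities
static Dictionary<string, string> ParseValueMapping(string config) =>
    config.Split(',')
        .Select(pair => pair.Split('='))
        .Where(parts => parts.Length == 2)
        .ToDictionary(parts => parts[0].Trim(), parts => parts[1].Trim(),
            StringComparer.OrdinalIgnoreCase);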
\nWhile I've been using an unofficial build of this for some time, v1.2.0 of Seq.Input.MSSQL has now been released with all the new goodies - many thanks to Michael Hinni for the collaboration! Sadly, we did find that integrated authentication is a Windows-only feature, so we've found that (for now at least) the app can only run on a Windows Seq instance. One possibility is a separate package for Docker instances that omits the incompatible feature - but integrated authentication (where available) is a great way to limit privileges for the Seq instance, without using SQL authentication.
\nSo for a start, let's share how I integrated the Endpoint Protection database to Seq using Seq.Input.MSSQL. It's a good illustration of how powerful this app is, and the opportunities that it opens up. This allows us to see all Windows Defender / Endpoint Protection detections of malware in real time, for an infrastructure piece that absolutely has no ready way to otherwise integrate with Seq.
\nFirstly, I create a view in my Microsoft Endpoint Manager (System Center Configuration Manager, or SCCM) database. This isn't unusual for the integration - it allows you some greater control over how you produce a result, and it's often the case that I want to pull in multiple sources of data to allow the logs to be well structured and meaningful.
\n\nuse CM_SC1
GO
CREATE VIEW MalwareLog AS select
@@servername as ServerName,
ResourceID,
ComputerName,
Status,
Malware,
Category,
Severity,
Summary,
Cleaned,
Path,
Username,
dateadd(hh, datediff(hh, getutcdate(), getdate()), MIN(DetectionTime)) AS FirstDetection,
dateadd(hh, datediff(hh, getutcdate(), getdate()), MAX(DetectionTime)) AS LastDetection,
dateadd(hh, datediff(hh, getutcdate(), getdate()), MAX(LastMessageTime)) AS LastMessage,
COUNT(*) as InfectionCount,
EngineVersion,
dateadd(hh, datediff(hh, getutcdate(), getdate()), LastFullScanDateTimeStart) AS LastFullScanDateTimeStart,
dateadd(hh, datediff(hh, getutcdate(), getdate()), LastFullScanDateTimeEnd) AS LastFullScanDateTimeEnd,
dateadd(hh, datediff(hh, getutcdate(), getdate()), LastQuickScanDateTimeStart) AS LastQuickScanDateTimeStart,
dateadd(hh, datediff(hh, getutcdate(), getdate()), LastQuickScanDateTimeEnd) AS LastQuickScanDateTimeEnd,
'Malware Detected on {ComputerName}: Malware Name {Malware}, Category {Category}, Severity {Severity}, Infections {InfectionCount}, Cleaned {Cleaned}' AS Message
from
(
select
t.ResourceID,
ISNULL(s.Netbios_Name0 + '.' + s.Full_Domain_Name0, s.Netbios_Name0) as ComputerName,
t.Path,
t.UserName,
CASE WHEN ai.ComputerStatus = 1 THEN 'None' WHEN ai.ComputerStatus = 2 THEN 'Cleaned' WHEN ai.ComputerStatus = 3 THEN 'Pending' WHEN ai.ComputerStatus = 4 THEN 'Failed' ELSE 'Unknown' END AS Status,
tc.Name as Malware,
tcat.Category,
tsev.Severity,
tsum.Summary,
t.DetectionTime,
CASE WHEN v.LastMessageTime IS NULL THEN t.DetectionTime ELSE v.LastMessageTime END as LastMessageTime,
case when t.ActionSuccess=0 then 'Failed' when t.ActionSuccess=1 and t.PendingActions!=0 then 'Pending' when t.ActionSuccess=1 and t.PendingActions=0 then 'Cleaned' else 'Unknown' end as Cleaned,
ah.AntivirusSignatureVersion EngineVersion,
ah.LastFullScanDateTimeStart LastFullScanDateTimeStart,
ah.LastFullScanDateTimeEnd LastFullScanDateTimeEnd,
ah.LastQuickScanDateTimeStart LastQuickScanDateTimeStart,
ah.LastQuickScanDateTimeEnd LastQuickScanDateTimeEnd
from v_GS_Threats t
join v_R_System s on t.ResourceID=s.ResourceID
join v_FullCollectionMembership c on t.ResourceID=c.ResourceID
left join v_ThreatCatalog tc on t.ThreatID=tc.ThreatID
left join v_ThreatCategories tcat on t.CategoryID = tcat.CategoryID
left join v_ThreatSeverities tsev on t.SeverityID = tsev.SeverityID
left join v_ThreatSummary tsum on tc.SummaryID = tsum.SummaryID
left join v_GS_AntimalwareHealthStatus ah on t.ResourceID = ah.ResourceID
left join v_GS_AntimalwareInfectionStatus ai on t.ResourceID = ai.ResourceID
left join vEP_LastMalware v ON v.DetectionID = t.DetectionID
where c.CollectionID='SMSDM003'
) as Infections
group by ResourceID, ComputerName, Malware, Category, Severity, Summary, Path, Username, Status,EngineVersion, LastFullScanDateTimeStart, LastFullScanDateTimeEnd, LastQuickScanDateTimeStart, LastQuickScanDateTimeEnd, Cleaned
order by LastDetection DESC OFFSET 0 ROWS
GO
CREATE LOGIN [DOMAIN\\SEQSERVER$] FROM WINDOWS
GO
CREATE USER [DOMAIN\\SEQSERVER$] FOR LOGIN [DOMAIN\\SEQSERVER$]
GO
GRANT SELECT ON MalwareLog to [DOMAIN\\SEQSERVER$]
GO
This gives us a view with all the data that we need to produce nicely structured logs that can power alerts down the line.
\nNext, we configure an instance of Seq.Input.MSSQL with the following settings;
\nProperty | \nValue | \n
---|---|
Title | \nMalware Logs | \n
Refresh every x seconds | \n300 | \n
Server instance name | \nUCS1-S-SCFG01 | \n
Initial catalog | \nCM_HQ1 | \n
Trusted Connection | \nEnabled | \n
Username | \n\n |
Password | \n\n |
Table or view name | \nMalwareLog | \n
Column name of TimeStamp | \nLastMessage | \n
Seconds delay | \n1 | \n
Column name of Message | \nMessage | \n
Include following columns as property | \nServerName,ResourceID,ComputerName,Status,Malware,Category,Severity,Summary,Cleaned, Path,Username,FirstDetection,LastDetection,LastMessage,InfectionCount,EngineVersion, LastFullScanDateTimeStart,LastFullScanDateTimeEnd,LastQuickScanDateTimeStart,LastQuickScanDateTimeEnd,Message | \n
Log application name as property | \nMalwareLogs | \n
Column name of Event Level | \nSeverity | \n
Event Level Mapping | \nSevere=Error,High=Error,Medium=Warning,Low=Warning,Not Yet Classified=Warning | \n
Serilog.Events.LogEventLevel | \n4 | \n
Tags | \nAntivirus,Malware,Infection,Seq | \n
Column name of Priority | \nSeverity | \n
Value mapping for Priority | \nSevere=Highest,High=High,Medium=Medium,Low=Low,Not Yet Classified=Low | \n
Valid local time period | \n\n |
Then we just need an appropriate signal that alert apps can listen to:
and finally - because we want this to go to Jira - an instance of the Seq.Atlassian.Jira app that listens to our new signal.
\nConfig | \nValue | \n
---|---|
Title | \nAnti-Malware Alerts to Jira | \n
Stream incoming events | \nEnabled | \n
Signal | \nAnti-malware Alerts | \n
Allow manual input | \nDisabled | \n
Re-order input by timestamp | \nDisabled | \n
Jira Url | \nhttps://jira.domain.com | \n
Comma separated list of event levels | \n\n |
Project Key Property | \n\n |
Jira Project Key | \nSD | \n
Jira Project Components | \n\n |
Jira Issue Labels | \nITKC7,ITKC38 | \n
Include event tags | \nEnabled | \n
Event tag property | \nTags | \n
Seq Event Id custom field # from Jira | \n\n |
Jira Issue Type | \nService Request | \n
Priority Property | \nPriority | \n
Jira Priority or Priority Mapping | \nHighest=Highest,High=High,Medium=Medium,Low=Low,Lowest=Low | \n
Default Priority | \nHighest | \n
Assignee Property | \n\n |
Assignee | \n\n |
Jira Summary | \n[MS Endpoint] Malware ({{Category}}) found on {{ComputerName}}: {{Malware}} | \n
Jira Description | \n*Malware was detected* \\n \\n ||Computer|{{ComputerName}}|| ||Username|{{Username}}|| ||Category|{{Category}}|| ||Malware|{{Malware}}\\n_{{Summary}}_|| ||Severity|{{Severity}}|| ||Status|{{Status}}|| ||Cleaned|{{Cleaned}}|| ||Path|{{Path}}|| ||Infection Count|{{InfectionCount}}|| ||First Detection|{{FirstDetection}}|| ||Last Message|{{LastMessage}}|| ||Engine Version|{{EngineVersion}}|| ||Seq Event:|[{{$Message}}|{{$EventUri}}]|| | \n
Full Details as Description | \nDisabled | \n
Full Details as Comment | \nDisabled | \n
Properties As Comment | \nDisabled | \n
Initial Estimate Property | \n\n |
Initial Estimate | \n |
Remaining Estimate Property | \n\n |
Remaining Estimate | \n\n |
Due Date Property | \n\n |
Due Date | \n1d | \n
Jira Username | \njirauser | \n
Jira Password | \n[password] | \n
and the end result (with a few items blanked out) when malware is detected:
This is a dynamic Jira ticket, with priority mapped to the severity of the infection. We haven't used all of the possible mappings here - I could have auto-assigned the ticket based on severity, for example - but we do utilise the ability to read the tags from Seq.Input.MSSQL and combine them with the tags defined in the Seq.App.Atlassian.Jira application, which will then pass through to the Jira issue.
\nIt might escape notice, but the implementation for Seq.Input.MSSQL means that, with the right config and signals, I could use a single instance of the MSSQL input to raise an alert to Jira for lower priority malware alerts, and an OpsGenie alert for a severe malware infection. I don't currently need that - but it's certainly an option.
\nI have a few other cool implementations with Seq.Input.MSSQL, like the aforementioned SQL Agent Jobs, logging Remote Desktop connections and disconnections by monitoring the connection broker database, and even simple database queries that do a 1:1 map of columns to logs and properties. I'll look to share at least some of those in the future.
\nI hope this helps to illustrate the power of the MSSQL input. It's a game changer when it comes to exposing data that can be monitored and alerted - especially where it's an application with no other prospect of sending logs to Seq. Michael did an amazing job with the app, and I'm proud to have contributed to making it even more powerful!
\n\n
", "author": { "name": "MattMofDoom" }, "tags": [ "Updates", "Structured logging", "Seq", "SQL", "MSSQL", "Jira", "Input", "C#", "Apps" ], "date_published": "2021-09-29T17:34:38-07:00", "date_modified": "2022-01-22T16:24:57-08:00" }, { "id": "https://mattmofdoom.com/event-schedule-for-seq-v1031-scheduled-months/", "url": "https://mattmofdoom.com/event-schedule-for-seq-v1031-scheduled-months/", "title": "Event Schedule for Seq v1.0.31 - Scheduled months!", "summary": "We've started to really make use of the newest member of the \"EventX Trilogy\" to trigger scheduled events. It's really useful to be able to trigger monthly scheduled events for IT maintenance tasks, especially tied to a Seq.App.Atlassian.Jira instance that raises the ticket at the right priority, with the right responder, due date, time estimate, and all the date token goodness! The joy of this is that we get inbuilt logging for the process - we can see the event being generated, and then being picked up and sent to Jira. Any errors are visible in Seq, and because of…", "content_html": "
We've started to really make use of the newest member of the \"EventX Trilogy\" to trigger scheduled events. It's really useful to be able to trigger monthly scheduled events for IT maintenance tasks, especially tied to a Seq.App.Atlassian.Jira instance that raises the ticket at the right priority, with the right responder, due date, time estimate, and all the date token goodness!
\nThe joy of this is that we get inbuilt logging for the process - we can see the event being generated, and then being picked up and sent to Jira. Any errors are visible in Seq, and because of the logging approach, it's really easy to test a config and see the result without sending it all the way to Jira.
\nIn some ways, we're finding that of all the capabilities the EventX apps have brought to uplift our Seq monitoring and alerting, Event Schedule has the most potential to transform ongoing maintenance in a meaningful and readily extensible way.
\nWe found that we could then start to contemplate more long range tickets - quarterly, bi-annual, annual. One hitch - Event Schedule didn't support being able to configure specific months to run. Well, until now that is.
\nI added a new config item to Event Schedule called \"Months of year\". This is simple enough to use - simply add the months that you want included in the schedule, either as long (January) or short (Jan) month names.
\nThe net result is an ability to specify a config like this:
\n\n\n
which will raise the configured log message on the first day of October, January, April, and July. You'll note some use of our date expressions in the template, leveraging the ability to specify the period for the review ticket.
\nThis, along with the existing multi-log token and corresponding responder functionality, means that we can create multiple Jira tickets with different purposes, each with their own assignee, every 3 months when the review is due.
\nThe net result is automated Jira tickets that look like this:
\n\n\n
Clear, concise, and accurate!
\nI updated Lurgle.Dates for this functionality, refining the day of week parsing and adding month parsing. You can now pass short weekday names (eg. Tue, Wed) and they will be properly mapped, and the same applies for months. That means I can add the refinement to the other EventX apps, and of course other devs may make use of Lurgle.Dates themselves!
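\nAs a rough idea of what the name mapping involves (Lurgle.Dates handles this more robustly than this sketch):
using System;
using System.Globalization;

// Accept long (January) or short (Jan) month names and resolve the month number
static int ParseMonth(string value)
{
    var name = value.Trim();
    var format = name.Length == 3 ? \"MMM\" : \"MMMM\";
    return DateTime.ParseExact(name, format, CultureInfo.InvariantCulture).Month;
}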
\nFinally, I also carried the Handlebars templates recently added to Event Timeout over to Event Schedule. While the Handlebars values are slightly different in Event Schedule, this adds a new dimension of functionality and retains the capabilities shown off in the Event Timeout post.
\nFor reference, the Handlebars values that can be used for Event Schedule are;
\n\nvar payload = (IDictionary<string, object>)ToDynamic(new Dictionary<string, object>
{
{ \"AppName\", config.AppName },
{ \"TimeNow\", DateTime.Now.ToLongTimeString() },
{ \"DateNowLong\", DateTime.Now.ToLongDateString() },
{ \"DateNowShort\", DateTime.Now.ToShortDateString() },
{ \"DateTimeNow\", DateTime.Now.ToString(\"F\") },
{ \"StartTime\", counters.StartTime.ToString(\"F\") },
{ \"RepeatSchedule\", config.ScheduleInterval.TotalSeconds },
{ \"RepeatScheduleMins\", config.ScheduleInterval.TotalMinutes.ToString(\"N2\") },
{ \"RepeatSuppressTimeHours\", config.ScheduleInterval.TotalHours.ToString(\"N2\") },
{ \"Tags\", string.Join(\",\", config.Tags) },
{ \"Responders\", config.Responders ?? \"\" },
{ \"Priority\", config.Priority ?? \"\" },
{ \"ProjectKey\", config.ProjectKey ?? \"\" },
{ \"DueDate\", config.DueDate ?? \"\" },
{ \"InitialTimeEstimate\", config.InitialTimeEstimate ?? \"\" },
{ \"RemainingTimeEstimate\", config.RemainingTimeEstimate ?? \"\" }
});
\n
Hopefully others will find Event Schedule as useful as we are!
\nSeq.App.EventSchedule | \n
---|
I've previously said that I really like Handlebars implementations in other Seq apps, and I felt that there were opportunities to incorporate these into apps like Event Timeout. One of the things to be cautious of here, though, is that we might already be passing a Handlebars template as part of the Message or Description, for use in a downstream app.
\nSo in adding this as a feature, the first consideration was that Handlebars has to be optional, to allow templates to be passed through to other Seq apps such as Jira or Opsgenie.
\nThe second consideration - we don't have opportunity to pass properties from log events in Event Timeout, because we're creating logs based on timeouts - log events that didn't happen. Any properties that will be used for Event Timeout need to relate to the counters and properties that we have available.
\nNote - Current builds of Seq have a bug with the Nuget v2 API affecting update of Event Timeout. The workaround is to either manually update Event Timeout to the current build via the Manage page for the app, or change your Nuget feed settings.
\nI've accordingly created a selection of Handlebars values that can be used. These are shown in the below code extract:
\nvar payload = (IDictionary<string, object>)ToDynamic(new Dictionary<string, object>
{
{ \"AppName\", config.AppName },
{ \"TimeNow\", DateTime.Now.ToLongTimeString() },
{ \"DateNowLong\", DateTime.Now.ToLongDateString() },
{ \"DateNowShort\", DateTime.Now.ToShortDateString() },
{ \"DateTimeNow\", DateTime.Now.ToString(\"F\") },
{ \"StartTime\", counters.StartTime.ToString(\"F\") },
{ \"EndTime\", counters.StartTime.ToString(\"F\") },
{ \"Timeout\", config.TimeOut.TotalSeconds },
{ \"TimeoutMins\", config.TimeOut.TotalMinutes.ToString(\"N2\") },
{ \"TimeoutHours\", config.TimeOut.TotalHours.ToString(\"N2\") },
{ \"RepeatTimeout\", config.RepeatTimeout },
{ \"SuppressTime\", config.SuppressionTime.TotalSeconds },
{ \"SuppressTimeMins\", config.SuppressionTime.TotalMinutes.ToString(\"N2\") },
{ \"SuppressTimeHours\", config.SuppressionTime.TotalHours.ToString(\"N2\") },
{ \"RepeatSuppressTime\", config.SuppressionTime.TotalSeconds },
{ \"RepeatSuppressTimeMins\", config.RepeatTimeoutSuppress.TotalMinutes.ToString(\"N2\") },
{ \"RepeatSuppressTimeHours\", config.RepeatTimeoutSuppress.TotalHours.ToString(\"N2\") },
{ \"Tags\", string.Join(\",\", config.Tags) },
{ \"Responders\", config.Responders ?? \"\" },
{ \"Priority\", config.Priority ?? \"\" },
{ \"ProjectKey\", config.ProjectKey ?? \"\" },
{ \"DueDate\", config.DueDate ?? \"\" },
{ \"InitialTimeEstimate\", config.InitialTimeEstimate ?? \"\" },
{ \"RemainingTimeEstimate\", config.RemainingTimeEstimate ?? \"\" }
});
Arguably there may be other counters that are useful, but these stood out as the most likely to be used.
\nTo implement this, you would enable the new \"Use Handlebars\" setting, and configure the Log Message and/or Description using Handlebars expressions, eg.
\n{{AppName}} error after {{Timeout}} secs or {{TimeoutMins}} mins or {{TimeoutHours}} hours - Woe is me, this was meant to be sorted out by {dd-MM-yyyy-10M}
\nYou can see that the Handlebars expressions are drawn from the above list. You can also make use of inbuilt helpers to make this funkier (such as excluding empty properties). And the result is great!
It's also worth noting that we still retain the date expressions and tokens that were recently added to Event Timeout via Lurgle.Dates. This allows you to use the .NET custom date strings, with optional add/subtract modifiers, along with the simple date tokens that return parts of a date. The previous examples shown for these were:
\nAnd for simple date expressions:
\nWith the note, as always, that simple date tokens aren't great for calculating whole dates with addition/subtraction - they're more useful for scenarios like {MMMM-1} to simply state last month's name.
\nI plan to roll this enhancement into Event Threshold and Event Schedule as well, which will enable them to similarly use properties related to threshold violations or schedule logs.
\nSeq.App.EventTimeout | \n
---|
A while back I updated Lurgle.Logging to support new logging patterns. I've sat on it for a while, but some aspects of this weren't as well thought through as they could have been. Specifically, the idea of passing:
\n\nLog.Error(ex, \"Oh no! An error! {Message}\", Logging.NewCorrelationId(), args: ex.Message);
Log.Error(ex, \"Oh no! Barry had an error! {Message)\", \"Barry\", args: ex.Message);
was somewhat flawed. I allowed these static methods to take arguments, but the auto parameters for method name, source line number, and source file path would interfere with this, so that while one argument worked fine, you couldn't readily pass multiple arguments.
\nThat's pretty annoying, and not readily solvable without sacrificing the utility of capturing the calling method and so on. It's really the point of these methods, so it needed action.
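\nTo illustrate the clash with a hypothetical signature (not Lurgle.Logging's actual method):
using System;
using System.Runtime.CompilerServices;

// The caller-info parameters sit between the template and the params array, so a
// second positional argument lands in correlationId/methodName instead of args -
// which is why one argument worked fine, but multiple arguments misbehaved
static void Error(Exception ex, string template, string correlationId = null,
    [CallerMemberName] string methodName = null,
    [CallerLineNumber] int lineNumber = 0,
    [CallerFilePath] string filePath = null,
    params object[] args)
{
    // log via Serilog, attaching MethodName/LineNumber/SourceFile properties
}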
\nHence - while I generally prefer not to do breaking changes - I've wound the patterns back somewhat. You can't pass log templates to these static methods anymore - but you can still use them as a shortcut to pass the log level.
\nHence the log patterns called out from LurgleTest in the original post have morphed somewhat - but they retain their Fluent nature, and I feel they're a bit more fully baked.
\n\nLog.Level().Add(\"{AppName:l} v{AppVersion:l} starting ...\");
Log.Add(\"Simple information log\");
Log.Add(LurgLevel.Debug, \"Simple debug log\");
Log.Information().Add(\"Information event\");
Log.Information().Add(\"Information event with {Properties:l}\", args: \"Properties\");
Log.Verbose().Add(\"Verbose event\");
Log.Verbose().Add(\"Verbose event with {Properties:l}\", args: \"Properties\");
Log.Debug().Add(\"Debug event\");
Log.Debug().Add(\"Debug event with {Properties:l}\", args: \"Properties\");
Log.Warning().Add(\"Warning event\");
Log.Warning().Add(\"Warning event with {Properties:l}\", args: \"Properties\");
Log.Error().Add(\"Error event\");
Log.Error().Add(\"Error event with {Properties:l}\", args: \"Properties\");
Log.Fatal().Add(\"Fatal event\");
Log.Fatal().Add(\"Fatal event with {Properties:l}\", args: \"Properties\");
Log.AddProperty(\"Barry\", \"Barry\").Warning(\"Warning event with {Barry:l}\");
Log.Error(new ArgumentOutOfRangeException(nameof(test))).Add(\"Exception: {Message:l}\", args: \"Error Message\");
Log.AddProperty(LurgLevel.Error, \"Barry\", \"Barry\").Add(\"Log an {Error:l}\", \"Error\");
Log.AddProperty(LurgLevel.Debug, \"Barry\", \"Barry\").Add(\"Just pass the log template with {Barry:l}\");
Log.AddProperty(new ArgumentOutOfRangeException(nameof(test)), \"Barry\", \"Barry\")
.Add(\"Pass an exception with {Barry:l}\");
Log.AddProperty(test).AddProperty(\"Barry\", \"Barry\").Add(
\"{Barry:l} wants to pass a dictionary that results in the TestDictKey property having {TestDictKey}\");
Log.Level().Warning(\"Override the event level and specify params like {Test:l}\", \"Test\");
Functionally, Log.Information / Verbose / Debug / Error / Fatal all now work more or less like Level and Exception, but with an implicit LurgLevel that matches the method. You can pass an exception, CorrelationId, and a showMethod like the older methods - but they no longer accept log templates or arguments.
\nThe one exception here is Log.Add(), which can still pass a message template and optionally accept an Exception and/or LurgLevel. Otherwise this has been reverted to the old behaviour, where it did not accept arguments - Log.Add is for simple usages that don't pass arguments.
\nThe net effect is a more fully baked Lurgle that doesn't leave you with strange, unexpected, or even annoying results.
\nAnd the fancy links below, if you feel brave enough to behold the awesomeness of Lurgle and are somehow unable to use Nuget to add it to your project ...
\n\n \n
", "author": { "name": "MattMofDoom" }, "tags": [ "Updates", "Structured logging", "Serilog", "Seq", "Lurgle.Logging", "Lurgle", "C#", "Apps" ], "date_published": "2021-08-15T02:09:44-07:00", "date_modified": "2022-01-22T16:21:47-08:00" }, { "id": "https://mattmofdoom.com/lurgledates-eventx-trilogy-seq-reporter-and-seqappatlassianjira-updated/", "url": "https://mattmofdoom.com/lurgledates-eventx-trilogy-seq-reporter-and-seqappatlassianjira-updated/", "title": "Lurgle.Dates, EventX Trilogy, Seq Reporter, and Seq.App.Atlassian.Jira updated - Improved date expressions and line breaks", "summary": "A quick post on app updates - both my own and others. Over the past week, I realised that when adding Jira date expression parsing to the codebase for the Jira app, I missed out on parsing \"w\" (weeks), which then was also missed in Lurgle.Dates and by extension the EventX Trilogy (Event Timeout, Event Threshold, and Event Schedule) and Seq Reporter, which all leverage the shared library. My excuse is that I don't often use weeks in time tracking or due dates, weak excuse though it is.. So - that meant updating the Jira app with Ali's assistance. In…", "content_html": "
A quick post on app updates - both my own and others.
\nOver the past week, I realised that when adding Jira date expression parsing to the codebase for the Jira app, I missed out on parsing \"w\" (weeks), which then was also missed in Lurgle.Dates and by extension the EventX Trilogy (Event Timeout, Event Threshold, and Event Schedule) and Seq Reporter, which all leverage the shared library. My excuse is that I don't often use weeks in time tracking or due dates, weak excuse though it is.
\nSo - that meant updating the Jira app with Ali's assistance. In that case, I figured we might as well address the line break issue that I previously noted in the post about using Handlebars. Now that the update to Seq.App.Atlassian.Jira is live, I've updated that post to reflect that \\n and \\r will work as expected!
\nI had already pre-emptively updated Lurgle.Dates and the downstream apps to support parsing weeks, so these are already ready to go. This means, for example, that Event Schedule can set a due date of 4w 1d 1h 1m - 4 weeks, 1 day, 1 hour, 1 minute - on your Jira issue.
\nEvent Schedule also now features a multi-line Description field, so that you can more clearly configure your description field that's bound to be parsed through as a Handlebars template!
\nBy way of a side note - the Seq.App.OpsGenie package went to a v1.0.0 release this week! This was a fun and productive collaboration that kicked off a lot of the interoperability improvements that we're attempting to foster in the overall Seq community, so I'm proud to have been a part of it!
\nAside from Seq Reporter, which is a download from Github, you can install the Seq apps to your Seq instance using the below package ids, and Lurgle.Dates to your Visual Studio projects via the usual means. Fancy links included for bonus points.
\nReminder - Current builds of Seq have a bug with the Nuget v2 API, so the workaround is to either manually update Event Timeout to the current build via the Manage page for the app, or change your Nuget feed settings.
\n\n
Seq.App.EventTimeout | \n|
---|---|
Seq.App.EventThreshold | \n|
Seq.App.EventSchedule | \n|
Seq.Client.Reporter | \n|
Lurgle.Dates | \n
One of the really useful things with Seq.App.Opsgenie was that Nick Blumhardt had integrated Handlebars templates to the app, using Handlebars.NET. I liked it so much that, when I contributed to Ali Özgür's excellent Seq.App.Atlassian.Jira project, I carried it over.
\nThere are a few oddities in the JIRA REST API, and interacting with it via user-entered data in a C# application can get complex. It particularly doesn't handle Wiki markup like \\n and \\t as well as you might expect. Updating the Jira Description property to a \"LongText\" input type in Seq.App.Atlassian.Jira helps quite a bit, since you can enter a multi-line config that works more as you'd expect - tables and headers, for example, will render properly. We might need to circle back and do that for Seq.App.OpsGenie, so that editing templates is neater at the least.
\nThe latest version of Seq.App.Atlassian.Jira fixes the linebreak issue below - \\n and \\r will now work as expected. For the sake of compatibility, \\\\ and \\\\\\\\ are still accepted.
\nThe only oddity left is that, if we want to explicitly call for a linebreak, we need to express it as the Wiki format \"\\\\\" - but thanks to the interaction between Seq, the Jira app, and the REST API, we need to escape it, and put it on a new line or with spaces between it and the other content. So to make a line break, you need to put \" \\\\\\\\ \" for it to work as expected.
\nWhat I love about having Handlebars templates is that you get access to the inbuilt \"if\" and \"each\" statements. So if you want a simple Jira issue which shows the event message and then each property of the event as a table, you can do;
\nJira Summary | \n[Seq] - ({{$Level}}) {{$Message}} | \n
---|---|
Jira Description | \n{{$Message}} \\n \\n {{#each $Properties}}|| {{@key}} : | {{this}} || {{/each}} \\n Alert created by Seq | \n
And this will give you a very nicely formatted Key: and Value table.
\nOf course, you can get quite fancy with this, because Handlebars gives you access to any property of the event. The \"each\" statement gives you a quick way to list these in a generic table or other format, but you can leverage each property directly and even use the \"if\" statement to only include parts if the value exists.
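\nFor example - using a hypothetical Username property - {{#if Username}}|| Username: | {{Username}} ||{{/if}} will only emit the table row when the event actually carries that property.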
\nJira Summary | \n[Seq] - ({{Priority}}) {{Summary}} | \n
---|---|
Jira Description | \n\n h2. {{Description}} | \n
So much more capability and power is possible here - not just tables, but all kinds of formatting and structure!
\nBecause Event Schedule for Seq is well placed to create tickets in Jira, I've updated its Scheduled log description property to allow the LongText setting type that permits multi-line inputs. Event Schedule doesn't use Handlebars templating (but does allow log token and date tags), but you can use this to create scheduled tickets that pass Jira formats through nicely.
This provides a Description property that can simply be passed through to Jira via Seq.App.Atlassian.Jira, so normal line feeds will work. An example of this is as follows:
Scheduled Log Message | \n{LogToken} - {LogTokenLong} - {MMMM yyyy-1m} - Review | \n
---|---|
Scheduled Log Description | \nPlease review the effectiveness of the IT Control \"{LogTokenLong}\" for *{MMMM yyyy-1m}*. This review looks at the evidence from the previous month ({MMM yy-1m}) to validate that controls remain effective. *Ensure that evidence is linked or attached, and that this clearly shows the effectiveness of the control.* | \n
Within the OpsGenie app, Nick had added access to the Seq event properties by way of tags that were prefixed with $, and I had extended to include a {{$EventUri}} tag that provided simple access to the URL for the event itself. I retained those for the Jira app.
\n\n{\"$Id\", evt.Id},
{\"$UtcTimestamp\", evt.TimestampUtc},
{\"$LocalTimestamp\", evt.Data.LocalTimestamp},
{\"$Level\", evt.Data.Level},
{\"$MessageTemplate\", evt.Data.MessageTemplate},
{\"$Message\", evt.Data.RenderedMessage},
{\"$Exception\", evt.Data.Exception},
{\"$Properties\", properties},
{\"$EventType\", \"$\" + evt.EventType.ToString(\"X8\")},
{\"$Instance\", host.InstanceName},
{\"$ServerUri\", host.BaseUri},
// Note, this will only be valid when events are streamed directly to the app, and not when the app is sending an alert notification.
{
\"$EventUri\",
string.Concat(host.BaseUri, \"#/events?filter=@Id%20%3D%20'\", evt.Id, \"'&show=expanded\")
}
An oddity between OpsGenie and Jira - both Atlassian properties - is that OpsGenie accepts HTML markup and I can't find any indication that Wiki markup is acceptable, while of course Jira only accepts Wiki markup. This means that if OpsGenie creates Jira tickets from your Seq alerts, and you've used HTML, you might get some oddly formatted JIra tickets.
\nAn OpsGenie alert config in the current version of Seq.Apps.OpsGenie might look like:
\nAlert message | \n[Seq] - {{$Level}}: {{AlertTitle}} - {{Condition}} | \n
---|---|
Alert description | \n\n <p>{{$Level}} - {{AlertTitle}}<br /><br /></p><p><table>{{#each $Properties}}<tr><td><b>{{@key}}:</b></td><td>{{this}}</td></tr>{{/each}}</table></p><p>Alert created by Seq</p> \n | \n
which is going to be a mess in Jira. To address this, we use Automation for Jira Lite to translate our HTML alerts to Opsgenie into Jira markup.
\nThis takes the form of an \"Edit issue fields\" on the \"Description\" field - I show the rule below, but we're leveraging Smart Values functionality to do the following;
\n{{issue.description.replaceAll(\"<br />\",\"\\n\").replaceAll(\"<b>\", \"*\").replaceAll(\"</b>\",\"*\").replaceAll(\"<p>\",\"\").replaceAll(\"</p>\",\"\\n\\n\").replaceAll(\"<table>\",\"\").replaceAll(\"<tr>\",\"|\").replaceAll(\"</tr>\",\"|\\n\").replaceAll(\"</td><td>\",\"|\").replaceAll(\"<td>\",\"|\").replaceAll(\"</td>\",\"|\").replaceAll(\"</table>\",\"\")}}
\nThis is a neat translation based on only updating HTML fields that we use within our Handlebars template for OpsGenie.
I hope this is of some assistance in getting people started with nicely formatted templates in Seq, and in translating OpsGenie HTML to Jira!
\n\n
", "author": { "name": "MattMofDoom" }, "tags": [ "Seq", "OpsGenie", "Jira", "Handlebars", "Event Schedule", "C#", "Apps" ], "date_published": "2021-08-08T17:02:04-07:00", "date_modified": "2022-01-22T16:18:01-08:00" }, { "id": "https://mattmofdoom.com/playing-nicely-with-others-in-the-seq-ecosystem/", "url": "https://mattmofdoom.com/playing-nicely-with-others-in-the-seq-ecosystem/", "title": "Playing nicely with others in the Seq ecosystem", "summary": "You would realise by now that I'm quite a fan of Seq. It's hard not to be, when you can download a free single user license and get started with a trial, a POC, or designing your monitoring and infrastructure. The growth in open source apps for Seq over the years that I've played with it helps, too. Here is a reasonably inexpensive structured logging server with a fairly low bar to get started, with a growing community that believes in giving back by creating and open sourcing their own apps to extend Seq's capabilities. I would class the current…", "content_html": "
You would realise by now that I'm quite a fan of Seq. It's hard not to be, when you can download a free single user license and get started with a trial, a POC, or designing your monitoring and infrastructure. The growth in open source apps for Seq over the years that I've played with it helps, too. Here is a reasonably inexpensive structured logging server with a fairly low bar to get started, with a growing community that believes in giving back by creating and open sourcing their own apps to extend Seq's capabilities.
\nI would class the current Seq app landscape as largely fitting into three main categories; this isn't an exhaustive list and of course my own apps show up ... you can readily see more examples over at Nuget.
\nType | \nNature | \nExamples | \n
Input apps | \nApps which receive input and ingest them into Seq | \n\n Seq.Input.MSSql | \n
Processing apps | \nApps which process ingested logs and transform them or produce results | \nSeq.App.Timeout Seq.App.EventTimeout Seq.App.EventThreshold | \n
Alert/output apps | \nApps which react to logs or dashboard alerts and send them 'somewhere' | \n\n Seq.App.OpsGenie | \n
And then of course there are 'freak' apps like Seq.App.OpsGenieHeartbeat which don't really fit the above, because it's an app that is predominantly concerned with external interaction ... it does produce some logs, but it's mostly leveraging Seq's existence to implement an OpsGenie feature - it's not really receiving, processing, or reacting to anything.
\nThere are other apps which could loosely fit into the above - for example, Seq.Client.WindowsLogins might arguably fit as an \"input\" app, and Seq.Client.Reporter could fit as a \"processing\" or \"output\" app - but they are external to Seq itself, and are simply implementations of Serilog (via Lurgle.Logging) and the Seq API so my impulse is not to categorise these in the same way.
\nSeq is quite free-form and \"rules free\" about how apps interact (or don't) and it affords a lot of flexibility in implementation. An input app may or may not be used (since you might be ingesting directly to Seq), you may or may not do some processing of the logs, and you can implement alerting in any number of ways, and modify that with signals and dashboard alert rules. It's actually quite ripe for a monitoring and alerting infrastructure that is based around applications.
\nBut what if the apps worked together more closely?
\nI'd run Seq for years in a POC single user implementation, but when I started seriously building out the monitoring and alerting infrastructure, I noted that I was accepting some inherent limitations. For example, Seq.App.Opsgenie did not allow for defining priority and responders, and I needed a new instance if I wanted to pass different tags. Seq.App.Atlassian.Jira didn't allow for defining priority and setting assignees, and again, needed different instances to pass different labels.
\nThere's no criticism in that - they were limitations based on the current versions of those apps, and there was opportunity to enhance them, especially after I started creating my own apps to meet our needs. My first serious app, Event Timeout, had a feature - initially quite useless - to set a property containing tags with each timeout. There was nothing to consume them.
\nSo I started to enhance Seq.App.OpsGenie, and indeed the first enhancement was to allow the OpsGenie app to read a property from log events to set tags dynamically - while still allowing a list of static tags that could be combined. Nick Blumhardt was really receptive and collaborative with this, and I followed up with adding my ideas to map Priority and Responder values to event properties (or setting static values if desired).
\nAdding these features made a massive difference to my implementation. Suddenly, with some tweaks to Event Timeout, we were setting the appropriate priority for a given timeout and directing them to the right team - that's no small deal.
\nRecently, I worked with Ali Özgür to extend Seq.App.Atlassian.Jira, and again - really receptive and enthusiastic. It's a pleasure to encounter people from all over the world like this, who see the value in an idea and embrace it - so cool. With Ali's input, we added the ability to pass Priority and Assignee - essentially the same as OpsGenie's Responder - along with Tags. Cool, right?
\nBut it then turned out that Ali also had an existing enhancement request to dynamically pass the Project Key from log properties, and it was only logical that this framework could fit that requirement too ... and then we also added time tracking (Initial/Estimated Time and Remaining Time) and Due Date to the same framework!
\nWhile we were at it, we added the Handlebars functionality from Seq.App.OpsGenie to the Jira app, because that template feature just made sense (and with the latest release, I have some really well formatted Jira issues being created ... some info blanked out in the below screenshot).
So we now have alert/output apps that allow for input and processing apps to pass properties that will be used to direct alerts or issues to the right team or person, with the right priority, with meaningful tags or labels, and even with the right properties to allow creation (since some Jira projects require time tracking) and set due dates.
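\nAs a minimal sketch of the idea - with illustrative property names rather than the apps' exact setting names - an input or processing app just attaches the routing values as structured properties, and the alert app maps them from the event:
// Sketch only; assumes Serilog (as used via Lurgle.Logging), with hypothetical property names
Log.ForContext(\"Priority\", \"P1\")
    .ForContext(\"Responders\", \"Platform Team\")
    .ForContext(\"Tags\", \"SLA,Overnight\")
    .Error(\"Timeout expired waiting for overnight batch\");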
\nWe also have enhancements coming to Seq itself to allow dashboard alerts to leverage these properties as well, and I've been busy extending my own input and processing apps to pass the properties and populate them in a meaningful way.
\nThis stuff is awesome, and it's all powered by a core strength of Seq - structured logging. Simply making more and better properties available in the logs driving these apps makes it possible for the results to be so much more accurate and targeted, and we can even start using those extra properties for other purposes - what about a Seq Reporter report of all logs that were alerted to a given team? Easy!
\nI've recently worked on an extension to Seq.Input.MSSQL that would allow drawing all these properties from database views and queries, which (if or when merged) will make a great app even better for my purposes, and I hope for others too. Again, there's an existing enhancement request for the MSSQL Input to draw values from properties - in this case, event levels. Quite easy if you're already doing it for other properties.
\nI already make a lot of use of a forked version of the MSSQL app, which has a couple of bug fixes, to drive a bunch of alerts around critical SLAs ... simply put, it gives me the ability to turn database rows into meaningful logs that can be actioned. If the data doesn't quite fit that format, if it needs some transformative effort - a database view is quite viable and incredibly powerful. One of our biggest game changers was to turn the SQL Agent job logs into Seq logs that could tell us when a processing job failed, simply by creating a view that fit the bill. Extending this further to fit the interoperability picture, dynamically directing alerts in the way they need to be handled, is absolute gravy on top of an already great app.
\nI look at this as a big opportunity for Seq to foster a collaborative platform of apps that interoperate well, and I hope to help push that bar even further. Obviously my focus is initially on \"what I know\" and \"what we need\", so Jira and OpsGenie and SQL come into the picture - but this picture doesn't have to be constrained there. Structured logging has an incredible level of power and capability, and Seq as a platform has an exciting future and direction - if it's supported by an app ecosystem with extensive interoperability, so much the better!
\nI suppose this little interaction with Datalust really says it all 😊
", "author": { "name": "MattMofDoom" }, "tags": [ "Windows Logins", "Teams", "Structured logging", "Serilog", "Seq", "SQL", "Reports", "Reporter", "OpsGenie", "MSSQL", "Lurgle.Logging", "Lurgle", "Jira", "Heartbeat", "Handlebars", "GELF", "EventX Trilogy", "EventLog", "Event Timeout", "Event Threshold", "Event Schedule", "Email", "C#", "Apps" ], "date_published": "2021-08-07T19:49:04-07:00", "date_modified": "2022-01-22T16:19:46-08:00" }, { "id": "https://mattmofdoom.com/seq-reporter-v103-who-needs-email-when-you-can-raise-a-jira-issue/", "url": "https://mattmofdoom.com/seq-reporter-v103-who-needs-email-when-you-can-raise-a-jira-issue/", "title": "Seq Reporter v1.0.3 - Who needs email when you can raise a Jira issue?", "summary": "Reporting all the things Seq Reporter is the command-line client that can be used to schedule reporting from your Seq structured logs. It drives a number of daily and monthly reports for us and overall, it works well. We just set our query config and time range, schedule it, and away it goes with no fuss! With the creation of Lurgle.Dates, I included the Seq Reporter date expression logic in this new common date library, so it made it inevitable that I'd update Seq Reporter to use the common library ... but I looked at what else would be useful…", "content_html": "
Seq Reporter is the command-line client that can be used to schedule reporting from your Seq structured logs. It drives a number of daily and monthly reports for us and overall, it works well. We just set our query config and time range, schedule it, and away it goes with no fuss!
\nWith the creation of Lurgle.Dates, I included the Seq Reporter date expression logic in this new common date library, so it made it inevitable that I'd update Seq Reporter to use the common library ... but I looked at what else would be useful for us.
\nA number of our reports are bound for Jira, as part of scheduled monthly SLA reporting. We were emailing them to a Jira mailbox, which would then pick them up and turn them into tickets. The obvious opportunity here is to instead create the ticket directly from Seq Reporter.
\nIt helped to be using Lurgle.Dates, because we could also leverage the Jira date token logic that I'd merged into this library. I did add an additional method to Lurgle.Dates that allows conversion of these tokens to a DateTime for the purpose of configuring the optional Due Date field in Jira, so Lurgle.Dates v1.0.7 is now out!
\nWith v1.0.3, you'll see that the Test.config file now contains additional config items:
\n\n<add key=\"ValidateTls\" value=\"false\" />
<!-- Email, Jira, EmailAndJira -->
<add key=\"ReportDestination\" value=\"Jira\" />
<!-- Mandatory Jira attributes-->
<add key=\"JiraUrl\" value=\"https://jira.domain.com\" />
<add key=\"JiraUsername\" value=\"Bob\" />
<add key=\"JiraPassword\" value=\"Builder\" />
<add key=\"JiraProject\" value=\"TEST\" />
<add key=\"JiraIssueType\" value=\"Task\" />
<add key=\"JiraPriority\" value=\"Medium\" />
<!-- Optional attributes -->
<add key=\"JiraAssignee\" value=\"BBuilder\" />
<add key=\"JiraLabels\" value=\"Test,Labels\" />
<add key=\"JiraInitialEstimate\" value=\"1d\" />
<add key=\"JiraRemainingEstimate\" value=\"1d\" />
<add key=\"JiraDueDate\" value=\"7d\" />
ValidateTls is a new config attribute which defaults to true. In v1.0.2, Seq Reporter was always disabling TLS validation because it helped with test scenarios. I've turned that into a configuration because it's generally best practice to validate TLS connections.
\nIf set to false, all TLS validation will be disabled. If set to true - the default - Seq Reporter will validate TLS connections using the standard .NET TLS validation rules.
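\nFor context, this is one common way to disable TLS validation in .NET Framework apps - a sketch only, not necessarily Seq Reporter's exact code:
// Accept any certificate - unsafe outside test environments (requires System.Net)
ServicePointManager.ServerCertificateValidationCallback =
    (sender, certificate, chain, errors) => true;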
\nThere are 6 mandatory and 5 optional Jira configs.
\nName | \nDescription | \nRequired? | \n
JiraUrl | \nThe URL of your Jira instance with http:// or https:// prefix. Only the FQDN (eg. https://jira.domain.com) is needed. | \nYes | \n
JiraUsername | \nA username with permission to logon to Jira and create issues via the REST API. | \nYes | \n
JiraPassword | \nPassword for the Jira user | \nYes | \n
JiraProject | \nThe project key to create an issue under | \nYes | \n
JiraIssueType | \nType of issue to create, eg. Task, Service Request, Incident | \nYes | \n
JiraPriority | \nPriority of issue, eg. Highest, High, Medium, Low | \nYes | \n
JiraAssignee | \nA valid Jira username to assign the ticket to | \nNo | \n
JiraLabels | \nA comma-delimited list of labels to apply to the ticket. Don't put spaces in labels. | \nNo | \n
JiraInitialEstimate | \nThe Original Time Estimate field for the Jira issue. If not supported by the project, issue creation may fail. | \nNo | \n
JiraRemainingEstimate | \nRemaining Time Estimate field for the Jira issue. If configured, JiraInitialEstimate is also required, or time tracking fields won't be added. | \nNo | \n
JiraDueDate | \nDue Date expressed as a Jira date expression (eg. 7d) | \nNo | \n
If configured correctly, issues will automatically be created with the report attached!
\nThe Templates folder includes an alertJiraTemplate.txt file. This has a Handlebars template that will be used for creating the issue, which allows customisation of the ticket's description.
\nThe Jira issue logic uses the same proxy settings as Seq - so if you disable proxy for Seq, it will be disabled for Jira as well.
\nThe QueryTimeout config setting had an error which meant that it wasn't used. I've fixed that, and also ensured that both the Seq query and the underlying HttpClient are set to the QueryTimeout. That should address queries failing earlier than expected.
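\nThe shape of that fix is simple - one setting drives both timeouts. A sketch with assumed variable names, not the exact Seq Reporter code:
// Illustrative only - apply the configured QueryTimeout (seconds) to both layers
var timeout = TimeSpan.FromSeconds(queryTimeout);
httpClient.Timeout = timeout;                          // the underlying HttpClient
using var cts = new CancellationTokenSource(timeout);  // token passed to the Seq query call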
\nAs always, you can download the latest version via the below fancy links. I may yet update Seq.Client.Reporter to include delivering reports via SFTP, but I'm happy with this new enhancement!
", "author": { "name": "MattMofDoom" }, "tags": [ "Updates", "Structured logging", "Seq", "Reports", "Reporter", "Lurgle.Logging", "Lurgle.Dates", "Lurgle.Alerting", "Lurgle", "Liquid", "Jira", "Handlebars", "C#", "Apps" ], "date_published": "2021-08-03T20:40:55-07:00", "date_modified": "2022-01-22T15:39:06-08:00" }, { "id": "https://mattmofdoom.com/updates-to-event-timeout-event-threshold-and-event-schedule-for-seq-and-introducing-lurgledates/", "url": "https://mattmofdoom.com/updates-to-event-timeout-event-threshold-and-event-schedule-for-seq-and-introducing-lurgledates/", "title": "Updates to Event Timeout, Event Threshold, and Event Schedule for Seq ... and introducing Lurgle.Dates!", "summary": "Grr ... Bugses. We hates them precious. There's nothing worse than code that doesn't quite work as intended. Well, that's not quite true. If you have 3 products with common code that doesn't work as intended, that's probably worse. Anyway, I noticed that Event Schedule fired prematurely on the last day of the month, instead of the first day of the month as scheduled. Which, of course, meant that when I checked Event Timeout, the instances that had settings for the first day of the month (different timeouts) were firing on both the last and first day of the month.", "content_html": "
There's nothing worse than code that doesn't quite work as intended. Well, that's not quite true. If you have 3 products with common code that doesn't work as intended, that's probably worse.
\nAnyway, I noticed that Event Schedule fired prematurely on the last day of the month, instead of the first day of the month as scheduled. Which, of course, meant that when I checked Event Timeout, the instances that had settings for the first day of the month (different timeouts) were firing on both the last and first day of the month. How annoying, and wrong.
\nWhen I blogged about Event Schedule being released, I foreshadowed that the rapid development of Event Threshold and Event Schedule had led to features being duplicated from Event Timeout, and that I would likely move to a common library.
\nThis bug provided the rationale to do so. Rather than fixing a bug three times in three codebases, it made sense to consolidate to that single library and fix it once.
\nSo Lurgle.Dates was born. I'm going to dive into the functionality available for Lurgle.Dates, but if you're only interested in the EventX updates, skip to Updates to EventX Trilogy.
\nThe first thing to go into Lurgle.Dates is, of course, the common date logic that is used by the \"EventX Trilogy\" - Event Timeout, Event Threshold, and Event Schedule. This is logic that, given a string of date expressions, a start time, format of the start time, and the current date/time, will give you a list of dates for those date expressions.
\nThis was where the bug fix lay. The mistake I'd made in Event Timeout, and duplicated into Event Threshold and Event Schedule, was to rely on a list of integers for the day of month. It \"works\" but it can fail at various date boundaries (because not all months are the same length).
\nI'd put off the \"real\" fix when I only had one codebase to work with, and instead worked on tweaking the logic. But in short - the real fix is to always deal in DateTime values, so that the date and time you're relying on is correct.
\nSo if I pass
\ndates = Dates.GetDaysOfMonth(\"first\", \"9:00\", \"H:mm\", DateTime.Now);
I should be able to expect that the single date returned in the list is the 1st of next month, 9:00am. Moving to a List<DateTime>
does just that.
Now, in moving to a common library, it makes sense to cater for both local and UTC dates. So if I pass
\ndates = Dates.GetUtcDaysOfMonth(\"first\", \"9:00\", \"H:mm\", DateTime.Now);
I'll get that first day of the month in UTC+0, rather than local time. That's what we need when working with Seq.
\nThis is the guts of the EventX date inclusion and exclusion capabilities, with strong flexibility in what you can provide as a date expression.
\nThe Dates class also includes a method to return a list of days of week, given a string of day names, start time, and format of the start time. We provide methods for local and UTC calculations here as well;
\ndaysOfWeek = Dates.GetDaysOfWeek(\"Monday,Tuesday,Wednesday,Thursday,Friday\", \"9:00\", \"H:mm\");
daysOfWeek = Dates.GetUtcDaysOfWeek(\"Monday,Tuesday,Wednesday,Thursday,Friday\", \"9:00\", \"H:mm\");
The GetUtcDaysOfWeek call is particularly useful for Seq purposes, because it will shift the list to reflect the correct day in UTC time given your start time (eg. Monday 9:00am would be Sunday 11:00pm when converted to UTC from the AEST+10 timezone). Otherwise - it's a fancy string to enum converter.
\nOf course, the EventX Trilogy have the AbstractAPI Holidays API integrated, to allow automatically excluding public holidays in your locale. This has now been moved into Lurgle.Dates.
\nThis is implemented using Flurl.Http and includes the ability to specify a proxy configuration when you don't have direct connectivity. This is simply done via
\nWebClient.SetConfig(AppName, useProxy, proxyUrl, proxyUserName, proxyPass, bypassLocal, localAddresses);
where useProxy and bypassLocal are boolean, and localAddresses is an array of URLs to exclude from proxy. Only AppName and UseProxy have to be specified, so if you don't need proxy, just pass
\nWebClient.SetConfig(AppName, useProxy)
The AppName is used to set the UserAgent field in the HTTP request.
\nTo retrieve the AbstractAPI Holidays for today, you'll call
\nvar result = WebClient.GetHolidays(ApiKey, \"AU\", DateTime.Today).Result;
And to handle the resulting List<AbstractApiHolidays>
:
var holidays = Holidays.ValidateHolidays(result, \"National,Local\", \"Australia,New South Wales\", includeBank, includeWeekends);
which will validate the result using your rules for holiday type (only match National and Local in the above sample) and locations (only match Australia and New South Wales). IncludeBank and IncludeWeekends are boolean to direct whether to include holidays that match \"Bank Holiday\", and holidays that fall on weekends.
\nThe end result is a List<AbstractApiHolidays>
that you can reference with;
\nforeach (var holiday in holidays)
{
var utcStart = holiday.UtcStart
var utcEnd = holiday.UtcEnd
var localStart = holiday.LocalStart
}
which are DateTime values for the holidays that are matched by your rules.
\nIntroduced with Event Schedule, I added the ability to include Date tokens and expressions in the Summary, Description, and Tags properties that are logged when a schedule triggers.
\nDateTokens.ValidDate(DateString);
This simply validates, via a simple regex, that a string has been passed in \"yyyy-MM-dd\" format, and returns true/false.
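\nA minimal sketch of what that validation can look like (the library's actual implementation may differ; requires System.Text.RegularExpressions and System.Globalization):
bool ValidDate(string value) =>
    Regex.IsMatch(value, @\"^\\d{4}-\\d{2}-\\d{2}$\") &&
    DateTime.TryParseExact(value, \"yyyy-MM-dd\", CultureInfo.InvariantCulture, DateTimeStyles.None, out _);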
\nDateTokens.ValidDateExpression(DateExpressionString);
This validates that a date expression has been passed with at least one of the following values (spaces are optional, because we can correct those with the next method).
\n1d 1h 1m
where d = day, h = hour, and m = minute. Obviously you can specify any valid numeric value for day, hour, and minute. You want a date expression of 9000h? Go for it.
\nDateTokens.SetValidExpression(DateExpressionString));
This simply makes sure that we have a valid date expression that includes spaces between the day, hour, and minute expressions. This is needed when you send these expressions to Jira, for example.
\nDateTokens.HandleTokens(valueList, tokenKeyPair);
DateTokens.HandleTokens(message, tokenKeyPair);
DateTokens.HandleTokens(message);
This is where the magic of Event Schedule comes from. You can specify more or less any valid .NET custom date string as a token in the Summary, Description, and Tags fields. Optionally, you can append modifiers to the end to subtract days, months, or years.
\nThe examples I gave when introducing this feature still apply;
\nAnd you can specify simple date tokens as well, to return parts of a date and optional add/subtract:
\nbut bear in mind that if you try to calculate last month's dates with simple date tokens, you'll wind up disappointed. That's what the .NET custom date logic with modifiers is for.
\nAs of v1.0.7, you can easily turn these expressions into datetimes, using DateTokens.CalculateDateExpression(dateToken);
On top of this, we include {LogToken} and {LogTokenLong} tags that can optionally be passed to HandleTokens as a keypair. Simply put, if your message includes those tags, whatever you pass in the keypair will be applied as follows:
\nWhich makes the Event Schedule functionality of multi-log tokens possible, since it can maintain a dictionary of LogToken=LogTokenLong values.
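\nAs a usage sketch (the keypair type here is an assumption based on the signatures above):
// Substitute {LogToken} and {LogTokenLong} in a template message - illustrative only
var pair = new KeyValuePair<string, string>(\"IT4\", \"IT Maintenance Task - Servers\");
var message = DateTokens.HandleTokens(\"{LogToken} - {LogTokenLong} - Review\", pair);
// message => \"IT4 - IT Maintenance Task - Servers - Review\"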
\nI dropped the initial \"custom tokens\" idea from this method because, frankly, since I already had the ability to set a single template summary/description/tag with multiple date expressions and the log token keypair, the static list of custom tokens really added nothing useful.
\nThis is where I branched out and pulled in code that had been created for Seq Reporter. I needed a fairly flexible date expression scheme that would allow me to specify a period and time. So I created something that allowed expressions like;
\nwhich is nice enough if you want to calculate based on the current time, but I also allowed for hybrid date and time expressions;
\nAs well as a best effort to parse a specific date and time string into a DateTime, using the local culture rules.
\nFor the uplift to Lurgle.Dates, I created both UTC and local methods. Seq.Client.Reporter operates in UTC to be consistent with Seq, and converts local to UTC as part of its methods, but having clarity between UTC or local is useful - hence the calls look like;
\nDateParse.GetDateTime(\"30d\"))
DateParse.GetDateTimeUtc(\"30d 9:00\")
DateParse.GetDateTime(\"-30d\"))
DateParse.GetDateTimeUtc(\"-30d 9:00\")
DateParse.GetDateTime(\"+30d\")
DateParse.GetDateTimeUtc(\"+30d 9:00\")
Note that for Lurgle.Dates, I've added the ability to specify a \"+\" or \"-\" operator, to instruct whether to add or subtract the date expression. Since the original functionality was to subtract, subtract is the default without these operators - you'll get 30 days ago.
\nSo that's Lurgle.Dates in a nutshell. It's a powerful library of date expressions, tokens, and parsing that powers multiple projects - starting with Event Timeout, Event Threshold, and Event Schedule.
\nEvent Timeout and Event Threshold particularly benefit, both from the bug fix and from uplifting to a level of feature parity with Event Schedule where it makes sense. This means that integration with other apps, like Seq.App.OpsGenie and Seq.App.Atlassian.Jira, can potentially benefit - although I need to circle back on the OpsGenie app and see if I can pass those properties in a way that's useful to OpsGenie (unlikely with how OpsGenie APIs are structured, though).
\nYou may want to read this important note before updating these two - this issue \"shouldn't\" occur, but if it does, it's a simple fix by editing and saving your instance settings.
\nYou may need to update your Nuget API feed to see the Event Timeout update, due to an issue with the Nuget v2 API.
\nPowerful capabilities and consistent date handling via a common library - hard to argue with, right? Here's a collection of fancy links, but of course to install or update the EventX apps in Nuget, all you need is the Nuget ID which is also shown below.
Seq.App.EventTimeout | \n|
---|---|
Seq.App.EventThreshold | \n|
Seq.App.EventSchedule | \n|
Lurgle.Dates | \n
Update - The specific cause of the below problem was subsequently identified as Event Timeout recently reaching 100 Nuget versions, which meant that the new versions were on the next page of results. The Seq implementation of Nuget wasn't handling paged results correctly.
Many thanks to Joel Verhagen from the Nuget team, and Nicholas Blumhardt and Ashley Mannix over at Datalust for working together to pinpoint the issue!
I encountered a weird problem with releasing a new update to Event Timeout which appears to be related to the Nuget v2 API. Essentially, the Nuget v2 API will only return v1.4.8 as the latest version, no matter what I do - even if 1.4.8 is unlisted.
\nDiscussing the issue with Datalust, we were all able to confirm the problem and identify that switching to the v3 API would resolve the problem. Seq defaults to the Nuget v2 API on Windows but can be changed. Seq 2021.3 will automatically update v2 feeds to v3 when released.
\nTo do this, go to your Seq Settings, and select Feeds.
\n\nClick on the nuget.org link, and update the Location from
\nhttps://www.nuget.org/api/v2/
\nto
\nhttps://api.nuget.org/v3/index.json
\n\nAnd click Save Changes.
\nYour Nuget packages will still install, but any that had an issue with updating (like Event Timeout) will now update!
\nI have sent a message to the Nuget maintainers regarding the v2 API issue so it can hopefully be resolved in the interim.
", "author": { "name": "MattMofDoom" }, "tags": [ "Updates", "Seq", "Event Timeout", "Apps" ], "date_published": "2021-08-01T17:55:34-07:00", "date_modified": "2021-08-04T15:40:16-07:00" }, { "id": "https://mattmofdoom.com/note-updates-to-event-timeout-and-event-threshold-may-cause-instances-to-stop/", "url": "https://mattmofdoom.com/note-updates-to-event-timeout-and-event-threshold-may-cause-instances-to-stop/", "title": "NOTE: Updates to Event Timeout and Event Threshold may cause instances to stop", "summary": "I've been working on new updates to the EventX Trilogy - Event Timeout, Event Threshold, and Event Schedule - which I will blog about in more detail, but I wanted to put a quick note out about an issue that may occur when you update to Event Timeout v1.5.1 and Event Threshold v1.0.7. I ported the \"Include description with log message\" setting across. This didn't exist in previous versions, and it's possible that the setting missing may cause your instance to stop. If this does occur, simple open the settings for each instance and save. Optionally, you may wish to…", "content_html": "I've been working on new updates to the EventX Trilogy - Event Timeout, Event Threshold, and Event Schedule - which I will blog about in more detail, but I wanted to put a quick note out about an issue that may occur when you update to Event Timeout v1.5.1 and Event Threshold v1.0.7.
\nI ported the \"Include description with log message\" setting across. This didn't exist in previous versions, and it's possible that the missing setting may cause your instance to stop.
\nIf this does occur, simply open the settings for each instance and save. Optionally, you may wish to enable \"Include description with log message\" to preserve the behaviour of old versions.
\nI've made a change which will hopefully prevent this from happening with v1.5.1 and v1.0.7, but it doesn't hurt to put out a quick note.
\nI also found that Seq wasn't finding the new version of Event Timeout - if this happens, go into the app's Manage and manually type 1.5.1 and then click Update.
Seq.App.EventTimeout | \n|
---|---|
Seq.App.EventThreshold | \n|
Seq.App.EventSchedule | \n
After the last update to Seq.Client.WindowsLogins, in which I cursed the very existence of EventLog().EntryWritten, the Seq Client for Windows Logins has proven to be extremely reliable, and we've been really happy with it. It does exactly what it's supposed to do - it logs events to Seq when an interactive user logs into a given server.
\nThis works quite well with a Seq app like Seq.App.Opsgenie or Seq.App.Atlassian.Jira. Recently I've been engaged with the authors in making some updates to these applications and my own, to make them interoperate better. I plan to write about the Jira enhancements soon, but the net result is that apps such as Event Schedule can log properties that are automatically picked up by these apps, such as responder/assignee, tags, priority, and so on. This drives better integration with the upstream application, along with allowing for better structured properties and more detailed templates!
\nSeq Client for Windows Logins is a great candidate to provide these kind of properties to a Seq app. We want to raise a Jira ticket or an Opsgenie alert if a login happens, and the opportunity arises to allow Seq.Client.WindowsLogins to tell those apps who the ticket/alert should be assigned to, what tags and priority to use, etc.
\nAccordingly, I've now updated the service to include configuration items for the following properties:
\nThese are static configuration items within the Seq.Client.WindowsLogins.exe.config configuration file, and are not validated - whatever you set is what will be logged. If the property is not valid for the Opsgenie or Jira app, you may see a failure in the debug logs for those apps.
\nI have also added the EventTimeLong and EventTimeShort properties. These are formatted strings of the EventTime property, as a convenience for inclusion in the Handlebars templates for both the Jira and OpsGenie apps.
\nAn EventTime of 2021-07-29T09:58:42.3749981+10:00 will result in the following locale-dependent properties:
\nand you can readily reference these as {{EventTimeLong}} or {{EventTimeShort}} in your templates, as with any property.
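\nAs a sketch of how these could be derived - the format specifiers here are assumptions, not necessarily the client's exact choices:
// Locale-dependent long and short renderings of the event time
var eventTime = DateTimeOffset.Parse(\"2021-07-29T09:58:42.3749981+10:00\");
var eventTimeLong = eventTime.ToString(\"F\");   // eg. Thursday, 29 July 2021 9:58:42 AM
var eventTimeShort = eventTime.ToString(\"g\");  // eg. 29/07/2021 9:58 AM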
\nThis provides more control over your integrations, while cutting down on the number of OpsGenie or Jira instances needed - you can easily set up signals that will pick up multiple applications with valid properties to drive a single app instance.
\nYou can download Seq.Client.WindowsLogins v1.1.3 from the below fancy links!
\n
", "author": { "name": "MattMofDoom" }, "tags": [ "Windows Logins", "Updates", "Seq", "OpsGenie", "Lurgle.Logging", "Lurgle", "Jira", "Handlebars", "C#", "Apps" ], "date_published": "2021-07-28T22:40:53-07:00", "date_modified": "2022-01-22T15:49:22-08:00" }, { "id": "https://mattmofdoom.com/event-schedule-v1018-schedule-multiple-log-entries-and-jira-related-enhancements/", "url": "https://mattmofdoom.com/event-schedule-v1018-schedule-multiple-log-entries-and-jira-related-enhancements/", "title": "Event Schedule v1.0.18 - Schedule multiple log entries, and Jira-related enhancements!", "summary": "Ok, so we got through the date token and calculation updates. Now, I finally get to use this meme for something: We continue on with our march through turning Event Schedule for Seq into its own distinct and unique entity that works with other apps to provide a useful automation tool. This time, I contemplated that it seems fairly redundant to have to configure multiple instances of Event Schedule to create multiple Jira issues. What if we could just configure one instance from a common template? (Be a lot cooler if we could ...) I've added the following features; This…", "content_html": "
Ok, so we got through the date token and calculation updates. Now, I finally get to use this meme for something:
We continue on with our march through turning Event Schedule for Seq into its own distinct and unique entity that works with other apps to provide a useful automation tool. This time, I contemplated that it seems fairly redundant to have to configure multiple instances of Event Schedule to create multiple Jira issues. What if we could just configure one instance from a common template? (Be a lot cooler if we could ...)
\nI've added the following features;
\nThis allows a Multi-log token setting of:
\nIT4=IT Maintenance Task - Servers,\n
IT30=IT Maintenance Task - Patching,
IT39=IT Maintenance Task - Monitoring
and a Responders setting of:
\nIT4=JSmith,\n
IT30=BJones,
IT39=JDoe
with the Scheduled log message:
\n{LogToken} - {LogTokenLong} - {MMMM yyyy-1m} - Review\n
and the Scheduled log description:
\nPlease check \"{LogTokenLong}\" for {MMMM yyyy-1m}.\\n\\nThis ensures we have a working environment.\n
and the Scheduled log tags:
\n{LogToken},IT_{MMMyy-1m}\n
to cause 3 distinct issues to be raised in the target Jira project via the Seq.App.Atlassian.Jira app, and automatically assigned to the correct person! This operates according to your configured schedule - so 30 tickets correctly created at the start of the day, week, or month becomes child's play! If you desperately wanted, you could automatically create them every hour (uh, this is not a good idea, don't do this).
\nOn top of this, I've also added some features that are Jira-related (but could be used for other purposes with similar apps).
\nThis, combined with some updates to Seq.App.Atlassian.Jira that are pending merge, means that a single Jira app instance will be able to monitor one or more Event Schedule instances and create issues in the way required by the target project(s), even where due date and/or time tracking features are required. I'll talk about those updates soon.
\nI've built time expressions into this, so you can express the estimates and due date as valid Jira date expressions, eg.
\n1d 1h 1m\n
1h 1m
1m
1d
1d 1m
You can also specify a valid yyyy-MM-dd date for due date - but using a date expression will automatically translate to a valid due date, so is likely preferable.
\nThis enhancement means that leveraging Seq as a key piece of automation by using log entries to drive app interactions - like Jira - is even easier, and incredibly powerful!
\nYou can update your existing install from within Seq, install Event Schedule to your Seq instance using the Seq.App.EventSchedule Nuget tag, or otherwise - the fanciness of the below links is well known.
", "author": { "name": "MattMofDoom" }, "tags": [ "Updates", "Structured logging", "Seq", "Jira", "EventX Trilogy", "Event Schedule", "C#", "Apps" ], "date_published": "2021-07-28T01:26:30-07:00", "date_modified": "2022-01-22T16:14:24-08:00" }, { "id": "https://mattmofdoom.com/event-schedule-v1010-more-complex-date-calculations/", "url": "https://mattmofdoom.com/event-schedule-v1010-more-complex-date-calculations/", "title": "Event Schedule v1.0.10 - Date calculation expressions!", "summary": "After posting the last release of Event Schedule earlier today, I gave some more consideration to the implications of how the date calculations work. The problem, as touched on in the post, is that not all date calculations are equal. If, on the first of January 2022, I expect that Event Schedule will apply { MMMM-1} {yyyy} and return December 2021 - I'm going to be disappointed. That's a simple example, as is the last day of a given month - we can't calculate dates in isolation for some case. I thought about baking in a \"month\" filter to Event…", "content_html": "
After posting the last release of Event Schedule earlier today, I gave some more consideration to the implications of how the date calculations work.
\nThe problem, as touched on in the post, is that not all date calculations are equal. If, on the first of January 2022, I expect that Event Schedule will apply {MMMM-1} {yyyy} and return December 2021 - I'm going to be disappointed. That's a simple example, as is the last day of a given month - we can't calculate dates in isolation for some cases.
\nI thought about baking a \"month\" filter into Event Schedule, so you could do multiple instances of the same schedule for these scenarios. But to be frank - that sucks.
\nSo I spent a little more time to better support the more complex scenarios.
\nIn short, Event Schedule will now accept formatted date strings with a modifier at the end. There is a caveat - these strings must be capable of being formatted as .NET custom date strings - excepting, of course, the calculation that is permitted at the end.
\nPut simply, you specify date strings and (optionally) append a calculation to the end. The following examples would be valid (and are used as test cases):
\nThe calculation is based on adding or subtracting a number of d (days), m (months), or y (years). This allows for more power in calculations - still with some possible limitations, but ones that could only be answered with full date expressions (such as \"last day of month\"). I'm not sure that's strictly necessary for most usages.
\nWe allow \"/\" and \"-\" as separators within the date expression, so {d-M-y}, {d/MM/yy}, and {dd MMM yyyy} are all valid. As noted - the date calculation is optional, so you can structure a formatted date with no calculation as a result of this change.
\nWe also allow for the short or long day name to be prepended, with ddd or dddd:
\n{ddd d MMM yyyy}
\nDay is optional, but month and year are always required in these date expressions. If you want to specify just month or year - use the simple date tokens.
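\nUnder the hood, the idea is to apply the modifier to the whole date and then format it - a sketch, not the app's exact parsing code:
// Evaluate a token like {MMMM yyyy-1m}: shift the whole DateTime, then format
var shifted = DateTime.Now.AddMonths(-1);      // the \"-1m\" modifier
var result = shifted.ToString(\"MMMM yyyy\");  // eg. \"June 2021\" when run in July 2021
Because the arithmetic applies to the whole DateTime, .NET's AddDays/AddMonths/AddYears handle month-length and year boundaries correctly.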
\nHopefully this helps make it even easier to use Event Schedule to log a formatted event to be handled by your Seq apps! I think it's now quite likely that these capabilities will make their way to Event Timeout and Event Threshold to allow you to include dates in your configured log messages.
\nYou can update your existing install from within Seq, install Event Schedule to your Seq instance using the Seq.App.EventSchedule Nuget tag, or otherwise - fanciness ensues below:
Yesterday's releases for Event Schedule introduced the ability to include simple date tokens in the Scheduled log message
, Scheduled log description
, and Scheduled log tags
fields.
These were based entirely on returning the current day, month, or year - but what if you want to reference last month, next month, yesterday, or 10 years ago?
\nI've therefore allowed simple addition and subtraction operations on the date tokens, so if you want to express last month as a long name, you can simply configure {MMMM-1}. If the current month is July, then it will return June.
\nThis is not a fix for all scenarios, because it's not applying to the whole date. If you want to express:
\n{D} {MMM-1} {yyyy-1}
\nand run this on the 1st of every month - that should be okay, it will return;
\n1 Jun 2020.
\nBut if you want to do the same on the last day of the month, you might run into trouble given that the previous month end could have been 28, 29, 30, or 31. We don't handle that, because these are simple tokens and not full date expressions.
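\nA quick illustration of the difference:
// Simple tokens calculate each part in isolation - on 31 July, {dd} {MMM-1} yields \"31 Jun\", not a real date.
// Whole-date arithmetic (as used by the date calculation expressions) clamps correctly:
var lastMonthEnd = new DateTime(2021, 7, 31).AddMonths(-1); // 30 June 2021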
\nNonetheless, this is a useful addition that covers cases like creating a Jira ticket titled \"Scheduled Maintenance Review for June 2021\" on the 1st of July.
\nAs the tokens are still a new feature, I've updated the token reference table from yesterday below!
\nToken | \nSubstitute | \n
{d} {d+[nn]} {d-[nn]} | \nCurrent Day as digit (eg. 1, 26, etc) Optionally add/subtract days | \n
{dd} {dd+[nn]} {dd-[nn]} | \nCurrent Day as digit with padding zero where needed (eg. 01, 26, etc) Optionally add/subtract days | \n
{ddd} {ddd+[nn]} {ddd-[nn]} | \nCurrent short day name (eg. Mon, Fri, etc) Optionally add/subtract days | \n
{dddd} {dddd+[nn]} {dddd-[nn]} | \nCurrent long day name (eg. Monday, Friday, etc) Optionally add/subtract days | \n
{M} {M+[nn]} {M-[nn]} | \nCurrent Month as digit Optionally add/subtract months | \n
{MM} {MM+[nn]} {MM-[nn]} | \nCurrent Month as digit with padding zero where needed Optionally add/subtract months | \n
{MMM} {MMM+[nn]} {MMM-[nn]} | \nCurrent short month name Optionally add/subtract months | \n
{MMMM} {MMMM+[nn]} {MMMM-[nn]} | \nCurrent long month name Optionally add/subtract months | \n
{yy} {yy+[nn]} {yy-[nn]} | \nTwo digit year (eg. 20, 21, etc) Optionally add/subtract years | \n
{yyyy} {yyyy+[nn]} {yyyy-[nn]} | \nFour digit year (eg. 2020, 2021, etc) Optionally add/subtract years | \n
As noted yesterday - although tokens are presented above as lower and upper case, the replacement is not case sensitive. This does not strictly follow standard .NET formatting rules - it is a simple token substitution for convenience, with some enhancements to allow addition and subtraction.
\nI've also added a checkbox to allow you to enable or disable including the Description in the log message text. By default it is switched off.
\nReminder - If you have an existing install of Event Schedule with existing instances configured, the instances may stop due to addition of the Include description with log message
field.
To resolve, you simply need to go into each app and save the config - remember to turn the Include Description with log message
setting on if you want the previous behaviour to continue!
You can update your existing install from within Seq, install Event Schedule to your Seq instance using the Seq.App.EventSchedule Nuget tag, or reflect on the fanciness of the links below!
Event Schedule is a bit of a different beast from the Event Timeout and Event Threshold apps. It's an app that will (for example) allow you to automatically create Jira tickets every month via a Seq log entry. This means that it can have different needs from the other apps (or otherwise, might drive improvements to those apps if it's useful enough).
\nWith this in mind, it's useful for Event Schedule to be able to mark those tickets with date formats, like \"Scheduled Maintenance - July 2021\". This allows you to readily distinguish between different tickets in the destination system.
\nNote - After this was initially published, I looked at event tags as another possibility. Given that a main target is Jira, and updates to Seq.App.Atlassian.Jira mean that tags can be passed through as labels, it can be useful to produce a label like Maintenance_Jul2021 for the sake of filtering. Accordingly, v1.0.8 now includes the ability to include date tokens in your tags, along with an update to ensure that the Message and Description properties will always be included with your scheduled log events!
\nAccordingly, I've updated the Event Schedule code to allow date tokens, as follows;
\nToken | \nSubstitute | \n
{d} | \nCurrent Day as digit (eg. 1, 26, etc) | \n
{dd} | \nCurrent Day as digit with padding zero where needed (eg. 01, 26, etc) | \n
{ddd} | \nCurrent short day name (eg. Mon, Fri, etc) | \n
{dddd} | \nCurrent long day name (eg. Monday, Friday, etc) | \n
{M} | \nCurrent Month as digit | \n
{MM} | \nCurrent Month as digit with padding zero where needed | \n
{MMM} | \nCurrent short month name | \n
{MMMM} | \nCurrent long month name | \n
{yy} | \nTwo digit year (eg. 20, 21, etc) | \n
{yyyy} | \nFour digit year (eg. 2020, 2021, etc) | \n
This is accepted in both the Message and Description fields, and within Tags. Although presented above as lower and upper case, the replacement is not case sensitive. This does not strictly follow standard .NET formatting rules - it is a simple token substitution for convenience.
\nI've also added a checkbox to allow you to enable or disable including the Description in the log message text. By default it is switched off.
\nNote! If you have an existing install of Event Schedule with existing instances configured, the instances may stop due to addition of this field.
To resolve, you simply need to go into each app and save the config - remember to turn the \"Include Description with log message\" setting on if you want the previous behaviour to continue!
You can update your existing install from within Seq, install Event Schedule to your Seq instance using the Seq.App.EventSchedule Nuget tag, or use the most fancy links in the universe below!
With the work that I've done on my Seq apps (specifically Event Timeout and Event Threshold), I've managed to build quite a robust and versatile date and time system which can be set for some complex scenarios. The properties built into these apps are extensive, with an ability to set start and end times, days of week, day of month (using versatile date expressions), and automatic public holiday calculation and behaviour using the Abstract API Holidays API. And, of course - repeating timeouts (for Event Timeout ... Event Threshold inherently needs to repeat its intervals, dependent on how you configure it).
\nThe net effect is not dissimilar from Cron expressions, but possibly a bit more accessible within the context of the Seq app settings page.
\nThese are apps that, essentially, monitor an event stream and then output a configured event to the stream. You use this to fire off alerts or actions using another Seq App, like Seq.App.OpsGenie, Seq.App.Atlassian.Jira, or Seq.App.EmailPlus ... just a few of the apps that I use for various purposes that help us crush it with critical SLAs!
\nSo once you've built the groundwork for powerful scheduling, another possibility comes to mind; firing off scheduled tasks from Seq, and inadvertently creating a trilogy of related apps!
\nScheduling a log event that can be picked up by other apps turned out to be a really simple requirement to contemplate. Essentially the requirements were;
\nHaving an \"end time\" didn't make as much sense (at this stage) for the new app, so the following decisions were made;
\nFor the most part - creating this app was a matter of stripping code out, because we simply don't need to monitor logs from other apps, and we don't need to match properties in log events. If you configured an Event Schedule instance to stream incoming events, it would do nothing with those events. There doesn't appear to be a way to suppress that option in the Seq app settings page.
\nHaving common features across three Seq apps makes it likely that I'll move to a common library for the shared features, but I firstly wanted to get Event Threshold, and then Event Schedule, off the ground and working.
\nOverall, this was probably the simplest app to create, but one which will power a few key needs in our environment - like simply creating Jira tasks in specific projects on specific days of the month. There are automation options within Jira that can do this, dependent on what you have available for use, but this approach means that we can also see logs of the issues being created, and integrate it within our monitoring and alerting approach.
\nThe configuration for Event Schedule should be quite familiar if you've used Event Timeout or Event Threshold, albeit somewhat shorter given the reduction in settings as a result of not monitoring events.
\n\nAs you might expect, it's not hard for this app to work as intended - there's a fairly low bar that it has to meet, and it simply depends on how you've configured your time/weekday/date expression/public holidays.
\n\nBut as a bonus, here's a sneak peek of how it will look when sent via Seq.App.Atlassian.Jira with changes that I've contributed to the project (leveraging the work on, and features of, Seq.App.OpsGenie)!
\n\nThe result above is a result of the template that was set, but it should give a good idea of how powerful Event Schedule could be.
\nThe moment I know you've been waiting for. You can install Event Schedule to your Seq instance using the Seq.App.EventSchedule Nuget tag, or use the most fancy links in the universe below!
A lot of scripting can be involved in running an IT operation. While your business applications may be logging to Seq, is it any less important that you have visibility of key scripts? What happens when they fail and endanger your critical SLAs?
\nAt a fundamental level, logging is really important. Often, shell scripts already have logging - but this is to a text log file trapped somewhere on disk. You might send it to a SIEM or other logging server for ingestion, but what if you could treat your shell script like any other application?
\nEnter seqcli, a multi-function and multi-faceted command line tool. I touched before on its functionality that I initially modelled against for Seq Reporter, but one of its most basic functions is logging to Seq.
\nAs a .NET Core app, you can run seqcli on Windows, Linux, and OS X ... and it works great! I've integrated a number of Linux shell scripts with Seq using it. The key thing to remember is that Seq is at its best with well structured logs; exposing variables and properties to Seq for indexing and querying is always ideal. You can do that readily with seqcli ... and suddenly, getting an alert to something such as OpsGenie becomes simple!
\nIn the sample script below, you'll note that I tend to try to pull in relevant environment variables from the Linux scripts for sending to Seq - this in turn allows script troubleshooting and debugging without having to directly log in.
\nLogging is only as good as you make it, of course - if you aren't logging each stage of a script (start, processing, end), for example, you won't necessarily find it easy to diagnose where something failed. And more information is better - okay, a file transfer failed, but what file? Where from? Where was it being sent to?
\nAs a principle, try to standardise logging as far as possible. You'll see a familiar AppName
property in the below script, which I use in Lurgle.Logging, the Log4j appender, Seq Reporter, Seq Client for Windows Logins, and Seq apps like Event Timeout, Event Threshold, and OpsGenie Heartbeat. This is valuable data that helps build signals, dashboards, and alerts - being able to differentiate between applications that log to Seq is important. Equally, I often include an Environment
or MachineName
property that allows differentiation between environments for similar reasons.
The logEntry
function below is structured to default to Information events, with the ability to pass other log levels such as Warning or Error. It can be pasted into a shell script (eg. Bash, Ksh, etc), but it's always worth taking a look at what environment variables are being set. Including those as Seq properties is simple - just add another -p PropertyName=\"$VariableName\"
- and they're invaluable for debugging.
The command line can wind up quite long with this approach, but of course you can split it into multiple lines (like below) and I can't recommend doing this enough.
\n#------------------------------------------------------------#
# function <logEntry> :: Log to Seq #
#------------------------------------------------------------#
function logEntry {
if [ -z \"$3\" ]
then
errorlevel=\"Information\"
else
errorlevel=$3
fi
if [ -z \"$2\" ]
then
messageTemplate=\"{Summary}\"
else
messageTemplate=\"{Summary} - {Description}\"
fi
/etc/seq/seqcli log -l \"$errorlevel\" -m \"$messageTemplate\" \\
-p Summary=\"$1\" -p Description=\"$2\" -p AppName=\"Linux Script Name\" \\
-p Property=\"$Property\" -p Property2=\"$Property2\" -p Property3=\"$Property3\" \\
-s https://seq.domain.com -a \"<apikey>\"
}
Property=\"Important Property\"
Property2=\"Stuff used in script\"
Property3=\"Relevant info\"
logEntry \"An error occurred\" \"Stuff happened!\" \"Error\"
Seq has an innate ability to alert based on a simple count of events in a signal, using dashboard widgets with alerts. We use that for a number of alerts, including detection of possible upstream outages - when we receive less traffic (measured in log entries) than normal from upstream over a given interval, we can send an alert that there's a problem.
\nThe challenge comes when you want to measure only between specific times. For example, you have logs that you can derive a count of files transferred from, and you want to measure that a scheduled transfer between 4:00am and 4:30am had at least 100 files, and alert if it falls below that.
\nWhich - of course - was a requirement that arose. Our logging, monitoring, and alerting is now so robust that the business want more. \"Can Seq tell us if ...\" is reasonably common - and the answer is generally yes. There's a massive amount of information that can be derived from Seq, even from logs that are less than ideal in structure. The question is always - how can I do this within the current features and capabilities? If you come up short, you're probably going to need an app.
\nIn saying that, the Seq Reporter console app goes quite a way to making these kinds of requests fairly mundane. A scheduled report is certainly an option to fill many requirements ... but it's not structured to provide alerts based on counting logs over defined intervals. In that instance, you probably need a Seq app.
\nSo to outline the requirements that arose;
\nThe list of requirements starts to look a lot like Event Timeout. The major difference is that Event Timeout is primarily structured to look for events that did not happen at all. While it maintains counts of events, this is based on making a positive match; if that match doesn't occur, raise an alert.
\nEvent Timeout is a powerful app with a lot of configurability, but I wanted this to be a separate entity with similar features. The logic in figuring out that an event didn't happen and then alerting, versus counting events and alerting if it's under (or over) a threshold, is different enough that I didn't want to try to shoehorn yet another feature into Event Timeout.
\nSo the answer was, firstly, to use the Event Timeout app as our basic structure, and adapt the settings and logic where necessary.
\nEvent Threshold is the result of that adaptation. It benefits quite considerably from the Event Timeout implementation, as you might see from the feature list;
\nEach feature of Event Timeout was evaluated for benefit to Event Threshold. It's useful to be able to configure properties to evaluate towards the threshold count - so the Property 1 - 4 matching was retained.
\nEqually, we already know that our thresholds are different over weekends than weekdays, and that there may be specific times of the month where we want further differentiation - so all the day of week/day of month features make the cut. It's not much of a stretch to consider that public holidays may also be important, so we retain the Abstract API Holidays implementation.
\nEvent Threshold inherently uses repeating intervals, so the \"Timeout interval\" becomes a threshold measuring interval, and we drop the \"Repeating timeouts\" and \"Repeat timeout suppression\" features - they don't belong in Event Threshold.
\nThe net effect is that I can define instances like;
\nThe specificity that is possible means that you can have multiple instances watching the same signal for different criteria, and that you can ensure that you only count the properties that you want.
\nThe configuration is very similar to Event Timeout - you get the power to decide how you want your threshold instance to work.
We wind up with an app that, like Event Timeout, is forward looking and uses UTC to calculate the next start event. You can configure it for start and end times up to 24 hours, and use any threshold monitoring interval from 1 second to 24 hours. It benefits from all the work done on Event Timeout, and even led to some minor improvements to both apps for edge cases found during development.
\nYou can see the results of the configuration shown above in the below screenshot. We were looking for events with @Message matching any value over 10 minutes (600 seconds), and alerting if it fell below that threshold.
\nYou can see the alert being raised below, which could then be fired off to email, OpsGenie, Jira ... any Seq alerting app, in short.
Adding the ability to invert the threshold criteria is useful, if you want to measure exceeding a threshold rather than falling under the threshold. Simply put, it changes the calculation from \"<= threshold
\" to \">= threshold
\". I don't show that here - but it's simple logic that makes Event Threshold as versatile as you need.
The net effect is that if you want to get people out of bed because you fell below or exceeded a given volume of events - you can, and it allows you to be as specific as you need to avoid false positives.
\nEvent Threshold has certainly benefitted from the prior work on Event Timeout. It meant that I could get it up and running quickly and easily with a similar and familiar structure that provided the power and capability needed to allow a single app to fill multiple needs with different instances. We will make use of this for a number of scenarios - including, simply, the measurement of different thresholds on weekdays and weekends.
\nI hope that others will benefit from Event Threshold too. You can install it in your Seq instance using the Nuget tag Seq.App.EventThreshold, or use the fanciest links in the known universe below:
I've released a minor update to Seq.App.EventTimeout. While creating a new app that shares common functionality (more on that later!), I came across some edge cases for the handling of holidays.
\nThese cases should not \"usually\" have happened, and were only exposed by running a holiday test case near the end of a UTC day. However, I've fixed the logic that allowed the edge case to occur, and extended the unit testing to test date rollover for each hour in a week, to ensure that the expected result always occurs with the rollover logic.
\nUpdate - After publishing this, I found a case where a configured property match would cause an error for Seq when sending an event that has the property set to null. v1.4.8 is now up to resolve this.
\nEvent Timeout powers a huge number of critical SLAs in my workplace, because certain events must happen before a given time. We're also now using it for several heartbeats, by watching to ensure that events are seen within a given period, so that we can detect problems such as services stopping. It's versatile and powerful, and has proven very stable.
\nYou can download it to your Seq install using the Nuget package id Seq.App.EventTimeout, or otherwise, there is obvious fanciness in the links below!
\n\n ", "author": { "name": "MattMofDoom" }, "tags": [ "Updates", "Seq", "Public Holidays", "Event Timeout", "C#", "Apps" ], "date_published": "2021-07-15T21:27:26-07:00", "date_modified": "2022-01-22T16:05:19-08:00" }, { "id": "https://mattmofdoom.com/seqclientlog4j-seq-appender-for-log4j-2/", "url": "https://mattmofdoom.com/seqclientlog4j-seq-appender-for-log4j-2/", "title": "Seq.Client.Log4j - Seq appender for Log4j 2", "summary": "I've been working to build a Seq appender for Log4j 2, which will allow Java applications that use Log4j to send events to Seq. While I've been previously been able to configure Log4net instances to send to Seq using Seq.Client.Log4net and Log4net.Async, there seems to be a lack of an equivalent for Java. For Java, the Seq documentation directs you to either use GELF or - assuming you have the option - serilogj. I haven't had much Java development experience in the past, but I started with a sample from this discussion, which gave me some direction towards creating a…", "content_html": "
I've been working to build a Seq appender for Log4j 2, which will allow Java applications that use Log4j to send events to Seq. While I've previously been able to configure Log4net instances to send to Seq using Seq.Client.Log4net and Log4net.Async, there seems to be a lack of an equivalent for Java. For Java, the Seq documentation directs you to either use GELF or - assuming you have the option - serilogj.
\nI haven't had much Java development experience in the past, but I started with a sample from this discussion, which gave me some direction towards creating a working Log4j appender. Initially I produced the Json output with Gson and Apache Commons Collections, but I trimmed those dependencies back to just use the Jackson package that Log4j 2 already uses.
\nI elected not to use a Log4j layout, although it's likely I could have accomplished this with a JsonLayout
- I wanted to simply build a class around the Seq Json format and pass it to Seq.
I included some of the features of Lurgle.Logging where possible, such as allowing a correlation id to be passed (via the ThreadContext
stack) or automatically generating a per-thread correlation id. For this purpose, I added an ability to configure the correlation id property name, so that if a UUID is being passed by an application under a property name other than \"correlationId\", you can still catch it. I also added the ability to disable including a correlation id with events.
The correlation id functionality uses a per-thread cache, like Lurgle.Logging, which has a configurable cache time in seconds. The cache allows the appender to consistently pass the same correlation id for a thread while it's alive. If set to 0, the cache is disabled and a static correlation id is used.
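\nSketched in C# terms (per the Lurgle.Logging model it borrows from - names and shape are assumptions, not the appender's actual Java code), the cache concept looks something like:
// Per-thread correlation id cache - illustrative only (requires System.Collections.Concurrent)
private static readonly string StaticId = Guid.NewGuid().ToString();
private static readonly ConcurrentDictionary<int, (string Id, DateTime Seen)> Ids = new();

private static string GetCorrelationId(int cacheSeconds)
{
    if (cacheSeconds == 0) return StaticId; // cache disabled - use a static correlation id
    var threadId = Thread.CurrentThread.ManagedThreadId;
    if (Ids.TryGetValue(threadId, out var cached) && (DateTime.UtcNow - cached.Seen).TotalSeconds < cacheSeconds)
        return cached.Id; // same id for the thread while within the cache window
    var id = Guid.NewGuid().ToString();
    Ids[threadId] = (id, DateTime.UtcNow);
    return id;
}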
\nI also pass the MachineName
, ThreadId
, MethodName
, and ProcessName
properties to Seq, which helps provide well structured events.
Finally, the Seq appender will add any other items in the ThreadContext
stack as properties, along with any configured via a Property key in the log4j appender config.
Configuration is a fairly straightforward affair, and easily added to an async logger. The example below contains the configuration properties that are available, as well as an example of adding a property within the config.
\n\n<?xml version=\"1.0\" encoding=\"UTF-8\"?>
<Configuration packages=\"com.mattmofdoom.logging.log4j2.seqappender\" status=\"WARN\">
<Appenders>
<Console name=\"Console\" target=\"SYSTEM_OUT\">
<PatternLayout pattern=\"%d{HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n\"/>
</Console>
<SeqAppender name=\"SeqAppender\">
<SeqUrl name=\"Url\">https://seq.domain.com</SeqUrl>
<SeqApiKey name=\"ApiKey\"></SeqApiKey>
<AppName name=\"AppName\">Test App</AppName>
<CacheTime>600</CacheTime>
<CorrelationProperty>CorrelationId</CorrelationProperty>
<IncludeCorrelation>true</IncludeCorrelation>
<Property name=\"Example\">Example Property</Property>
</SeqAppender>
</Appenders>
<Loggers>
<AsyncRoot level=\"all\">
<AppenderRef ref=\"Console\"/>
<AppenderRef ref=\"SeqAppender\"/>
</AsyncRoot>
</Loggers>
</Configuration>
I contemplated some interesting features for this, such as parsing log messages to extract properties and potentially transform them into fully structured logs - however, some tinkering with this in Seq.Client.Log4net discouraged me after I found that the added overhead produced less than ideal outcomes. It's not certain that the same would occur in Log4j, but I considered this merely a 'nice to have'.
\nI don't have an application that I can readily add this appender to, but it passes unit testing quite happily and logs to a Seq instance.
\nYou can grab the code from Github to compile - feedback welcome!
\n", "author": { "name": "MattMofDoom" }, "tags": [ "log4j2", "log4j", "appender", "Structured logging", "Seq", "Java", "Apps" ], "date_published": "2021-07-12T19:30:23-07:00", "date_modified": "2021-07-12T19:40:46-07:00" }, { "id": "https://mattmofdoom.com/lurglelogging-v122-destructure-and-mask-structured-properties/", "url": "https://mattmofdoom.com/lurglelogging-v122-destructure-and-mask-structured-properties/", "title": "Lurgle.Logging v1.2.2 - Destructure and mask structured properties!", "summary": "Update After the original post, I tackled another item I'd been meaning to look at - being able to configure proxy settings for the Serilog Seq sink. Lurgle.Logging v1.2.3 now includes additional optional configurations for the Seq sink's proxy. This is particularly useful for console apps like Seq Reporter, to ensure they don't attempt to use your proxy config when logging to Seq. You can configure the new settings in your app.config (as per below) or via the LoggingConfig() constructor with Logging.SetConfig()! <!-- Optional Seq proxy settings --> <add key=\"LogSeqUseProxy\" value=\"false\" /> <add key=\"LogSeqProxyServer\" value =\"\" /> <add key=\"LogSeqBypassProxyOnLocal\" value…", "content_html": "
After the original post, I tackled another item I'd been meaning to look at - being able to configure proxy settings for the Serilog Seq sink. Lurgle.Logging v1.2.3 now includes additional optional configurations for the Seq sink's proxy. This is particularly useful for console apps like Seq Reporter, to ensure they don't attempt to use your proxy config when logging to Seq.
\nYou can configure the new settings in your app.config (as per below) or via the LoggingConfig()
constructor with Logging.SetConfig()
!
\n<!-- Optional Seq proxy settings -->
<add key=\"LogSeqUseProxy\" value=\"false\" />
<add key=\"LogSeqProxyServer\" value =\"\" />
<add key=\"LogSeqBypassProxyOnLocal\" value = \"false\" />
<add key=\"LogSeqProxyBypass\" value = \"\" />
<add key=\"LogSeqProxyUser\" value = \"\" />
<add key=\"LogSeqProxyPassword\" value = \"\" />
Original post on the destructure and masking goodies below!
\nIt seems such a long time since I first released Lurgle.Logging, but in fact it was only last month! In the original post I said:
\n\n\nThis implementation does not currently destructure properties, but it's an enhancement to contemplate for future updates.
\n
Well, I contemplated it, and here it is.
\nThe purpose of destructuring is to ensure that an object containing structured data - such as a class with its own properties - is properly reflected in your logging. If you pass a class to the log without destructuring, Serilog will return a ToString()
representation of the class type. If you destructure, you'll get the properties within that class.
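\nIn raw Serilog terms, the difference is the @ destructuring operator. A quick illustration, assuming test is an instance of a class with its own properties:
\n// Without @, Serilog captures the class as its ToString() representation
Log.Information(\"Plain: {Test}\", test);
// With @, Serilog destructures and captures the properties within the class
Log.Information(\"Destructured: {@Test}\", test);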
The original logging class that became Lurgle.Logging didn't really account for destructuring since it wasn't needed for the implementation. When I created Lurgle, I was conscious that it was needed, but I parked it until now.
\nIt was, overall, quite simple to add the functionality. I needed to amend the AddProperty()
methods to include a bool (destructure
), and then it was just a matter of ensuring that the flag was passed on to Serilog, and that masking was also handled accordingly.
I inserted the new flag before the optional correlationId
, which means that the various static AddProperty()
methods have changed in implementation; for example:
\npublic static ILevel AddProperty(string name, object value,
string correlationId = null,
bool showMethod = false,
[CallerMemberName] string methodName = null, [CallerFilePath] string sourceFilePath = \"\",
[CallerLineNumber] int sourceLineNumber = 0)
is now:
\n\npublic static ILevel AddProperty(string name, object value, bool destructure = false,
string correlationId = null,
bool showMethod = false,
[CallerMemberName] string methodName = null, [CallerFilePath] string sourceFilePath = \"\",
[CallerLineNumber] int sourceLineNumber = 0)
and similarly, the fluent AddProperty()
methods have changed to implementations like;
public IAddProperty AddProperty(string name, object value, bool destructure = false)
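\nAs a quick sketch of the new flag in use (the test object here is illustrative):
\n// An object with structured data
var test = new { Name = \"Barry\", Mechagodzilla = \"Secret\" };
// destructure: true captures the object's properties rather than a ToString() of the type
Log.Level().AddProperty(\"Test\", test, destructure: true).Add(\"Added a destructured property\");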
\nI usually try to avoid breaking changes, but the static AddProperty methods are only a recent addition, and it's a relatively minor change.
\nThis functionality also applies to common properties - properties that are persisted through all log events until cleared. As a bonus - while adding the destructure flag to AddCommonProperty()
, I also made it possible to update common properties that already exist. So the AddCommonProperty()
methods have changed to implementations like:
public static void AddCommonProperty(string name, object value, bool destructure = false, bool update = false)
\nThe addition of the update
flag means that if you already have a common property set, but need to update it for some reason, you can do so without having to clear the common properties and start again.
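\nA quick sketch of the update flag in use (assuming the static signature above; the class hosting the method is as per your Lurgle version):
\n// Set a common property once...
Log.AddCommonProperty(\"Environment\", \"Test\");
// ...then update it in place later, instead of clearing all common properties and starting again
Log.AddCommonProperty(\"Environment\", \"Production\", update: true);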
So one of the key benefits to destructuring is that we can apply masking to the destructured properties. There's an excellent implementation in the Masking.Serilog code, which was mostly workable but needed a little adjustment to avoid overlap with the Lurgle masking implementation.
\nI have therefore adapted the code to suit the Lurgle masking policy and configuration, and made it into an integrated destructurer, as shown by part of the LoggerConfiguration()
:
\nreturn config
.Destructure.WithMaskProperties()
The masking is controlled by your configuration for the LogMaskPolicy
; if my config is set to a masking policy, or if (as in my LurgleTest app) I were to enable masking using one of;
\nLogging.SetConfig(new LoggingConfig(Logging.Config, logMaskPolicy: MaskPolicy.MaskWithString));
Logging.SetConfig(new LoggingConfig(Logging.Config, logWriteInit: true, logMaskPolicy: MaskPolicy.MaskLettersAndNumbers));
Whether masking with a string, or masking letters or numbers - the result is that any property listed in the LogMaskProperties
config that appears within the destructured data will be masked.
In the above screenshot, the log message shows the destructured \"Test\" property that was passed to Serilog, and the Mechagodzilla
property that has been enabled for masking is correctly masked as per the LogMaskPolicy.MaskLettersAndNumbers
settings.
As usual, you can update Lurgle.Logging via Nuget, or use the oh-so-fancy links:
", "author": { "name": "MattMofDoom" }, "tags": [ "Updates", "Structured logging", "Serilog", "Seq", "Masking", "Lurgle.Logging", "Lurgle", "C#", "Apps" ], "date_published": "2021-07-05T19:39:34-07:00", "date_modified": "2022-01-22T16:04:24-08:00" }, { "id": "https://mattmofdoom.com/seq-reporter-turn-your-structured-logs-into-scheduled-reports/", "url": "https://mattmofdoom.com/seq-reporter-turn-your-structured-logs-into-scheduled-reports/", "title": "Seq Reporter - Turn your structured logs into scheduled reports!", "summary": "Uhh ... You want what? So, you have all your apps logging to Seq, perhaps you have monitoring and alerting using apps like the Seq OpsGenie client, and maybe you're even using Event Timeout to detect events that didn't happen in time. Things are going great, except ... Well, management now want SLA reports to track on how things are going, and you have this great structured logging server that has most or even all of the data you need for that report. How are you going to get it out? You could perform queries against Seq and manually export…", "content_html": "
So, you have all your apps logging to Seq, perhaps you have monitoring and alerting using apps like the Seq OpsGenie client, and maybe you're even using Event Timeout to detect events that didn't happen in time. Things are going great, except ...
\nWell, management now want SLA reports to track on how things are going, and you have this great structured logging server that has most or even all of the data you need for that report. How are you going to get it out?
\nYou could perform queries against Seq and manually export the results to CSV or JSON using the inbuilt export functions. You could, perhaps, take screenshots of the pretty dashboard you've built, and use those. You could even plot your query data as a time series, bar chart, or pie chart and download the result as a PNG file. All quite do-able - but manual.
\nYou could also get fancy and use seqcli to run a query and export to CSV. That has some potential for automation, and it formed the starting point for this effort.
\nI wanted to achieve, essentially, what seqcli does, but with some enhanced flexibility around the start and end times for the query, so that (for example) I could have a scheduled monthly report. I had a few other criteria in mind, too - so to build the basic list of requirements:
\nTo reproduce the seqcli functionality, I would need to get to grips with the Seq API. This wasn't too difficult - I've done plenty of work with Seq, and the API is well defined. There was even an example app that showed how to essentially reproduce the seqcli functionality, and the API has a built-in method to return a string of your results in CSV format.
\nI initially tried simply adapting this code, but decided I wanted more control. If await connection.Data.QueryCsvAsync(query, rangeStartUtc, rangeEndUtc)
encounters an error, it will simply return the error in the string, which doesn't bode well for reliably detecting and alerting failures.
I therefore elected to parse the data returned from connection.Data.QueryAsync()
, which returns a QueryResultPart()
that has Columns
and Rows
properties for parsing, and importantly, also includes an Error
string.
That makes detecting an error trivial, and we can log that as an Error. Logging it as an error means I can watch for that error in Seq, and send it off to Jira or OpsGenie to ensure it's alerted.
\nTo parse the results, I elected to make use of an old favourite, CsvHelper, which makes it trivial to create CSV files. My favourite feature is being able to simply send a list of a given class to it and have an instant CSV file - but in this case, the Columns and Rows properties didn't readily lend themselves to that, so I resorted to writing individual columns as fields, with each row an individual record. Not a major drama.
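\nSomething along these lines (a CsvHelper sketch, not the app's exact code; reportPath is a stand-in for your output file):
\nusing (var writer = new StreamWriter(reportPath))
using (var csv = new CsvWriter(writer, CultureInfo.InvariantCulture))
{
    // Write each column name as a header field
    foreach (var column in result.Columns) csv.WriteField(column);
    csv.NextRecord();
    // Write each row as an individual record
    foreach (var row in result.Rows)
    {
        foreach (var field in row) csv.WriteField(field);
        csv.NextRecord();
    }
}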
\nWhich brings us to logging and alerting. I'd think my choices are obvious there - Lurgle.Logging and Lurgle.Alerting were brought to bear.
\nLurgle.Logging made it easy to develop a scheme that would both read from Seq using the API, and log to Seq using the underlying Serilog. To allow for this, my Seq API key is created with Ingest and Read permissions, so that just a single Seq config is needed. I'm in the habit of using the old logging patterns for Lurgle.Logging, but obviously could have made use of the new logging patterns that I recently enabled.
\nLurgle.Alerting gives me the ability to simply send the report, with a nicely formatted template that was easily changeable. The app does benefit from the recent addition to allow plain text alternates, and of course I make use of the enhancement to the send results to properly log a failure of the email.
\nThis all comes together nicely for a single-purpose app ... but I did mention creating an easy way to set up multiple reports.
\nWe have an internal app that I developed for critical file transfers some years back. When I originally created it, it was for a single purpose file transfer ... but I built in a bunch of features that made it usable for multiple transfers. Fast forwarding a few years, that little app now supports a majority of our most critical SLA file transfers ... but it had a problem.
\nYou see, every time someone needed a scheduled file transfer, they'd deploy a new copy of the app with a new config. All the config was in the app.config (appname.exe.config) so this was necessary. We would have multiple versions of the app all over the place, with old versions not benefitting from bug fixes and new enhancements, such as additions to logging, which I touched on in the original Lurgle.Logging blog post.
\nSo I baked in an ability to specify an alternate config file, which meant that a single instance of the app could be used to perform any transfers. That helped to arrest proliferation of the app, with focus shifting instead to creating and maintaining different configs.
\nWith the addition of common logging and alerting libraries (in the form of Lurgle), I further enhanced this capability. No longer would the transfer config be contained in the app.config - only in the transfer config files. The app.config would now contain only the logging and alerting \"global\" settings.
\nOn top of that, I made it possible for each transfer to override logging and alerting settings, so (for example) a transfer could be set to use different email recipients, or log to a different folder, or use a different Seq API key.
\nWhen it came to Seq.Client.Reporter, then, it was obvious that this approach would be ideal for achieving multiple reports without a proliferation of reporter versions. Hence - a single instance of Seq.Client.Reporter can be used with the command line:
\nseq.client.reporter.exe -config=<path to report config>
\nYour Seq Reporter instance needs configuration for logging and alerting, and this can be set in a single spot - Seq.Client.Reporter.exe.config. You can set your logs and alerts once, and only add them to your report configs when you need to override the \"global\" config.
\nThe log and alert settings have previously been covered off in the Lurgle blog posts - the configuration opportunities are extensive, and should be largely self-explanatory.
\n<appSettings>
<add key=\"EnableMethodNameProperty\" value=\"true\" />
<add key=\"EnableSourceFileProperty\" value=\"true\" />
<add key=\"EnableLineNumberProperty\" value=\"true\" />
<add key=\"AppName\" value=\"Seq.Client.Reporter\" />
<add key=\"LogType\" value=\"Console,File,Seq\" />
<add key=\"LogMaskProperties\" value=\"\" />
<add key=\"LogMaskPolicy\" value=\"None\" />
<add key=\"LogMaskPattern\" value=\"XXXXXX\" />
<add key=\"LogMaskCharacter\" value=\"X\" />
<add key=\"LogMaskDigit\" value=\"*\" />
<add key=\"LogConsoleTheme\" value=\"Literate\" />
<add key=\"LogFolder\" value=\"C:\\TEMP\\TEMP\\log\" />
<add key=\"LogName\" value=\"Reporter\" />
<add key=\"LogExtension\" value=\".log\" />
<add key=\"LogFileType\" value=\"Text\" />
<add key=\"LogDays\" value=\"31\" />
<add key=\"LogFlush\" value=\"5\" />
<add key=\"LogShared\" value=\"false\" />
<add key=\"LogBuffered\" value=\"true\" />
<add key=\"LogEventSource\" value=\"Reporter\" />
<add key=\"LogEventName\" value=\"Application\" />
<add key=\"logSeqServer\" value=\"https://seq.domain.com\" />
<add key=\"logSeqApiKey\" value=\"\" />
<add key=\"LogLevel\" value=\"Verbose\" />
<add key=\"LogLevelConsole\" value=\"Verbose\" />
<add key=\"LogLevelFile\" value=\"Information\" />
<add key=\"LogLevelEvent\" value=\"Warning\" />
<add key=\"LogLevelSeq\" value=\"Verbose\" />
<add key=\"LogFormatConsole\" value=\"{Message}{NewLine}\" />
<add key=\"LogFormatEvent\" value=\"({ThreadId}) {Message}{NewLine}{NewLine}{Exception}\" />
<add key=\"LogFormatFile\" value=\"{Timestamp:yyyy-MM-dd HH:mm:ss}: ({ThreadId}) [{Level}] {Message}{NewLine}\" />
<add key=\"MailRenderer\" value=\"Liquid\" />
<add key=\"MailSender\" value=\"MailKit\" />
<add key=\"MailTemplatePath\"
value=\"\" />
<add key=\"MailHost\" value=\"mail\" />
<add key=\"MailPort\" value=\"25\" />
<add key=\"MailTestTimeout\" value=\"3\" />
<add key=\"MailUseAuthentication\" value=\"false\" />
<add key=\"MailUsername\" value=\"\" />
<add key=\"MailPassword\" value=\"\" />
<add key=\"MailUseTls\" value=\"true\" />
<add key=\"MailTimeout\" value=\"60\" />
<add key=\"MailFrom\" value=\"bob@builder.com\" />
<add key=\"MailTo\" value=\"wendy@builder.com\" />
<add key=\"MailDebug\" value=\"scoop@builder.com\" />
<add key=\"MailSubject\" value=\"Alert!\" />
</appSettings>
So we come to configuring a report. The Test.config file included with the distribution includes comments to provide some guidance, but in short;
\n\n
\n<?xml version=\"1.0\" encoding=\"utf-8\"?>
<configuration>
<appSettings>
<add key=\"AppName\" value=\"Scheduled Transfer Report\" />
<add key=\"LogType\" value=\"Console,File,Seq\" />
<add key=\"LogFolder\" value=\"C:\\TEMP\\TEMP\\Log\" />
<add key=\"LogName\" value=\"Transfer\" />
<add key=\"LogFileType\" value=\"Json\" />
<add key=\"LogDays\" value=\"31\" />
<add key=\"LogFlush\" value=\"5\" />
<add key=\"LogShared\" value=\"false\" />
<add key=\"LogBuffered\" value=\"true\" />
<add key=\"LogLevelConsole\" value=\"Verbose\" />
<add key=\"LogLevelFile\" value=\"Information\" />
<add key=\"LogLevelEvent\" value=\"Warning\" />
<add key=\"LogLevelSeq\" value=\"Verbose\" />
<add key=\"MailFrom\" value=\"ScheduledTransfer.SEQ@domain.com\" />
<add key=\"MailTo\" value=\"Bob@Builder.com,Wendy@builder.com\" />
<add key=\"MailDebug\" value=\"Scoop@Builder.com\" />
<add key=\"IsDebug\" value=\"false\" />
<!-- Specify the valid Seq query you want to run. Multi-line is okay, but you must escape special characters per below-->
<!-- Ampersand & &amp;
Less-than < &lt;
Greater-than > &gt;
Quotes \" &quot;
Apostrophe ' &apos;-->
<add key=\"Query\"
value=\"SELECT
Substring(ToIsoString(@Timestamp + OffsetIn('Australia/Sydney',@Timestamp)), 0, LastIndexOf(ToIsoString(TimeOfDay(@Timestamp, DatePart(OffsetIn('Australia/Sydney',@Timestamp),'hour',0h))), 'T')) AS Date,
Substring(ToIsoString(TimeOfDay(@Timestamp, DatePart(OffsetIn('Australia/Sydney',@Timestamp),'hour',0h))), IndexOf(ToIsoString(TimeOfDay(@Timestamp, DatePart(OffsetIn('Australia/Sydney',@Timestamp),'hour',0h))), 'T') + 1, LastIndexOf(ToIsoString(TimeOfDay(@Timestamp, DatePart(OffsetIn('Australia/Sydney',@Timestamp),'hour',0h))), '.') - IndexOf(ToIsoString(TimeOfDay(@Timestamp, DatePart(OffsetIn('Australia/Sydney',@Timestamp),'hour',0h))), 'T') - 1) AS XfrTime,
@Message,
log4net_HostName AS Server
FROM Stream LIMIT 10000\" />
<!-- Query timeout in minutes-->
<add key=\"QueryTimeout\" value=\"10\" />
<!-- Perform the query against one or more signals. Recommended since signals are indexed -->
<add key=\"Signal\" value=\"signal-503\" />
<!-- TimeFrom and TimeTo can be a Time, Date Expression, or Hybrid Expression-->
<!-- Date expressions: {Int}s|m|h|d|w|M, where s=seconds, m=minutes, h=hours, d=days, w=weeks, M=months-->
<!-- Sample date expression: 1M (1 month)-->
<!-- Hybrid expressions - date expression plus time - examples: 1M 4:00 or 1d 04:00:00-->
<add key=\"TimeFrom\" value=\"4:00\" />
<add key=\"TimeTo\" value=\"5:00\" />
<add key=\"UseProxy\" value=\"false\" />
<add key=\"ProxyServer\" value=\"\" />
<add key=\"BypassProxyOnLocal\" value=\"\" />
<add key=\"BypassList\" value=\"\" />
<add key=\"ProxyUser\" />
<add key=\"ProxyPassword\" />
</appSettings>
</configuration>
If you override logging settings, be aware that some properties are grouped. Many Lurgle.Logging properties can be overridden on a per-report basis, but file logging overrides are only enabled by specifying LogFolder - and once LogFolder is set, the related file logging properties should also be specified, or they will revert to their defaults. This is most likely to matter for LogName, LogExtension, and LogFileType.
\nConsole logging properties are grouped as well, but the defaults are likely what you'd use anyway.
\nWhile log masking is unlikely to be needed for this, the LogMaskPolicy, LogMaskPattern, LogMaskCharacter, and LogMaskDigit properties are similarly grouped.
\nFor alerting, you can freely override the MailFrom, MailTo, and MailDebug settings - however, if you want to override the mail host, you'll also need to specify the other grouped mail properties where reverting to their defaults isn't desirable. This is most likely to matter for MailRenderer.
\nThe Test.config has part of a query that I've used in production for a Seq Reporter config. I have an app which uses Log4net that I've retrofitted to log to Seq, and I have a scheduled report to extract some really useful data from the XML logging that this app sends. I haven't included some of the meatier parts of my query, but I did leave the logic that carves @Timestamp into a date and separate time column ... Seq functions can allow for some really funky capabilities in queries 😁
\nThe query as performed in Seq is:
\n\nSELECT
Substring(ToIsoString(@Timestamp + OffsetIn('Australia/Sydney',@Timestamp)), 0, LastIndexOf(ToIsoString(TimeOfDay(@Timestamp, DatePart(OffsetIn('Australia/Sydney',@Timestamp),'hour',0h))), 'T')) AS Date,
Substring(ToIsoString(TimeOfDay(@Timestamp, DatePart(OffsetIn('Australia/Sydney',@Timestamp),'hour',0h))), IndexOf(ToIsoString(TimeOfDay(@Timestamp, DatePart(OffsetIn('Australia/Sydney',@Timestamp),'hour',0h))), 'T') + 1, LastIndexOf(ToIsoString(TimeOfDay(@Timestamp, DatePart(OffsetIn('Australia/Sydney',@Timestamp),'hour',0h))), '.') - IndexOf(ToIsoString(TimeOfDay(@Timestamp, DatePart(OffsetIn('Australia/Sydney',@Timestamp),'hour',0h))), 'T') - 1) AS XfrTime,
@Message,
log4net_HostName AS Server
FROM Stream LIMIT 10000
which returns a nice little table with the date, time, message, and server name.
\nTo add that to my report config, I need to escape the apostrophes so that the Query value can be treated as a string:
\n\n<add key=\"Query\"
value=\"SELECT
Substring(ToIsoString(@Timestamp + OffsetIn('Australia/Sydney',@Timestamp)), 0, LastIndexOf(ToIsoString(TimeOfDay(@Timestamp, DatePart(OffsetIn('Australia/Sydney',@Timestamp),'hour',0h))), 'T')) AS Date,
Substring(ToIsoString(TimeOfDay(@Timestamp, DatePart(OffsetIn('Australia/Sydney',@Timestamp),'hour',0h))), IndexOf(ToIsoString(TimeOfDay(@Timestamp, DatePart(OffsetIn('Australia/Sydney',@Timestamp),'hour',0h))), 'T') + 1, LastIndexOf(ToIsoString(TimeOfDay(@Timestamp, DatePart(OffsetIn('Australia/Sydney',@Timestamp),'hour',0h))), '.') - IndexOf(ToIsoString(TimeOfDay(@Timestamp, DatePart(OffsetIn('Australia/Sydney',@Timestamp),'hour',0h))), 'T') - 1) AS XfrTime,
@Message,
log4net_HostName AS Server
FROM Stream LIMIT 10000\" />
This is a simple find and replace operation - search for ' and replace with &apos;.
\nThe test.config includes a comment indicating which characters need to be escaped, but the table below also shows them:
\nName | \nCharacter | \nEscaped Character | \n
Ampersand | \n& | \n& | \n
Less-than | \n< | \n< | \n
Greater-than | \n> | \n> | \n
Quotes | \n\" | \n" | \n
Apostrophe | \n' | \n' | \n
Once you're aware of this - it's simple to get your query in place.
\nYou control the report time range with the TimeFrom and TimeTo configs.
\nYou can, of course, simply specify hours in the H:mm or H:mm:ss format, and this shows in the test.config file - a report on events from 4:00am to 5:00am every day.
\nBut to make it as powerful as possible, I added date expressions - a very simple scheme for the TimeFrom and TimeTo configs.
\nThese are expressed as the numeric value, followed by a character indicating the period.
\nPeriods available are:
- s - seconds
- m - minutes
- h - hours
- d - days
- w - weeks
- M - months
\nYou can also specify \"now\" which simply uses the current date and time. This can't be used as a hybrid expression with a time, obviously, but it's a useful shorthand for, well, now - the time you run the report.
\nSo to simply specify the last hour in a time expression, you would put \"1h\". These are applied as past values, so \"1h\" means 1 hour ago.
\nIf you wanted a report for the past hour, you would specify a TimeFrom of 1h and a TimeTo of now.
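\nFor illustration, here's how such a past-value expression might be resolved in code - a hypothetical parser, not Lurgle.Dates' actual implementation:
\n// Hypothetical parser for past-value expressions like \"1h\" or \"2d\"
static DateTime ParsePastExpression(string expression, DateTime now)
{
    var value = int.Parse(expression.Substring(0, expression.Length - 1));
    switch (expression[expression.Length - 1])
    {
        case 's': return now.AddSeconds(-value);
        case 'm': return now.AddMinutes(-value);
        case 'h': return now.AddHours(-value);
        case 'd': return now.AddDays(-value);
        case 'w': return now.AddDays(-7 * value);
        case 'M': return now.AddMonths(-value);
        default: throw new ArgumentException(\"Unknown period: \" + expression);
    }
}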
\nBut perhaps we want to be even more specific, such as for a monthly report. I also made it possible to perform \"hybrid\" expressions - a time expression plus a time value in H:mm or H:mm:ss format.
\nTo perform a report that will be scheduled on the 1st of every month, you would specify a TimeFrom of 1M 0:00 and a TimeTo of 0:00,
\nwhich means that on the 1st of July, we will report on all events matching the query between 1st June 12am and 1st July 12am - a full month.
\nIt is also possible to configure specific dates and times in TimeFrom and TimeTo - Seq.Client.Reporter will make a best effort attempt to parse them into meaningful time ranges.
\nThe use of a Liquid template for Seq Reporter means that you can readily customise the email to suit your purposes. The templates within the Templates folder can simply be edited to suit your desired text / format.
\n\n<!DOCTYPE html>
<html lang=\"en\" xmlns=\"http://www.w3.org/1999/xhtml\">
<head>
<meta charset=\"utf-8\"/>
<title>{{ReportName}} Report for {{Date}}</title>
<style type=\"text/css\">
p, td {
font-family: \"Calibri\", sans-serif;
font-size: 11.0pt;
}
</style>
</head>
<body>
<p>Please find the <span style=\"font-weight: bold\">{{ReportName}}</span> for <span style=\"font-weight: bold;\">{{Date}}</span> attached.</p>
<p>Report period: <span style=\"font-weight: bold\">{{From}}</span> to <span style=\"font-weight: bold\">{{To}}</span></p>
<p><span style=\"font-weight: bold\">{{RecordCount}}</span> records were returned.</p>
<p> </p>
</body>
</html>
\nPlease find the {{ReportName}} for {{Date}} attached.
Report period: {{From}} to {{To}}
{{RecordCount}} records were returned.
Seq Reporter is deliberately designed as a console app, which means you can add this to Scheduled Tasks in Windows - which I'm certainly doing. You do need to make sure you allow it to run whether or not a user is logged on, and save the password in order for it to be able to access the network.
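\nFor example, a task could be created from the command line along these lines - the paths, schedule, and account are illustrative only:
\nschtasks /Create /TN \"Seq Reporter - Transfer Report\" ^
  /TR \"C:\\Apps\\seq.client.reporter.exe -config=C:\\Apps\\Reports\\Transfer.config\" ^
  /SC DAILY /ST 05:00 /RU domain\\svc-reporter /RP *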
\nYou could also run it as a scheduled app from other services, such as SQL Server Integration Services agent jobs. I'm not using that in this case, but we certainly do use it with the scheduled transfer app that's also implemented as a console application.
\nIn short - if you have a way to schedule an app with parameters, Seq Reporter should work for the purpose.
\nI'm really pleased with how this app came together, and it makes a huge difference to be able to reliably get reporting out of Seq as an automated process. It benefitted a lot from prior work, and made it really easy to get my reports reliably logged and emailed.
\nA possible future enhancement might be to apply some of the code from the transfer app, to enable sending the file via SFTP.
\nAs always, it's available to download from the fancy links below!
", "author": { "name": "MattMofDoom" }, "tags": [ "Structured logging", "Seq", "Reports", "Reporter", "Lurgle.Logging", "Lurgle.Alerting", "Lurgle", "Liquid", "Email", "C#", "Apps" ], "date_published": "2021-07-03T20:28:54-07:00", "date_modified": "2022-01-22T16:09:20-08:00" }, { "id": "https://mattmofdoom.com/lurglealerting-v122-released-send-results-plain-text-and-html-improvements/", "url": "https://mattmofdoom.com/lurglealerting-v122-released-send-results-plain-text-and-html-improvements/", "title": "Lurgle.Alerting v1.2.2 released - Send results, plain text and HTML improvements", "summary": "Lurgle Update Time! I've released an update to Lurgle.Alerting, the premier Lurgle Alerting library for Lurgling your Alerts! This release is about updating some of the older code that was brought into the library: FluentEmail largely suppresses exceptions when there are send failures. Lurgle was simply returning the FluentEmail.Core.Models.SendResponse.Successful bool .. which is ok if your email sent. If you wanted to \"do\" something when Successful == false, though, you were out of luck. Hence I've adjusted Lurgle.Alerting to return the SendResponse, which means you have opportunity to examine the SendResponse.ErrorMessages property, which is an IList<string>. For myself, this is…", "content_html": "
I've released an update to Lurgle.Alerting, the premier Lurgle Alerting library for Lurgling your Alerts!
\nThis release is about updating some of the older code that was brought into the library:
\n- The Send, SendAsync, SendTemplate, SendTemplateAsync, SendTemplateFile, and SendTemplateFileAsync methods returned just a bool for success or failure.
\nFluentEmail largely suppresses exceptions when there are send failures. Lurgle was simply returning the FluentEmail.Core.Models.SendResponse.Successful bool - which is fine if your email sent.
If you wanted to \"do\" something when Successful == false
, though, you were out of luck. Hence I've adjusted Lurgle.Alerting to return the SendResponse, which means you have the opportunity to examine the SendResponse.ErrorMessages
property, which is an IList<string>
. For myself, this is an opportunity to pass the error messages into Lurgle.Logging, which can then turn them into an OpsGenie alert via Seq.
If you were already using the bool response in code, you would just need to adjust this from:
\n\nvar alert = Alert.To().Subject().Send(\"Test\");
if (!alert)
to:
\n\nvar alert = Alert.To().Subject().Send(\"Test\");
if (!alert.Successful)
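\nFor example, to surface those errors through Lurgle.Logging (a sketch; SendResponse.ErrorMessages is the IList<string> described above):
\nvar alert = Alert.To().Subject().Send(\"Test\");
if (!alert.Successful)
    // ErrorMessages logs nicely as a structured property
    Log.Error(\"Email send failed: {ErrorMessages}\", args: alert.ErrorMessages);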
As I was revising code, I decided that we weren't handling emails that were plain text, HTML, or HTML with alternate text, as well as we should.
\nIt's generally good practice to include both an HTML and a plain text version of an email, and there are a number of RFCs covering this. Although \"most\" people can view HTML, you shouldn't necessarily assume that's always the case. I felt existing functionality was somewhat limited here.
\nI've therefore baked plain text and HTML handling into Lurgle for all of the send methods.
\n- Send() and SendAsync() now assume plain text - they effectively always did
- SendHtml() and SendHtmlAsync() are new methods that allow sending HTML with optional alternate (plain) text
- SendTemplate() and SendTemplateAsync() include an optional parameter string alternateTemplate = null, which allows you to supply a plain text template for rendering as the alternate text
- SendTemplateFile() and SendTemplateFileAsync() now include an optional parameter bool alternateText = false. If set to true, Lurgle will render the plain text version of your template (for example, alertTemplate.txt) as the alternate text, still allowing your selected renderer to act on the file!
\nSome usage examples:
\n\nAlert.To().Subject(\"Test\").Send(\"Can you fix it?\");
Alert.To().Subject(\"Test HTML\")
.SendHtml(\"<html><body><p>Can you fix it?</p></body></html>\", \"Can you fix it?\");
Alert.To().Subject(\"Test Razor Template with alt text\").SendTemplateFile(\"Razor\", new { }, true, true);
For SendTemplateFile
, the last two parameters are isHtml = true
and alternateText = true
. This means that Lurgle will load the alertRazor.html and alertRazor.txt files when rendering the email, render them with RazorLight, and add the alternate text view to your email.
Both of these render with RazorLight, so you can do all your usual work in the text version - it's just not as \"pretty\". I've included samples of the two templates below, from the LurgleTest and Lurgle.Alerting.Tests projects.
\n\n@using Lurgle.Alerting
<!DOCTYPE html>
<html lang=\"en\" xmlns=\"http://www.w3.org/1999/xhtml\">
<head>
<meta charset=\"utf-8\"/>
<title>Razor Test</title>
<style type=\"text/css\">
p, td {
font-family: \"Calibri\", sans-serif;
font-size: 11.0pt;
}
</style>
</head>
<body>
<p style=\"font-size: 14pt; font-weight: bold;\">@Alerting.Config.AppName v@(Alerting.Config.AppVersion)</p>
<p>
<table style=\"border: 0;\">
<tr>
<td style=\"font-weight: bold;\">Renderer:</td>
<td>@Alerting.Config.MailRenderer</td>
</tr>
<tr>
<td style=\"font-weight: bold;\">Sender:</td>
<td>@Alerting.Config.MailSender</td>
</tr>
<tr>
<td style=\"font-weight: bold;\">Template Path:</td>
<td>@Alerting.Config.MailTemplatePath</td>
</tr>
<tr>
<td style=\"font-weight: bold;\">Mail Host:</td>
<td>@Alerting.Config.MailHost</td>
</tr>
<tr>
<td style=\"font-weight: bold;\">Mail Port:</td>
<td>@Alerting.Config.MailPort</td>
</tr>
<tr>
<td style=\"font-weight: bold;\">Mail Test Timeout:</td>
<td>@(Alerting.Config.MailTestTimeout/1000)</td>
</tr>
<tr>
<td style=\"font-weight: bold;\">Use Authentication:</td>
<td>@Alerting.Config.MailUseAuthentication</td>
</tr>
<tr>
<td style=\"font-weight: bold;\">Username:</td>
<td>@Alerting.Config.MailUsername</td>
</tr>
<tr>
<td style=\"font-weight: bold;\">Password:</td>
<td>@Alerting.Config.MailPassword</td>
</tr>
<tr>
<td style=\"font-weight: bold;\">Use TLS:</td>
<td>@Alerting.Config.MailUseTls</td>
</tr>
<tr>
<td style=\"font-weight: bold;\">SMTP Timeout:</td>
<td>@(Alerting.Config.MailTimeout/1000)</td>
</tr>
<tr>
<td style=\"font-weight: bold;\">Mail From:</td>
<td>@Alerting.Config.MailFrom</td>
</tr>
<tr>
<td style=\"font-weight: bold;\">Mail To:</td>
<td>@Alerting.Config.MailTo</td>
</tr>
<tr>
<td style=\"font-weight: bold;\">Mail Debug:</td>
<td>@Alerting.Config.MailDebug</td>
</tr>
<tr>
<td style=\"font-weight: bold;\">Mail Subject:</td>
<td>@Alerting.Config.MailSubject</td>
</tr>
</table>
</p>
</body>
</html>
\n@using Lurgle.Alerting
@Alerting.Config.AppName v@(Alerting.Config.AppVersion)
Renderer: @Alerting.Config.MailRenderer
Sender: @Alerting.Config.MailSender
Template Path: @Alerting.Config.MailTemplatePath
Mail Host: @Alerting.Config.MailHost
Mail Port: @Alerting.Config.MailPort
Mail Test Timeout: @(Alerting.Config.MailTestTimeout/1000)
Use Authentication: @Alerting.Config.MailUseAuthentication
Username: @Alerting.Config.MailUsername
Password: @Alerting.Config.MailPassword
Use TLS: @Alerting.Config.MailUseTls
SMTP Timeout: @(Alerting.Config.MailTimeout/1000)
Mail From: @Alerting.Config.MailFrom
Mail To: @Alerting.Config.MailTo
Mail Debug: @Alerting.Config.MailDebug
Mail Subject: @Alerting.Config.MailSubject
I also added several methods that allow retrieving a rendered email without sending. This is obviously useful for things like unit testing, but it also exposes the underlying IFluentEmail which means that you could do additional things with the email before sending.
\nThe unit tests make use of this somewhat, like below:
\n\nvar alert = Alert.To().Subject(\"Test Razor Template\").GetTemplateFile(\"Razor\", new { }, true, true);
Assert.True(alert.Data.IsHtml);
Assert.True(alert.Data.Body.Length > 0);
Assert.True(alert.Data.PlaintextAlternativeBody.Length > 0);
testOutputHelper.WriteLine(alert.Data.PlaintextAlternativeBody);
The normal way to update is via Nuget, but man, we rock some fancy links around here too!
", "author": { "name": "MattMofDoom" }, "tags": [ "Updates", "Razor", "Lurgle.Alerting", "Lurgle", "Liquid", "Handlebars", "FluentEmail", "C#", "Apps" ], "date_published": "2021-07-02T23:08:00-07:00", "date_modified": "2022-01-22T15:40:28-08:00" }, { "id": "https://mattmofdoom.com/lurglealerting-v120-released-consistent-attachment-content-types/", "url": "https://mattmofdoom.com/lurglealerting-v120-released-consistent-attachment-content-types/", "title": "Lurgle.Alerting v1.2.0 released - Consistent attachment content types!", "summary": "I've released a small update to Lurgle.Alerting which adds automatic determination of the attachment content type using the MimeMapping library. I've raised the version to v1.2.0 to align with Lurgle.Logging's current releases. This specifically addresses an issue when sending attachments with MailKit as the SMTP sender. The FluentEmail implementation was essentially causing a System.ArgumentNullException exception in MimeKit.Utils.ParseUtils.ValidateArguments, because content type defaults to null. With the old SmtpClient, that was okay, but FluentEmail was explicitly trying to parse a null content type and receiving the error. The aforementioned MimeMapping project on Nuget had an extensive set of mime types and provided…", "content_html": "
I've released a small update to Lurgle.Alerting which adds automatic determination of the attachment content type using the MimeMapping library. I've raised the version to v1.2.0 to align with Lurgle.Logging's current releases.
\nThis specifically addresses an issue when sending attachments with MailKit as the SMTP sender. The FluentEmail implementation was essentially causing a System.ArgumentNullException
exception in MimeKit.Utils.ParseUtils.ValidateArguments
, because content type defaults to null. With the old SmtpClient, that was okay, but FluentEmail was explicitly trying to parse a null content type and receiving the error.
The aforementioned MimeMapping project on Nuget had an extensive set of mime types and provided a straightforward way to perform a 'best effort' match based on file name, with a sensible default of application/octet-stream. Perfect.
\nAs a result - Lurgle.Alerting will always supply a valid mime type to the underlying SMTP sender if you don't pass one. If you do pass a content type and it's invalid, we don't attempt to validate that at this point - so you will likely get an exception.
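\nFor reference, the MimeMapping lookup itself is essentially a one-liner - something along these lines, rather than necessarily Lurgle.Alerting's exact code:
\n// Best-effort match on file name; unknown extensions fall back to application/octet-stream
var contentType = MimeMapping.MimeUtility.GetMimeMapping(\"report.csv\"); // \"text/csv\"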
\nAs per normal, update from Nuget, or via the oh-so-fancy links below.
A while back, I had a requirement to migrate users from old Remote Desktop Session Hosts to a new Windows Server 2019 farm. This was a substantial uplift that needed a \"break\" from their old roaming profiles - especially since it would uplift users from very old applications to much newer ones.
\nOne of these old applications was Outlook 2010. This posed some challenge - for the most part, user profiles were \"disposable\" with the setup used; with folder redirection and a lot of Group Policy to control user configurations and settings, you can generally delete a roaming profile without the user noticing any difference. But Outlook profiles are different - they are stored in their roaming profile, and users in this case had multiple inboxes configured in those profiles. We wanted as seamless a migration as possible, so I had a few problems to consider:
\nThe result was Outlook Profiler!
\n\nUSAGE: OutlookProfiler Export2010={FilePath} [Options={OptionsFilePath}] [TargetProfile={ProfileName}] [SourceProfile={ProfileName}] [TargetVersion=2013|2016] [Log={LogPath}] [IgnoreDefault]
OutlookProfiler Export2013={FilePath} [Options={OptionsFilePath}] [TargetProfile={ProfileName}] [SourceProfile={ProfileName}] [TargetVersion=2013|2016] [Log={LogPath}] [IgnoreDefault]
OutlookProfiler Export2016={FilePath} [Options={OptionsFilePath}] [TargetProfile={ProfileName}] [SourceProfile={ProfileName}] [TargetVersion=2013|2016] [Log={LogPath}] [IgnoreDefault]
OutlookProfiler Import={FilePath} [Options={OptionsFilePath}] [TargetProfile={ProfileName}] [TargetVersion=2013|2016] [Log={LogPath}]
NOTE: For export operations, you can use IgnoreDefault as an optional parameter to force the SourceProfile to be used instead of the Default Profile.
Most of the challenge in Outlook profiles is in the different registry keys used. Before Outlook 2016, each version of Outlook used a different key to store your profile;
\n\n
Version | \nRegistry Key | \n
Outlook 2010 or earlier | \nHKEY_CURRENT_USER\\Software\\Microsoft\\Windows NT\\CurrentVersion\\Windows Messaging Subsystem\\Profiles | \n
Outlook 2013 | \nHKEY_CURRENT_USER\\Software\\Microsoft\\Office\\15.0\\Outlook | \n
Outlook 2016 and higher | \nHKEY_CURRENT_USER\\Software\\Microsoft\\Office\\16.0\\Outlook | \n
From Outlook 2016, it appears Microsoft consistently use the 16.0 key path.
\nWe may also need to look at another key: HKEY_CURRENT_USER\\Software\\Microsoft\\Exchange\\Client\\Options ... at minimum, this stores the option to \"Choose a profile at logon\", which you may wish to export to preserve the expected behaviour for a user. An example I encountered is that some users had so many mailboxes to access, they setup multiple Outlook profiles to switch between. I'm not judging 😀
\nThe export is designed to allow transformation during export. We are using Regis3 as a simple way to export the registry keys involved in Outlook profiles, so this provides an opportunity to replace text within the exported file.
\nThis means that you can (for example) export from an Outlook 2010 profile, specify a new profile name for the destination, and specify that you will be using Outlook 2016 or higher. As noted above, Outlook 2016 and higher appear to use the same registry key, so a single TargetVersion will cover 2016+.
\nWe export the whole Outlook profile key, which means that all Outlook profiles are retained and transformed to new versions, but there's also opportunity to transform the \"default\" or a specific profile to a new name.
\nWe have a mechanism to examine the user's selected default Outlook profile and carry that selection over, which can be overridden with the IgnoreDefault optional parameter.
\nFor export operations, we provide the Export2010, Export2013, and Export2016 parameters which indicate what version you're exporting from - it simply controls which registry key is selected for the export operation. Usage is simple: Export2013={Path to your export file}.
\nWe also provide the Options parameter, which will export the aforementioned Options key (if it exists) to a separate file. Again, usage is Options={Path to options file}
\nOutlook Profiler can rename a profile during export. This is useful when your destination will use a different profile name.
\nIf you don't specify the optional \"IgnoreDefault\" parameter, this will be applied to the user's default Outlook profile, even if \"SourceProfile\" is provided.
\nYou can specify the source Outlook profile name to use. However, you should note that Outlook Profiler will, by default, attempt to select the default profile if it can find it.
\nBy default, Outlook Profiler will ignore this unless the optional \"IgnoreDefault\" parameter is specified.
\nYou can specify the version of Outlook that you will be migrating to. Only 2013 and 2016 are provided; I didn't want to provide a backward migration to Outlook 2010.
\nAn optional text log file that can be used to see what happened for a given user's export. Usage is Log={Path to log file}
\nOptional parameter to specify not to use the default profile to determine the source profile.
\nNewer versions of Outlook have a somewhat annoying behaviour of always prompting to create an Outlook profile, even if one exists (because we imported it!). This posed some difficulty until it could be solved.
\nOutlook Profiler uses a registry setting called ImportPrf to stop Outlook from performing this prompt on first run. It's instructing Outlook to import a profile from file, rather than doing its default behaviour. For Outlook 2013 and especially Outlook 2016 and higher, the default is autodiscovery from AD, and we're subverting that behaviour to make the import work as intended.
\nThis is based on a Custom15.prf or Custom16.prf existing in your C:\\Program Files (x86)\\Microsoft Office folder. This file is typically generated by Office tools, but I've included a sample Custom16.prf file in the distribution that includes a couple of customisations, shown below, which help to support the first run behaviour.
\nIf you place this in the C:\\Program Files (x86)\\Microsoft Office folder, and name it as either Custom15.prf or Custom16.prf, this should help to mitigate the first run behaviour. You could copy this to machines using Group Policy Preferences.
\nDepending on your implementation, you may also need to stop Outlook from autoconfiguring the user mailbox from AD - info on this is available from various sites, such as this one.
\n\n
\n;Automatically generated PRF file from the Microsoft Office Customization and Installation Wizard
; **************************************************************
; Section 1 - Profile Defaults
; **************************************************************
[General]
Custom=1
;Set this to your target profile name
ProfileName=Outlook
;Set this to yes
DefaultProfile=Yes
;Don't overwrite
OverwriteProfile=Append
;Don't modify
ModifyDefaultProfileIfPresent=false
;Important - stops multiple profiles being created
BackupProfile=No
This is a relatively simple proposition. You're importing the exported registry key from a file to either Outlook 2013 or Outlook 2016+. We default to Outlook 2013 if not specified.
\nThere is no transformation in this step, but the default Outlook profile will be set as part of the import.
\nTo avoid overwriting and possibly corrupting or resetting an existing profile, we do not import Outlook profiles if they already exist.
\nRead the above important note about first run behaviour. You should test the behaviour of Outlook on import and modify it using the custom prf file and Group Policy.
\n\n
A simpler proposition than the export - simply specify the file that you exported using Import={Path to your export file}.
\nSpecify the path to the options file you exported using Options={Path to your options file}.
\nNote! It's possible the file doesn't exist since the Export will only export options if they exist. If so, OutlookProfiler will exit with an error - but this is the last step of the import and won't affect the profile that has already been imported.
\nIf you recall, we export the whole Outlook profile key. When this is specified with Export, it performs a transformation, but when specified with Import, it specifies the default Outlook profile name within the exported file.
\nThis will cause Outlook Profiler to set your default Outlook profile to this name if it exists.
\nThis tells Outlook Profiler where to look for existing profiles, and the default profile setting. This is necessary because Outlook 2013 and Outlook 2016 or higher use different registry keys.
\nIf not specified, Outlook Profiler will default to Outlook 2013
\nAn optional text log file that can be used to see what happened for a given user's export. Usage is Log={Path to log file}.
\nBeing a simple console application, Outlook Profiler can be implemented in a multitude of ways. For example, you could deploy this using the Group Policy Logon Scripts functionality - it works well. You can find this in Group Policy Editor, User Configuration\\Policies\\Windows Settings\\Scripts (Logon and Logoff).
\nRead the above important note about first run behaviour. You should test the behaviour of Outlook on import and modify it using the custom prf file and Group Policy.
\nIn this example, we've put a copy of OutlookProfiler under the domain controller's NETLOGON folder, and are configuring an export logon script in Group Policy.
\nThis means that every time the user logs in, their Outlook profile will be exported to the file. This is useful when you're preparing for a migration - it keeps the profile up to date with any changes, until you migrate the user over to the new environment.
\nLogon Properties | \nValue | \n
Script Name | \n\\\\domain.local\\NETLOGON\\OutlookProfiler\\OutlookProfiler.exe | \n
Script Parameters | \n\n Export2010=\\\\fileserver\\home\\%username%\\%username%.profile Options=\\\\fileserver\\home\\%username%\\%username%.options SourceProfile=CompanyName TargetProfile=Outlook TargetVersion=2016 log=\\\\fileserver\\home\\%username%\\%username%.export.log \n | \n
\n\n
On the destination servers, we'll use the same copy of OutlookProfiler to read in the exported profile. Because OutlookProfiler will only import if the profile doesn't already exist, this is a 'safe' operation - we will import the profile on first login only.
\nIt does provide an interesting side effect - you can delete the user's roaming profile and re-import that exported profile. Handy in some cases.
\nThis assumes you have tested Outlook behaviour on import, and adjusted using custom prf file and group policy (per the above important note).
\nLogon Properties | \nValue | \n
Script Name | \n\\\\domain.local\\NETLOGON\\OutlookProfiler\\OutlookProfiler.exe | \n
Script Parameters | \nImport=\\\\fileserver\\home\\%username%\\%username%.profile Options=\\\\fileserver\\home\\%username%\\%username%.options TargetProfile=Outlook TargetVersion=2016 log=\\\\fileserver\\home\\%username%\\%username%.import.log | \n
In the above example, we set a group policy in the old environment to run an Outlook Profiler export on login, and another group policy in the new environment to run an Outlook Profiler import on login, which will ensure the first time a user launches Outlook, their profiles will be there.
\nThe below screenshot shows a mock-up of how this works in practice, illustrating an export and import, although I used different settings (such as exporting from Outlook 2016 and importing to Outlook 2013, renaming the profile to Barry).
You can see that, because there was no client options found, the Options file export was not performed, and therefore the Options import errored out. Since this is after the profile import, and is the last operation, I haven't intercepted this exception - it doesn't matter to the success of the profile import.
\nThis is quite a simple application, but it made a huge difference to our migration. The vast majority of users migrated over with all of their Outlook profiles intact, retaining access to their various mailboxes.
\nOutlook behaviour makes it imperfect - we ultimately have to resort to a custom prf file to avoid the default first run behaviour - but if you test and modify your prf file and Group Policy settings, you can generally get this going.
\nI don't expect that there's a huge requirement for this app out there - it filled a very specific use case - but I've put it up in case someone can use it, or perhaps get ideas from it 😀
We use quite a number of Event Timeout instances in our Seq environment, to detect processes that have not completed in time. The nature of the Seq.App.EventTimeout implementation is one that relies on a timeout in seconds, and this can result in keeping track of quite a few different calculations.
\nIf, for example, you have a process that starts at 4:00am and should be finished by 8:00am at latest, you might implement an Event Timeout instance that starts at 4:00am, ends at 9:00am, and has a 14,400 second timeout (that is, 4 hours * 3600 seconds). The added hour for the end time allows for the timeout errors and suppression intervals.
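\nIn code terms, the conversion is just the time difference in seconds:
\n// 4:00am start, 8:00am latest finish => 4 hours * 3600 = 14,400 seconds
var timeoutSeconds = (int)(new TimeSpan(8, 0, 0) - new TimeSpan(4, 0, 0)).TotalSeconds; // 14400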
\nWhat if, like us, you have a number of these configured and want to shift them around, such as pushing the timeout and end time back 15 minutes? Easy enough for one, but 10 or 20 might send you cross-eyed trying to keep track of each of them.
\nThis is fairly easily handled with a simple spreadsheet.
\n\nThis is fairly straightforward - enter your Start Time, old timeout target (in hours and minutes), and new timeout target (in hours and minutes). The spreadsheet then derives:
- Old timeout in seconds (=(C3-B3)*86400 - multiplying the time difference by 86400 converts Excel's day-fraction times to seconds)
- New timeout in seconds (=(D3-B3)*86400)
- New timeout duration in hours and minutes (=D3-B3)
- New end time, adding the extra hour for timeout errors and suppression intervals (=D3+(60/1440))
\nwith some conditional formatting applied so you can see which settings will change.
\nI found this helpful both for keeping track of my changes, and for documentation as well. You can see the new timeout settings below:
\n\nI've uploaded a sample spreadsheet to Github for convenience!
", "author": { "name": "MattMofDoom" }, "tags": [ "Seq", "Event Timeout", "Config", "Apps" ], "date_published": "2021-06-27T21:16:08-07:00", "date_modified": "2021-06-27T21:19:52-07:00" }, { "id": "https://mattmofdoom.com/lurglelogging-v121-more-logging-patterns-for-your-lurgle-convenience/", "url": "https://mattmofdoom.com/lurglelogging-v121-more-logging-patterns-for-your-lurgle-convenience/", "title": "Lurgle.Logging v1.2.1 - More logging patterns for your Lurgle convenience", "summary": "Lurgle approach compared to Serilog Following on from the v1.2.0 multi-threaded correlation release, I thought about whether we could further improve how we interface with Lurgle.Logging. The general approach was to maintain a static interface to logging that would allow us to capture key properties for logging, that would provide nicely structured logs across a variety of applications and implementations. A good goal to have for a common log library, but I hadn't baked much flexibility into this - hence my musing and contemplation. The approach was ultimately based on my original implementation of a Serilog logger, and it was reasonable…", "content_html": "
Following on from the v1.2.0 multi-threaded correlation release, I thought about whether we could further improve how we interface with Lurgle.Logging.
\nThe general approach was to maintain a static interface to logging that would allow us to capture key properties for logging, that would provide nicely structured logs across a variety of applications and implementations. A good goal to have for a common log library, but I hadn't baked much flexibility into this - hence my musing and contemplation. The approach was ultimately based on my original implementation of a Serilog logger, and it was reasonable - perhaps high time - to review.
\nUp until now, the approach has typically looked like:
\n\nLog.Level(LurgLevel.Error).Add(\"An Error\");
Log.Exception(ex).Add(\"An Exception - {Message}\", ex.Message);
Log.Add(\"A simple event entry\");
Log.Level().AddProperty(\"PropertyName\", \"PropertyValue\").Add(\"Message\");
Which is functional but perhaps somewhat limiting. If you're familiar with Serilog, you know that it allows for patterns like:
\n\nLog.Information(\"Test\");
Log.Error(ex, \"Test\");
Log.ForContext(\"PropertyName\", \"PropertyValue\").Write(LogLevel.Error, ex, \"Message\");
and so on.
\nThe advantage of Lurgle.Logging as an implementation of Serilog is that it automatically adds common properties like AppName, AppVersion, MethodName, SourceFile, and LineNumber, along with a number of properties from enrichers, such as ThreadId, MachineName, and MemoryUsage. It also allows you to automatically mask properties, and provides a correlation id implementation that is now rather flexible.
\nNOTE: After percolating on this for a while, I've made some changes to how the static methods work for these new logging patterns. You can still explicitly specify Log.Information(), but it will always be followed by a .Add - you can't combine the log template and arguments anymore, because it didn't really work as well as it could have. You can read about the changes, and see the updated examples, here.
\nI wanted to more closely approximate how Serilog log patterns work, but still retain these advantages ... so I've now exposed a number of new patterns;
\nLog.Add
- existing method, but now allows arguments using the named argument args:
Log.Information
Log.Verbose
Log.Debug
Log.Warning
Log.Error
Log.Fatal
along with Fluent implementations (for example: Log.Level().Fatal(\"Argh!\")
or Log.AddProperty(dictionary).Fatal(\"Argh!\")
) and overloads that allow you to pass exceptions and log levels where appropriate.
For each of these static implementations, you can pass arguments, but because we have the correlation id parameter in addition to the capture of caller members, you will need to explicitly specify args:
as a named argument, e.g.
Log.Add(\"Test {Args1} {Args2}\", args: \"Test\", \"Test2\");
We also add a static implementation of AddProperty, which allows you to start with adding properties, and includes overloads to pass exceptions and log levels.
\nLog.AddProperty(\"TestProperty\", \"TestValue\").Error(\"An Error adding {TestProperty}\");
The overall effect makes for a lot of flexibility, which allows you to decide how you want to use Lurgle. The additions to my LurgleTest app probably illustrate this best:
\n\nLog.Add(\"Simple information log\");
Log.Add(LurgLevel.Debug, \"Simple debug log\");
Log.Add(\"Log with {Properties:l}\", args: \"Properties\");
Log.Information(\"Information event\");
Log.Information(\"Information event with {Properties:l}\", args: \"Properties\");
Log.Verbose(\"Verbose event\");
Log.Verbose(\"Verbose event with {Properties:l}\", args: \"Properties\");
Log.Debug(\"Debug event\");
Log.Debug(\"Debug event with {Properties:l}\", args: \"Properties\");
Log.Warning(\"Warning event\");
Log.Warning(\"Warning event with {Properties:l}\", args: \"Properties\");
Log.Error(\"Error event\");
Log.Error(\"Error event with {Properties:l}\", args: \"Properties\");
Log.Fatal(\"Fatal event\");
Log.Fatal(\"Fatal event with {Properties:l}\", args: \"Properties\");
Log.AddProperty(\"Barry\", \"Barry\").Warning(\"Warning event with {Barry:l}\");
Log.Error(new ArgumentOutOfRangeException(nameof(test)), \"Exception: {Message:l}\", args: \"Error Message\");
Log.AddProperty(LurgLevel.Error, \"Barry\", \"Barry\").Add(\"Log an {Error:l}\", \"Error\");
Log.AddProperty(LurgLevel.Debug, \"Barry\", \"Barry\").Add(\"Just pass the log template with {Barry:l}\");
Log.AddProperty(new ArgumentOutOfRangeException(nameof(test)), \"Barry\", \"Barry\")
.Add(\"Pass an exception with {Barry:l}\");
Log.AddProperty(test).AddProperty(\"Barry\", \"Barry\").Add(
\"{Barry:l} wants to pass a dictionary that results in the TestDictKey property having {TestDictKey}\");
Log.Level().Warning(\"Override the event level and specify params like {Test:l}\", \"Test\");
But of course, we still also allow for the old patterns:
\n\nLog.Level().Add(\"Configured Logs: {LogCount}, Enabled Logs: {EnabledCount}\", Logging.Config.LogType.Count,
Logging.EnabledLogs.Count);
Log.Level().Add(\"Configured Log List:\");
foreach (var logType in Logging.Config.LogType) Log.Level().Add(\" - {LogType}\", logType);
The goal was to give a similar degree of flexibility to a standard Serilog implementation, while retaining the added features of Lurgle.Logging. I think that's been achieved overall - needing to specify arguments as an explicitly named argument gave me pause for thought, but the benefit seems to outweigh the inconvenience.
\nI also exposed Logging.SetCorrelationId to allow another way of managing the correlation id. Generally speaking, you can do this within the static log interface, such as the examples below of generating a new correlation id or passing your own.
\n\nLog.Error(ex, \"Oh no! An error! {Message}\", Logging.NewCorrelationId(), args: ex.Message);
Log.Error(ex, \"Oh no! Barry had an error! {Message)\", \"Barry\", args: ex.Message);
But this also provides opportunity to simply call:
\nLogging.SetCorrelationId(\"Barry\");
if or when you need it.
\nThis release is all about convenience, and I think that's readily achieved. If you've already implemented Lurgle, nothing should break, but you now have more flexibility in how you log with Lurgle!
\nYou can update to v1.2.1 via Nuget, and of course fancy links are a way of life around these parts:
", "author": { "name": "MattMofDoom" }, "tags": [ "Updates", "Structured logging", "Serilog", "Seq", "Lurgle.Logging", "Lurgle", "Correlation", "C#", "Apps" ], "date_published": "2021-06-27T19:12:13-07:00", "date_modified": "2022-01-22T16:02:34-08:00" }, { "id": "https://mattmofdoom.com/lurglelogging-v120-multi-threaded-correlation-ids-are-now-a-thing/", "url": "https://mattmofdoom.com/lurglelogging-v120-multi-threaded-correlation-ids-are-now-a-thing/", "title": "Lurgle.Logging v1.2.0 - Multi-threaded correlation ids are now a thing", "summary": "Multi-threaded correlation ids were not a thing Following on from my work on Seq.Client.WindowsLogins and the subsequent realisation that EventLog's EntryWritten event handler is bad and should feel bad, I contemplated whether I could apply some of my efforts to solve another issue that had been bugging me. Lurgle.Logging was built to handle correlation ids - whether generating a correlation id, or passing one through from another app. The thing that bugged me, though, was that the correlation id was static in nature. By default, Lurgle.Logging would generate a new correlation id at initialisation, and then carry that until either…", "content_html": "
Following on from my work on Seq.Client.WindowsLogins and the subsequent realisation that EventLog's EntryWritten event handler is bad and should feel bad, I contemplated whether I could apply some of my efforts to solve another issue that had been bugging me.
\nLurgle.Logging was built to handle correlation ids - whether generating a correlation id, or passing one through from another app. The thing that bugged me, though, was that the correlation id was static in nature. By default, Lurgle.Logging would generate a new correlation id at initialisation, and then carry that until either your app closed or a new correlation id was generated.
\nYou could compensate for that by passing a new correlation id to Log.Level()
or using Logging.NewCorrelationId()
, but in practice, this meant that in a given thread, you had to keep track of your correlation id while you wanted it, and always pass it through to Lurgle.
This would essentially mean code that looked like:
\n\nvar corrId = Logging.NewCorrelationId();
Log.Level(correlationId: corrId).Add(\"Here is my log entry\");
Log.Level(LurgLevel.Error, corrId).Add(\"Oh no! An error!\");
Log.Level(correlationId: corrId).Add(\"Phew ... moment passed\");
and for multi-threaded apps, you had no choice. You simply had to do this, because Lurgle didn't adequately manage multiple threads.
\nWorking on Seq.Client.WindowsLogins had caused me to do some work with implementing caching, and it seemed readily apparent that we could apply this to Lurgle.Logging.
\nThe idea was that each thread that calls Log.Level()
could have its own correlation id. We need to therefore track each thread's correlation id, but threads can naturally end and so we don't want to track it forever.
So the fundamental requirements were clear - track a correlation id for each thread, and let it expire once that thread has stopped logging for a while.
\nThe result is the Correlation Cache, which is exposed as Logging.Cache
. This cache initialises when logging is initialised, unless the cache has been disabled.
Lurgle.Logging will, by default, create a new correlation id (or store the correlation id that you pass) for each thread that calls Log.Level()
. This correlation id will persist for the configured number of seconds - by default, 600 seconds (10 minutes) - after logging was last seen from that thread. This is a sliding interval, so while new log entries are being added from a given thread, the expiry keeps moving forward.
This makes for a flexible multi-threaded correlation id scheme. Your multi-threaded application will consistently use the same correlation id for a thread, until you switch it or it's no longer needed.
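\nFor example, in a multi-threaded app, each thread now just logs and gets its own id (a minimal sketch, assuming Lurgle.Logging has been initialised and the cache left enabled):
\nParallel.For(0, 3, worker =>
{
    //Each thread gets its own correlation id on its first call to Log.Level() ...
    Log.Level().Add(\"Worker {Worker} starting\", worker);
    //... and subsequent events from the same thread reuse it until the sliding expiry lapses
    Log.Level().Add(\"Worker {Worker} finishing\", worker);
});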
\nConfiguring the cache is done via two new configuration entries:
\nConfiguration | \nDefault | \nDescription | \n
LoggingConfig.EnableCorrelationCache | \ntrue | \nDisabling this will revert Lurgle.Logging to the static correlationid behaviour | \n
LoggingConfig.CorrelationCacheExpiry | \n600 | \nDefaults to 10 minutes | \n
In short - to enable this, you don't need to do anything; it's on by default, but you can disable it if needed, and if 10 minutes isn't enough, you can increase it.
\nThe 10 minute default is pretty generous, but if you have threads that may log less frequently, you might want to increase this.
\nAnd, of course, these configs can be reflected in your App.Config:
\n\n<add key=\"EnableCorrelationCache\" value=\"true\"/>
<add key=\"CorrelationCacheExpiry\" value=\"600\"/>
Generally, Lurgle manages the cache itself, and you shouldn't need to do anything. I've implemented properties and methods for interacting with the cache, though, and they are as follows;
\nProperty/Method | \nDescription | \n
Logging.Cache.Count | \nThe current count of all thread id:correlation id pairs in the cache | \n
Logging.Cache.Add(int threadId, string correlationId) | \nAdd a thread id and correlation id to the cache | \n
Logging.Cache.Replace(int threadId, string correlationId) | \nReplace an existing thread id:correlation id pair in the cache, or add a new one | \n
Logging.Cache.Remove(int threadId) | \nRemove a given thread id and its correlation id from the cache | \n
Logging.Cache.Get(int threadId) | \nRetrieve the given thread id's correlation id from the cache | \n
Logging.Cache.Contains(int threadId) | \nReturns true if the given thread id is in the cache | \n
Logging.Cache.Clear() | \nClear all thread id:correlation id pairs from the cache | \n
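\nFor example, you could give the current thread a known correlation id yourself (a hypothetical sketch - as noted, Lurgle normally manages this for you):
\nvar threadId = Thread.CurrentThread.ManagedThreadId;
//Register a correlation id for this thread if it doesn't have one yet
if (!Logging.Cache.Contains(threadId))
    Logging.Cache.Add(threadId, Logging.NewCorrelationId());
//Retrieve it later from the same thread
var correlationId = Logging.Cache.Get(threadId);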
If we revisit my example from earlier, we'll see a much simpler implementation for multi-threaded code;
\n\nLog.Level().Add(\"Here is my log entry\");
Log.Level(LurgLevel.Error).Add(\"Oh no! An error!\");
Log.Level().Add(\"Phew ... moment passed\");
Log.Level(correlationId: Logging.NewCorrelationId()).Add(\"After all that, I'd really like a different correlation id\");
Log.Level(LurgLevel.Debug).Add(\"CorrelationId is {CorrelationId}\");
You can see from the added lines that we've retained the ability to pass a correlation id - in this case, by generating a new one - but instead of updating the static Logging.CorrelationId
property, we now add or replace it in the cache, where it will stay while you're still using it.
We do retain an ability to use the static correlation id if LoggingConfig.EnableCorrelationCache
is set to false, but this really only suits single threaded applications, and there is no harm in using the correlation cache for that scenario.
As usual, Lurgle.Logging is available via Nuget, and these fancy looking links!
", "author": { "name": "MattMofDoom" }, "tags": [ "Updates", "Structured logging", "Serilog", "Seq", "Lurgle.Logging", "Lurgle", "Correlation", "C#", "Apps" ], "date_published": "2021-06-26T22:38:34-07:00", "date_modified": "2022-01-22T16:01:56-08:00" }, { "id": "https://mattmofdoom.com/eventlogentrywritten-is-bad-and-should-feel-bad-the-v112-update-for-seq-client-for-windows-logins/", "url": "https://mattmofdoom.com/eventlogentrywritten-is-bad-and-should-feel-bad-the-v112-update-for-seq-client-for-windows-logins/", "title": "EventLog().EntryWritten is bad and should feel bad - the v1.1.2 update for Seq Client for Windows Logins", "summary": "Why aren't new logins showing up??? I mentioned in my previous post that the Windows event log can iterate through all log entries and send them to the EventLog().EntryWritten handler repeatedly. I was handling that with an ad-hoc implementation of a 'cache' that aimed to ensure we only examined recent events and de-duplicated. This behaviour was showing up on a Windows 2016 server with a fairly busy event log, but I was also wrong in my assumption that the log was large - a subsequent check indicated it was \"only\" ~40MB (subsequently expanded to ~80MB). The log was overwriting events…", "content_html": "
I mentioned in my previous post that the Windows event log can iterate through all log entries and send them to the EventLog().EntryWritten
handler repeatedly. I was handling that with an ad-hoc implementation of a 'cache' that aimed to ensure we only examined recent events and de-duplicated.
This behaviour was showing up on a Windows 2016 server with a fairly busy event log, but I was also wrong in my assumption that the log was large - a subsequent check indicated it was \"only\" ~40MB (subsequently expanded to ~80MB). The log was overwriting events as needed, and it appears that this is the main cause of the iteration.
\nOver time, though, I observed that Seq.Client.WindowsLogins 'stopped' detecting new logins. Initially I assumed this was either a problem with the cache (possibly unsafe thread handling or a deadlock), or with the period of new events - essentially, that the event handler was gradually taking longer and longer to send new events, so examining the past 10 minutes was inadequate for monitoring.
\nOnce I added a counter to observe the number of items in the cache, I could see what appeared to be a \"drain\" of the cache. Essentially - the cache would travel along and happily build a list of \"seen\" events ... and at some point, suddenly drop to 0 and never recover. This correlated to the behaviour of not detecting new logins, so I assumed that correlation is causation.
I went through quite a few iterations of possible solutions - locks, switching to a System.Runtime.Caching.MemoryCache
implementation, sliding cache and log window, async handling, and eliminating the log window altogether. I also tried adding a watchdog to detect if events hadn't been received and to therefore 'restart' the EventLog().EntryWritten
handler.
The biggest \"success\" I had was when I switched to using the Microsoft.Extensions.Caching.Memory.MemoryCache
implementation. Here, I encountered the exact opposite behaviour - I observed the cache \"stalling\" (hitting an upper count and then staying at that figure indefinitely). Again - no new logins being captured.
At this stage I felt I'd hit a wall, so I aimed to reset the assumptions I'd built as I worked on the problem, and start from first principles, by taking a step back and examining what I knew.
\n- EventLog().EntryWritten reprocesses/resends old events on a server with a busy security log that overwrites events as needed
- MemoryCache is generally threadsafe
\nI may have missed some aspects of my analysis from the list, but it's a reasonable summary.
\nSo starting from the first item - why does EventLog().EntryWritten
do that? Is it a known behaviour? Have others encountered it?
The short answer is yes - but there wasn't a lot to go off. I certainly found instances of people observing that behaviour, but often with no solution, or the solution being simply references to the Microsoft documentation that didn't appear to help. The most concrete thing that I could find was that it appears to relate to old event record ids being removed from the list and therefore causing Windows to re-send all events to the handler.
\nMore searching could perhaps get me to a definitive statement on the behaviour - but having more context, I contemplated that perhaps I shouldn't be accepting this behaviour. Hence, some options;
\n- Polling EventLog().Entries and looking at events that matched event id 4624 with recent TimeGenerated properties. If I did this every 10 seconds, or perhaps every minute, then I could iterate through a list and raise matching properties.
- EventLogReader() with a similar polling approach, with a query that selects only event id 4624 with the Success Audit keyword.
- EventLogWatcher() with a query that selects only event id 4624 with the Success Audit keyword, using the EventLogWatcher().EventRecordWritten handler.
\nEach of these approaches would likely have an effect on the problem, but the EventLogWatcher() option afforded the most \"similar\" solution to the existing implementation, and meant it was most likely to send new logins to Seq in the shortest time possible.
I'll second the opinion of a number of posts I read - there's not a lot of meat to the Microsoft documentation for EventLogWatcher()
. Nonetheless, I managed to work out an implementation and pivot the code over - it's not really difficult.
\n//Query the Security log for event id 4624 with the Audit Success keyword
//(9007199254740992 is 0x20000000000000, the Audit Success keyword bit)
_eventLog = new EventLogQuery(\"Security\", PathType.LogName,
\"*[System[band(Keywords,9007199254740992) and (EventID=4624)]]\");
_watcher = new EventLogWatcher(_eventLog);
_watcher.EventRecordWritten += OnEntryWritten;
_watcher.Enabled = true;
Not too far different from the EventLog().EntryWritten implementation, although it does give us the advantage that an EventRecord is passed, rather than an EventLogEntry that would then also need an EventLogReader() query to extract our properties from it.
With that advantage in mind, the handler becomes relatively \"simple\". The code really only needs to extract the properties and log them. I opted to make the handler async, and send the HandleEventLogEntry()
worker off to a separate thread to avoid blocking new events; the method itself has nothing async to call upon, so I've just used a Task.Run() delegate. There's nothing we need to wait for in that worker; firing it off to a separate thread will result in only a few outcomes that will either be logged or update a counter. I don't think it's particularly expensive in CPU time, but I want to finish the handler's tasks as quickly as possible.
\nprivate static async void OnEntryWritten(object sender, EventRecordWrittenEventArgs args)
{
try
{
//Ensure that events are new and have not been seen already. This addresses a scenario where event logs can repeatedly pass events to the handler.
if (args.EventRecord != null && args.EventRecord.TimeCreated >= ServiceStart &&
!EventList.Contains(args.EventRecord.RecordId))
await Task.Run(() => HandleEventLogEntry(args.EventRecord));
else if (args.EventRecord != null && args.EventRecord.TimeCreated < ServiceStart)
_oldEvents++;
else if (args.EventRecord == null)
_emptyEvents++;
}
catch (Exception ex)
{
Log.Exception(ex).Add(\"Failed to handle an event log entry: {Message:l}\", ex.Message);
}
}
The result is ... so far so good!
I added new counters to the heartbeats in this release, which give me the ability to construct a Seq dashboard. These are the count of cached event ids, plus counters for \"old events\", \"empty events\", and \"unhandled events\".
\nBased on testing, the last 3 should always be 0. We don't seem to hit the event reprocessing problem with this implementation, so we don't have any \"old events\". The EventLogWatcher() appears to reliably send valid event records, so we don't have \"empty events\". And we always extract the properties from events sent to our worker, so there are no \"unhandled events\".
\nThe above dashboard screenshot shows measurements of two servers over several hours, since v1.1.2 was started on each server. The top two graphs and the table show these counters for each server.
\nThe bottom two graphs reflect the Event Timeout for Seq instances that are monitoring our heartbeats. I can use the \"Successfully matched\" log entries to show heartbeats over time, and I can watch for Error logs from each instance to reflect when a timeout has occurred - the service is stopped or server is down.
\nThe fact that we are never seeing old events means that the cache implementation no longer appears to be strictly necessary - however, I do like it for health and statistics purposes. It gives us a picture of overall activity volume, and since it's only a cache of event record ids, it's relatively memory efficient - so I'm leaving it in, for now at least.
\nI'm now well past the period in which these issues had been showing up, and while I'm still monitoring, this appears to be behaving exactly as expected - I'm happy with it.
\nYou can download Seq.Client.WindowsLogin v1.1.2 from the below fancy link (which is now fixed to point to the right repo)!
", "author": { "name": "MattMofDoom" }, "tags": [ "Windows Logins", "Updates", "Seq", "Lurgle.Logging", "Heartbeat", "EventLog", "Event Timeout", "C#", "Apps" ], "date_published": "2021-06-24T20:58:11-07:00", "date_modified": "2022-01-22T16:01:19-08:00" }, { "id": "https://mattmofdoom.com/detecting-logins-like-a-boss-the-seq-client-for-windows-logins/", "url": "https://mattmofdoom.com/detecting-logins-like-a-boss-the-seq-client-for-windows-logins/", "title": "Detecting logins like a boss- the Seq Client for Windows Logins", "summary": "The Journey Begins ... This was a journey that began with an existing, and really useful, Seq application. I've had some mileage in the past from the Seq.Client.EventLog service. I've used it to monitor the Windows Application event log for new logs from a specific source, send them to Seq, and alert using Event Timeout if the expected event doesn't happen at a configured time. It worked well for the purpose, so when a new requirement came up to monitor for successful interactive Windows logins and other efforts were falling short, I started looking at sending logs to Seq with…", "content_html": "
This was a journey that began with an existing, and really useful, Seq application.
\nI've had some mileage in the past from the Seq.Client.EventLog service. I've used it to monitor the Windows Application event log for new logs from a specific source, send them to Seq, and alert using Event Timeout if the expected event doesn't happen at a configured time.
\nIt worked well for the purpose, so when a new requirement came up to monitor for successful interactive Windows logins and other efforts were falling short, I started looking at sending logs to Seq with Seq.Client.EventLog.
\nTo pick up new logins from Windows, you'll typically monitor the Security event log for Success Audit events with the event id 4624. Easy enough to do with Seq.Client.EventLog, right? It has enough configuration elements to accomplish that, except - I did mention that we want interactive logons. That means we need to dig deeper into the event log event, to examine the \"Logon Type\" property, which you can see in the details below:
Sourced from Audit logon events (Windows 10) - Windows security, here are the possible logon types:
\nLogon type | \nLogon title | \nDescription | \n
---|---|---|
2 | \nInteractive | \nA user logged on to this computer. | \n
3 | \nNetwork | \nA user or computer logged on to this computer from the network. | \n
4 | \nBatch | \nBatch logon type is used by batch servers, where processes may be executing on behalf of a user without their direct intervention. | \n
5 | \nService | \nA service was started by the Service Control Manager. | \n
7 | \nUnlock | \nThis workstation was unlocked. | \n
8 | \nNetworkCleartext | \nA user logged on to this computer from the network. The user's password was passed to the authentication package in its unhashed form. The built-in authentication packages all hash credentials before sending them across the network. The credentials do not traverse the network in plaintext (also called cleartext). | \n
9 | \nNewCredentials | \nA caller cloned its current token and specified new credentials for outbound connections. The new logon session has the same local identity, but uses different credentials for other network connections. | \n
10 | \nRemoteInteractive | \nA user logged on to this computer remotely using Terminal Services or Remote Desktop. | \n
11 | \nCachedInteractive | \nA user logged on to this computer with network credentials that were stored locally on the computer. The domain controller was not contacted to verify the credentials. | \n
Speaking broadly - at least for an always-online server - the logon types of interest are 2 (Interactive) and 10 (Remote Interactive).
\nIn order to pick these up, we need to get up close and personal with the event log to extract the event log properties.
\nDoing this quickly turned the need to examine the logon type from a \"problem\" into an opportunity. To look at the properties for a standard event log (like an event id 4624), you can use an EventLogPropertySelector
to select all properties of interest ... and we're sending to Seq, a structured logging server that loves properties. Do you see where this is leading? Of course you do!
\ntry
{
var query = new EventLogQuery(logName, PathType.LogName,
\"*[System[(EventRecordID=\" + entry.Index + \")]]\");
var reader = new EventLogReader(query);
for (var logEntry = reader.ReadEvent(); logEntry != null; logEntry = reader.ReadEvent())
{
//Get all the properties of interest for passing to Seq
var loginEventPropertySelector = new EventLogPropertySelector(new[]
{
\"Event/EventData/Data[@Name='SubjectUserSid']\",
\"Event/EventData/Data[@Name='SubjectUserName']\",
\"Event/EventData/Data[@Name='SubjectDomainName']\",
\"Event/EventData/Data[@Name='SubjectLogonId']\",
\"Event/EventData/Data[@Name='TargetUserSid']\",
\"Event/EventData/Data[@Name='TargetUserName']\",
\"Event/EventData/Data[@Name='TargetDomainName']\",
\"Event/EventData/Data[@Name='TargetLogonId']\",
\"Event/EventData/Data[@Name='LogonType']\",
\"Event/EventData/Data[@Name='LogonProcessName']\",
\"Event/EventData/Data[@Name='AuthenticationPackageName']\",
\"Event/EventData/Data[@Name='WorkstationName']\",
\"Event/EventData/Data[@Name='LogonGuid']\",
\"Event/EventData/Data[@Name='TransmittedServices']\",
\"Event/EventData/Data[@Name='LmPackageName']\",
\"Event/EventData/Data[@Name='KeyLength']\",
\"Event/EventData/Data[@Name='ProcessId']\",
\"Event/EventData/Data[@Name='ProcessName']\",
\"Event/EventData/Data[@Name='IpAddress']\",
\"Event/EventData/Data[@Name='IpPort']\",
\"Event/EventData/Data[@Name='ImpersonationLevel']\"
});
var eventProperties = ((EventLogRecord) logEntry).GetPropertyValues(loginEventPropertySelector);
}
}
catch (Exception)
{
    //Errors reading the event log entry are deliberately swallowed in this snippet
}
If we can pull all those properties, we can make them properties within Seq structured logs, which means each property can be used for queries, signals, dashboards, alerts ... it's not hard to see the benefits! 😀
\nAround this point, I was still working with the Seq.Client.EventLog base, and looking at extending the SeqApi
and RawEvent
class to add the properties, but I started to find myself stripping back the code to essentials and then building new features and tackling challenges that seemingly only arise for this kind of targeted implementation.
To be fair, I think this behaviour - the event log iterating through old entries and repeatedly passing them to the handler - is a result of a very large security log that overwrites events as needed, and has a high level of log activity. On a system with a relatively small security log and low log activity, this behaviour wasn't observed.
\nOther challenges and opportunities:
\nTo reduce the overhead and time to log new logins, we need to filter and de-duplicate event log entries, so that only recent logs are examined, duplicates are ignored, and we only use the EventLogPropertySelector
for the specific events we're watching.
To achieve this, I implemented logic to filter new EntryWritten
events to ensure they're no more than 10 minutes old and haven't already been seen. I also filter the event id (instance id) and event type - if it's not an event id 4624 of type SuccessAudit, it's of no interest.
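\nIn rough terms, that filter amounts to something like this (an illustrative sketch using System.Diagnostics.EventLogEntry - not the exact Seq.Client.WindowsLogins code):
\nprivate static bool IsWantedEvent(EventLogEntry entry)
{
    //Recent, not yet seen, event id 4624, Success Audit - anything else is of no interest
    return entry.TimeGenerated >= DateTime.Now.AddMinutes(-10) &&
           !EventList.Contains(entry.Index) &&
           entry.InstanceId == 4624 &&
           entry.EntryType == EventLogEntryType.SuccessAudit;
}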
To keep track of events, I added a class, TimedEventBag
, to which new event entry ids can be added with a 10 minute expiry time. It handles the expiry with an internal MemoryCache
, so we can simply add new events with an EventList.Add(entry.Index)
, and check if the list already contains that event with EventList.Contains(index)
.
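\nThe shape of that class is roughly as follows (a minimal sketch built on System.Runtime.Caching.MemoryCache - the real TimedEventBag may differ in detail):
\nusing System.Runtime.Caching;

public class TimedEventBag
{
    private readonly MemoryCache _cache = new MemoryCache(\"TimedEventBag\");
    private readonly int _expirySeconds;

    public TimedEventBag(int expirySeconds)
    {
        _expirySeconds = expirySeconds;
    }

    public int Count
    {
        get { return (int) _cache.GetCount(); }
    }

    public void Add(int eventIndex)
    {
        //Each event record id expires from the cache after the configured time
        _cache.Set(eventIndex.ToString(), true,
            new CacheItemPolicy {AbsoluteExpiration = DateTimeOffset.Now.AddSeconds(_expirySeconds)});
    }

    public bool Contains(int eventIndex)
    {
        return _cache.Contains(eventIndex.ToString());
    }
}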
With this in place, we now reliably log events to Seq in a short time after the login occurs, and we only log a given event log entry once.
\nOne added snag though - Windows can actually log two event id 4624 events; one has a Logon Guid (eg. {4fabda43-ce7b-9d1b-8a03-da0f930a775c}) and one doesn't (Logon Guid shows as {00000000-0000-0000-0000-000000000000}).
\nThe reason for this is that the populated Logon Guid is a correlation id with Kerberos events, while the empty Logon Guid is for \"everything else\". In a non-Kerberos environment, you'll only see the empty Logon Guid event. Both are essentially valid logon event entries, with the same properties except for this.
\nSince we're using this in a Kerberos environment, and we only want to receive one log event per login so that we can reliably alert on it, I select only the event with a populated Logon Guid.
\nThe EventLogPropertySelector
gives us all the event properties associated with an event id 4624, so it's quite trivial to look at the logon type for types 2 and 10.
Logon type 10 is easy - we only get these if someone logs in via a remote connection, such as an RDP (Remote Desktop) connection.
\nLogon type 2 can be challenging though, because some non-interactive processes (eg. services) can log event id 4624 with logon type 2 for a process that they're launching. Examples that I've seen include SQL Server Reporting Services and SQL Server Integration Services. On examination, I found that these \"non-interactive interactive processes\" will log certain properties with a \"-\", which doesn't occur with \"real\" interactive processes.
\nFor example, the \"IPAddress\" property will contain \"127.0.0.1\" for an interactive user, but \"-\" for these \"non-user\" events. That means all we need to do is examine one of these properties, and \"IPAddress\" will fit the bill nicely.
\nSo with this and the Kerberos Logon Guid filter in mind, our extracted EventLogPropertySelector
list for an event can now be pared down to only the events that we want to log to Seq;
\npublic static bool IsNotValid(IList<object> eventProperties)
{
    //Only interactive users are of interest - logon types 2 and 10. Some non-interactive services
    //can launch processes with logon type 2, but log \"-\" for IpAddress and an empty Logon Guid.
    return ((uint) eventProperties[8] != 2 && (uint) eventProperties[8] != 10) ||
           (string) eventProperties[18] == \"-\" ||
           eventProperties[12].ToString() == \"00000000-0000-0000-0000-000000000000\";
}
So now the final piece - I mentioned a common logging library, and of course I'm referring to Lurgle.Logging. Using this provides us with automatically populated common properties like AppName and MachineName (among others), and allows consolidating the service and event log logging to a single implementation that means we can see everything happening in our Seq.Client.WindowsLogins instance.
\nThis means that our event log properties can readily be sent to Seq with a single Lurgle that will result in a Seq message that's nicely formatted for readability, with a bunch of extra properties to devour!
\n\nLog.Level(Extensions.MapLogLevel(entry.EntryType))
.AddProperty(\"EventId\", entry.EventID)
.AddProperty(\"InstanceId\", entry.InstanceId)
.AddProperty(\"EventTime\", entry.TimeGenerated)
.AddProperty(\"Source\", entry.Source)
.AddProperty(\"Category\", entry.CategoryNumber)
.AddProperty(\"EventLogName\", logName)
.AddProperty(\"EventRecordID\", entry.Index)
.AddProperty(\"Details\", entry.Message)
.AddProperty(\"SubjectUserSid\", eventProperties[0])
.AddProperty(\"SubjectUserName\", eventProperties[1])
.AddProperty(\"SubjectDomainName\", eventProperties[2])
.AddProperty(\"SubjectLogonId\", eventProperties[3])
.AddProperty(\"TargetUserSid\", eventProperties[4])
.AddProperty(\"TargetUserName\", eventProperties[5])
.AddProperty(\"TargetDomainName\", eventProperties[6])
.AddProperty(\"TargetLogonId\", eventProperties[7])
.AddProperty(\"LogonType\", eventProperties[8])
.AddProperty(\"LogonProcessName\", eventProperties[9])
.AddProperty(\"AuthenticationPackageName\", eventProperties[10])
.AddProperty(\"WorkstationName\", eventProperties[11])
.AddProperty(\"LogonGuid\", eventProperties[12])
.AddProperty(\"TransmittedServices\", eventProperties[13])
.AddProperty(\"LmPackageName\", eventProperties[14])
.AddProperty(\"KeyLength\", eventProperties[15])
.AddProperty(\"ProcessId\", eventProperties[16])
.AddProperty(\"ProcessName\", eventProperties[17])
.AddProperty(\"IpAddress\", eventProperties[18])
.AddProperty(\"IpPort\", eventProperties[19])
.AddProperty(\"ImpersonationLevel\", eventProperties[20])
.Add(
\"[{AppName:l}] New login detected on {MachineName:l} - {TargetDomainName:l}\\\\{TargetUserName:l} at {EventTime:F}\");
And, of course, we can send a simple heartbeat event every 10 minutes for handling with Event Timeout for Seq (and especially the latest release)!
\n\nprivate static void ServiceHeartbeat(object sender, EventArgs e)
{
Log.Level(LurgLevel.Debug)
.AddProperty(\"ItemCount\", EventList.Count)
.AddProperty(\"NextTime\", DateTime.Now.AddMilliseconds(Config.HeartbeatInterval))
.Add(
\"{AppName:l} Heartbeat [{MachineName:l}] - Cache of timed event ids is at {ItemCount} items, Next Heartbeat at {NextTime:H:mm:ss tt}\");
}
I've blanked out some properties from our production Seq, but here's a sample of the logging, including the heartbeats.
And here's some of the properties that are sent - we even include the original Windows event log text as a property, and you can see part of that at the top of the screenshot. Even with properties blanked out, I'm sure you get the idea.. this brings a lot of potential for monitoring and alerting!
Seq.Client.WindowsLogins is available for download from Github, with install instructions in the readme!
I've released a new update to Event Timeout for Seq, which improves the handling of 24 hour windows (eg. Start 00:00, End 00:00) and how repeat timeouts operate.
\nOrdinarily, Event Timeout is forward looking - it always calculates the next start time if the configured start time would fall in the past. That means that when you configure your event timeout at 2pm on Monday with a start time of 1pm, Event Timeout will determine that the next \"showtime\" (the interval between the start and end times where events are monitored) will be 1pm on Tuesday.
\nFor a 24 hour timeout, we need to handle this a little differently. We should always enter \"showtime\" even though the start time is in the past, because that is the expected behaviour - a showtime that is always active.
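\nAs a sketch of the difference (illustrative only - not the actual Seq.App.EventTimeout source):
\nprivate static DateTime GetNextStart(DateTime now, TimeSpan start, TimeSpan end)
{
    //A 24 hour window (start == end) is always in showtime, so start immediately
    if (start == end)
        return now;
    var todayStart = now.Date.Add(start);
    //Otherwise look forward - a start time in the past rolls over to the next day
    return todayStart > now ? todayStart : todayStart.AddDays(1);
}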
\nThis is most likely to be used with a repeating timeout, and in fact is the reason for this update. I have a requirement to monitor for a \"heartbeat\" - a repeating log event - and alert if the event is not logged. This is the perfect use case for the \"Enable Repeat Timeout\" setting - if we set Start time and End time to 00:00, and enable repeating timeouts, we can detect any time the heartbeat stops and cause an OpsGenie alert to be raised.
\nThe config for this type of scenario is shown below. You can see that I have a couple of properties that I'm matching - the @Message property must contain \"heartbeat\" (it's case insensitive), and the MachineName property must contain \"myserver\".
\nProperty 1 is a \"special\" property which defaults to @Message if a name is not specified, so \"Property 1 name\" is left blank in this instance.
I noticed that this wasn't quite working how I expected, due to a logic change at some point, so I have corrected that case. For a 24 hour window, we can now enter showtime immediately as expected.
\nI also noticed that the repeating interval code had been set to use the existing \"suppression interval\" to limit when it logs a match. That proved limiting - I wanted to stop alerts being raised for 15 minutes, but the heartbeat was occurring every 10 minutes, and I always wanted to see when a positive match was made against a log event.
\nI've therefore added a \"Repeat timeout suppression\" configuration, which allows you to configure how long positive matches are suppressed. This is separate from \"suppression interval\" which controls how often error logs can be output.
\nThe below screenshot has had some details blanked out, but allows you to see the net effect - positive matches against my 10 minute heartbeat are logged, which is useful for diagnostic and monitoring purposes.
\n\nI've also added a minimum value to each of the intervals.
\nAnd finally, I've set up a number of unit tests in the project, which in particular make it easier to ensure that time calculations are behaving as expected. I've tied that in to a fancy CI/CD setup using Appveyor, so that testing and deployment of new releases to Nuget is automated!
\nAs usual, you can install Event Timeout for Seq using Seq.App.EventTimeout as the package id in Seq's \"Install from Nuget\" page.
\nYou can update existing instances by going to Event Timeout's Manage button in the Seq Apps screen, and clicking Update.
\n\n ", "author": { "name": "MattMofDoom" }, "tags": [ "Updates", "Unit Tests", "Seq", "Heartbeat", "Event Timeout", "CI/CD", "C#", "Apps" ], "date_published": "2021-06-21T18:34:53-07:00", "date_modified": "2022-01-22T15:56:35-08:00" }, { "id": "https://mattmofdoom.com/passing-priority-responder-and-tags-from-seq-to-opsgenie/", "url": "https://mattmofdoom.com/passing-priority-responder-and-tags-from-seq-to-opsgenie/", "title": "Passing Priority, Responder, and Tags from Seq to OpsGenie!", "summary": "Building up the Seq app for OpsGenie Over the past few weeks, I've worked with Nicholas Blumhardt to enhance the Seq.App.OpsGenie application for Seq. Nicholas is the founder and CEO of Datalust, the company behind Seq, and is very active in the community - which is awesome, and has meant that there's a bunch of open source Seq apps created by him, which extend and enhance Seq's capabilities, including interfaces to other systems and platforms. He created the Opsgenie app, and we've been using it extensively to dramatically transform our monitoring and alerting landscape. With Nicholas' kind support, encouragement, and feedback,…", "content_html": "
Over the past few weeks, I've worked with Nicholas Blumhardt to enhance the Seq.App.OpsGenie application for Seq. Nicholas is the founder and CEO of Datalust, the company behind Seq, and is very active in the community - which is awesome, and has meant that there's a bunch of open source Seq apps created by him, which extend and enhance Seq's capabilities, including interfaces to other systems and platforms. He created the Opsgenie app, and we've been using it extensively to dramatically transform our monitoring and alerting landscape.
\nWith Nicholas' kind support, encouragement, and feedback, I added a number of enhancements to the app - most notably, the ability to pass Priority, Responders, and Tags through from event properties.
\nThese changes are now up and running on Nuget, and I've successfully tested the new features with OpsGenie.
\nMany of the changes most immediately benefit Event Timeout for Seq, and the recent v1.4.2 release was planned to take full advantage of them. As events panned out, I found a regression in the previous version that caused the AbstractAPI Public Holidays deserialization to fail when there was a public holiday to evaluate. I'd made a couple of mistakes while refactoring with Resharper which were readily corrected.
\nThat meant that Event Timeout 1.4.x was released earlier than expected as it was stable and the best candidate for release with the fix. It had only been awaiting a merge of the last pull for Seq.App.Opsgenie to be released.
\nWith the OpsGenie app now updated on Nuget, you can take advantage of the new features. In short - you can create a single OpsGenie instance to watch for Event Timeout alert events, with Event Timeout controlling the Priority, Responders, and Tags that will be sent to OpsGenie via properties that it logs!
\nIf, like us, you have different on-call support for the various components of your infrastructure, this is invaluable. You can target the timeout to the right responders, with the right priority, and the tags you need to pass to OpsGenie. All of this feeds straight into OpsGenie rules and policies to give you the power you need over your alerts. And of course - if you have Jira and use the OpsGenie Edge Connector script that I customised to pass OpsGenie priorities and tags through to Jira - your timeout priorities and tags will make it all the way to your Jira tickets!
\nWith Event Timeout, this is controlled with the following configuration properties:
These configurations will be passed to Seq when a timeout alert is raised, which leaves them ready to be picked up by an Opsgenie app instance:
So all we need is an OpsGenie instance configured to look for and map these properties!
\nHere's what that looks like:
You can see the new configuration properties in the above screenshot:
\nIf you simply wanted to pass a static Priority, Responder, and Tags, that's done with the corresponding static configs.
\nThe magic comes when you use the other new properties.
\nWe use three properties to control the pass-through/dynamic property mappings.
\nThese three properties must be set for priority mapping to be performed. You're not constrained to just @Level - any valid property that is passed by an event can be used, including Event Timeout's Priority property!
\nWe use three properties to control pass-through/dynamic responder mappings.
\nThis was actually the first feature. Event Timeout already logged a Tags property with tags configured for a given timeout, and being able to pass those through to OpsGenie was the original reason that I started to work on the Seq.App.OpsGenie code.
\nIn short, you simply need to configure as follows:
\nThe pass-through/dynamic tag feature will append the tags that are passed from an event to any that you configured in the Alert Tags configuration. This means you can combine static and dynamic tags seamlessly!
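\nConceptually, the merge is simple (a hypothetical sketch - configuredTags and eventTags stand in for the app's actual fields):
\n//Static tags from the app config, plus pass-through tags from the event, de-duplicated
var alertTags = configuredTags
    .Concat(eventTags)
    .Distinct(StringComparer.OrdinalIgnoreCase)
    .ToList();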
\nResults, of course, are what really matter. Below is a screenshot from Seq of a test alert to OpsGenie. I had to blank out the Responder Mappings data as it had an email address that I didn't want to show - but without a means to show you an OpsGenie alert in action, this is the best illustration of the OpsGenie app passing the Event Timeout priority, responder, and tags through to OpsGenie, where our rules, policies, and escalations can take care of the rest. And, of course, the priorities and tags are translated all the way through to Jira, thanks to my OEC script!
I think this is a cool addition to Seq and the OpsGenie app, and of course a major boost for Event Timeout.
\nI'd love to see other inputs and \"log output\" apps implement the Responder, Priority, and Tags to allow the OpsGenie app to pass them through - and of course more Seq apps that interface with other systems could also benefit. I may well take a pass at some of my favourite Seq apps with an eye to this.
\nI also have an enhancement suggestion open for Seq itself, to allow dashboard alerts to pass these properties. Nicholas seemed pretty positive about it, so fingers crossed!
\n", "author": { "name": "MattMofDoom" }, "tags": [ "Wheeeee", "Updates", "Structured logging", "Seq", "OpsGenie", "OEC", "Jira", "Event Timeout", "C#", "Apps" ], "date_published": "2021-06-18T00:15:22-07:00", "date_modified": "2022-01-22T15:57:14-08:00" }, { "id": "https://mattmofdoom.com/lurglealerting-v1110-and-lurglelogging-v1115-released/", "url": "https://mattmofdoom.com/lurglealerting-v1110-and-lurglelogging-v1115-released/", "title": "Lurgle.Alerting v1.1.10 and Lurgle.Logging v1.1.15 Released", "summary": "I've just pushed out an update to Lurgle.Alerting on Nuget. This release adds a Handlebars template option, based on the implementation by Matthew Turner at FluentEmail.Handlebars (github.com). When I came across the FluentEmail.Handlebars package, I was keen to use it, but it was only compiled against .NET Standard 2.1, and using some older versions of FluentEmail.Core and Handlebars.Net. Lurgle.Alerting targets support for .NET 4.6.1 as a minimum, and aims to keep dependencies updated. None of these are insurmountable and I initially looked at forking, updating, and sending a pull back to the project; however the implementation itself was quite simple…", "content_html": "
I've just pushed out an update to Lurgle.Alerting on Nuget. This release adds a Handlebars template option, based on the implementation by Matthew Turner at FluentEmail.Handlebars (github.com).
\nWhen I came across the FluentEmail.Handlebars package, I was keen to use it, but it was only compiled against .NET Standard 2.1, and using some older versions of FluentEmail.Core and Handlebars.Net. Lurgle.Alerting targets support for .NET 4.6.1 as a minimum, and aims to keep dependencies updated.
\nNone of these are insurmountable and I initially looked at forking, updating, and sending a pull back to the project; however the implementation itself was quite simple with only a couple of classes and a few methods, and so I decided to include the code in Lurgle.Alerting with acknowledgement and links back to the source repository.
\nIf I were substantively improving the code, of course, I'd have gone down the fork-and-pull-request path. The goal is to provide multiple rendering options in Lurgle.Alerting, allowing a choice in implementation. More choice is better.
\nI've also recently updated Lurgle.Logging with a minor update - if LogFolder is not specified in the config, Lurgle.Logging provides a fallback config of using the same folder as your executable... or it was meant to. It was actually returning the full path including the executable ... so I fixed that.
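\nIn other words, the fallback now amounts to something like this (an illustrative one-liner, not the exact library code):
\n//Use the folder containing the executable, rather than the executable's full path
var logFolder = Path.GetDirectoryName(Assembly.GetEntryAssembly().Location);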
\nThe Lurgle links below are so fancy that you can click them and get the latest version:
\nLurgle.Logging
Lurgle.Alerting
", "author": { "name": "MattMofDoom" }, "tags": [ "Updates", "Structured logging", "Serilog", "Seq", "Razor", "MailKit", "Lurgle.Logging", "Lurgle.Alerting", "Lurgle", "Handlebars", "Fluid", "FluentEmail", "C#", "Apps" ], "date_published": "2021-06-16T23:38:00-07:00", "date_modified": "2022-01-22T16:00:13-08:00" }, { "id": "https://mattmofdoom.com/event-timeout-for-seq-v142-released/", "url": "https://mattmofdoom.com/event-timeout-for-seq-v142-released/", "title": "Event Timeout for Seq v1.4.2 released", "summary": "A new release of Seq.App.EventTimeout is out. This was a little earlier than I planned to release v1.4.x, but there was a bug in the AbstractAPI deserialization as a result of some code refactoring which I'd missed. As usual, you can install Event Timeout for Seq using Seq.App.EventTimeout as the package id in Seq's \"Install from Nuget\" page. You can update existing instances by going to Event Timeout's Manage button in the Seq Apps screen, and clicking Update.", "content_html": "
A new release of Seq.App.EventTimeout is out.
\nThis was a little earlier than I planned to release v1.4.x, but there was a bug in the AbstractAPI deserialization as a result of some code refactoring which I'd missed.
\nAs usual, you can install Event Timeout for Seq using Seq.App.EventTimeout as the package id in Seq's \"Install from Nuget\" page.
\nYou can update existing instances by going to Event Timeout's Manage button in the Seq Apps screen, and clicking Update.
\n\n ", "author": { "name": "MattMofDoom" }, "tags": [ "Updates", "Seq", "Public Holidays", "Event Timeout", "C#", "Apps" ], "date_published": "2021-06-13T23:35:06-07:00", "date_modified": "2022-01-22T15:55:12-08:00" }, { "id": "https://mattmofdoom.com/lurglelogging-v1114-and-lurglealerting-v119-released/", "url": "https://mattmofdoom.com/lurglelogging-v1114-and-lurglealerting-v119-released/", "title": "Lurgle.Logging v1.1.14 and Lurgle.Alerting v1.1.9 Released", "summary": "I've pushed out updates to Lurgle.Logging and Lurgle.Alerting today. The Lurgle.Logging update is minor - I noticed that Log.Add wasn't correctly passing the calling method, source file, and line number. Lurgle.Alerting has received a more substantial update: This helps to make Lurgle.Alerting even more useful and reliable! You can get the updated Lurgles via the following fancy links:", "content_html": "
I've pushed out updates to Lurgle.Logging and Lurgle.Alerting today.
\nThe Lurgle.Logging update is minor - I noticed that Log.Add wasn't correctly passing the calling method, source file, and line number.
\nLurgle.Alerting has received a more substantial update:
\n- Alerting.IsDebug is a global boolean variable that is set via Alerting.SetDebug(). When set, it will ensure that recipients are automatically replaced with the debug email address. If you have a \"debug mode\" that can be switched on or off for your app, just make sure you call Alerting.SetDebug(true) for debug mode, configure MailDebug, and let Lurgle make sure you don't accidentally leak data while testing!
- MailDebug is a new setting for the debug email address. This affords the opportunity to differentiate a debug email address from your default MailTo address.
- If a comma-delimited string of email addresses is passed to the To(), Cc(), or Bcc() methods, it will automatically be parsed into the list of recipients.
- Alert.From() and ReplyTo() also handle comma-delimited strings, but since these can only contain one email address, the first valid email address will be selected.
- The Alerting.Init() method will automatically validate that the MailHost, MailFrom, MailTo, MailDebug, and MailSubject config values have been configured.
- Alerting.Init() failures are reported via the Alerting.AlertFailures static list, for handling via your code.
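\nA hypothetical init-and-check pattern using these validations (the failure handling shown is illustrative):
\nAlerting.Init();
//Any missing or invalid config values are surfaced for your code to handle
foreach (var failure in Alerting.AlertFailures)
    Console.WriteLine(\"Alerting config problem: \" + failure);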
\nYou can get the updated Lurgles via the following fancy links:
\nLurgle.Logging
Lurgle.Alerting
\n
\n
\n
", "author": { "name": "MattMofDoom" }, "tags": [ "Wheeeee", "Updates", "Structured logging", "Serilog", "Seq", "Razor", "MailKit", "Lurgle.Logging", "Lurgle.Alerting", "Lurgle", "Liquid", "FluentEmail", "C#", "Apps" ], "date_published": "2021-06-13T22:59:42-07:00", "date_modified": "2022-01-22T15:45:32-08:00" }, { "id": "https://mattmofdoom.com/lurglealerting-a-standardised-fluentemail-implementation-with-extra-goodies/", "url": "https://mattmofdoom.com/lurglealerting-a-standardised-fluentemail-implementation-with-extra-goodies/", "title": "Lurgle.Alerting - a standardised FluentEmail implementation with extra goodies!", "summary": "Another Lurgle Around the time that I tackled my original Serilog logging implementation, I also looked at our email alerting. Emails can be used for a variety of reasons, and it's not uncommon that they are sent as a simple string that concatenates or formats variables. In this scenario, the emails are typically embedded in the source code and don't exactly lend themselves to easy updates (and aren't too pretty either). Enter FluentEmail with its templating capabilities. I liked the overall implementation, and the power of Razor templates made a massive difference to how we could approach alerting - and…", "content_html": "
Around the time that I tackled my original Serilog logging implementation, I also looked at our email alerting. Emails can be used for a variety of reasons, and it's not uncommon that they are sent as a simple string that concatenates or formats variables. In this scenario, the emails are typically embedded in the source code and don't exactly lend themselves to easy updates (and aren't too pretty either).
\nEnter FluentEmail with its templating capabilities. I liked the overall implementation, and the power of Razor templates made a massive difference to how we could approach alerting - and the app I was developing needed a lot of power here. Nowadays FluentEmail also offers Liquid templates using the Fluid project.
\nI've tended to treat logging and alerting as having an innate relationship, so when I created Lurgle.Logging and started switching apps over from their inbuilt logging classes to Lurgle.Logging, it seemed apparent that an opportunity existed to give the email code a similar treatment.
\nLike Serilog, there are a lot of ways to configure FluentEmail. It has quite a lot of options, and you can readily wire it up in any app. And like the logging scenario - email is oft-neglected, yet important, and subject to the same kind of challenges as you move through new applications and multiple developers. In fact, as FluentEmail and the RazorLight library that powers Razor templates had evolved, we had wound up on a much older version that would be challenging to get up to date \"in place\".
\nSo a common alerting library is useful and avoids the typical pitfalls. Again, like logging, it affords the prospect of both keeping my own app alerting implementations up to date if and when I add new features, as well as providing the ability for all developers in my workplace to use the same standardised implementation. First, some rules to encourage \"good\" implementation of alerting;
\nand so on.
\nThe result - Lurgle.Alerting
\nLurgle.Alerting is a standardised implementation of FluentEmail, which can help to avoid common pitfalls and challenges, with a few extra features. It doesn't replace FluentEmail, but it can help to get you up and running quickly.
\nIt implements several key FluentEmail components:
\nIt also implements;
\nOne of the most immediate usages for alerting is the ability to tell someone that \"something\" happened. Lurgle.Alerting aims to make this achievable in as simple a way as possible.
\nPassing:
\nAlert.To().Subject(\"Stuff happened\").Send(\"Send help!\");
or
\nAlert.From().Subject(\"Stuff happened\").Send(\"Send help!\");
will have the same effect - an email from the default From address will be sent to the default To address. That makes it simple to get an alert out to (for example) the team that maintains the app.
\nOf course, there's plenty of other ways that this could go down. For example, I might want to tell the business and cc the support team;
\nAlert.To(\"business@somecorp.com\").Cc().Subject\"Stuff happened\").Send(\"Send help!\");
will do the job.
\nMaybe we want a different From address?
\nAlert.From(\"myapp@somecorp.com\").To(\"business@somecorp.com\").Cc().Subject\"Stuff everywhere\").Send(\"Help please!);
Maybe if the app is in debug mode, we want to ensure it doesn't go out to the real address? (NOTE: This was subsequently updated to act globally, as a more rational implementation)
\nAlert.To(\"business@somecorp.com\", isDebug:true).Cc().Subject\"Stuff happened\").Send(\"Send help!\");
Maybe I want to reference a key in the app.config for the email address, and give them a formatted name rather than just the email?
\nAlert.To(\"BusinessUnitName\", \"Business Guys\", AddressType.FromConfig).Cc().Subject\"Stuff happened\").Send(\"Send help!\");
Pass a list or array of email addresses and attach a file, list of files, or stream?
\nAlert.To(myList).Cc().Subject(\"Stuff happened\").Attach(myFile).Attach(myFileList).Attach(myStream).Send(\"Send help!\");
And of course, pass a template.
\n\nAlert.To().Subject(\"Test Liquid Template\").SendTemplateFile(\"Liquid\", new
{
Alerting.Config.AppName,
Alerting.Config.AppVersion,
MailRenderer = Alerting.Config.MailRenderer.ToString(),
MailSender = Alerting.Config.MailSender.ToString(),
Alerting.Config.MailTemplatePath,
Alerting.Config.MailHost,
MailTestTimeout = Alerting.Config.MailTestTimeout / 1000,
Alerting.Config.MailPort,
Alerting.Config.MailUseAuthentication,
Alerting.Config.MailUsername,
Alerting.Config.MailPassword,
Alerting.Config.MailUseTls,
MailTimeout = Alerting.Config.MailTimeout / 1000,
Alerting.Config.MailFrom,
Alerting.Config.MailTo
});
The above is from the LurgleTest app for Lurgle.Alerting, and passes the config properties to a Liquid template. Razor templates can typically access the namespaces available in your application, so the models can be simpler, but the above is also viable.
\nLurgle.Alerting also implements an ability to add inline attachments as a single call, which is convenient for embedding images to your email, using the AttachInline
method.
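\nFor example, something along these lines embeds an image (the AttachInline signature here is my assumption - check the library for the exact parameters):
\n//Attach logo.png inline so the email body or template can reference it by content id
Alert.To()
    .Subject(\"Stuff happened\")
    .AttachInline(\"logo.png\", \"logoCid\")
    .Send(\"Stuff happened - see the inline logo, referenced as cid:logoCid\");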
Like Lurgle.Logging, I implement an ability to read the config from the App.Config, but it's not mandatory. You can also call Alerting.SetConfig
and use the AlertConfig constructor to pass any property. As with Lurgle.Logging, there are defaults for a number of properties and you can pass in just what's necessary. As an example, in LurgleTest, I switch the config between Razor and Liquid templates:
\nConsole.WriteLine(\"Send Razor template ...\");
Alerting.SetConfig(new AlertConfig(Alerting.Config, mailRenderer: RendererType.Razor));
Alert.To().Subject(\"Test Razor Template\").SendTemplateFile(\"Razor\", new { });
Console.WriteLine(\"Send Liquid template ...\");
Alerting.SetConfig(new AlertConfig(Alerting.Config, mailRenderer: RendererType.Liquid));
Alert.To().Subject(\"Test Liquid Template\").SendTemplateFile(\"Liquid\", new { });
The prescriptive approach to the FluentEmail implementation means that Lurgle.Alerting compensates by exposing configurability:
\n\n<?xml version=\"1.0\" encoding=\"utf-8\"?>
<configuration>
<appSettings file=\"C:\\Users\\mattm\\source\\repos\\Lurgle.Alerting\\LurgleTest\\secrets.config\">
<add key=\"AppName\" value=\"Test\" />
<add key=\"MailRenderer\" value=\"Razor\" />
<add key=\"MailSender\" value=\"MailKit\" />
<add key=\"MailTemplatePath\" value=\"\" />
<add key=\"MailHost\" value=\"mail\" />
<add key=\"MailPort\" value=\"25\" />
<add key=\"MailTestTimeout\" value=\"3\" />
<add key=\"MailUseAuthentication\" value=\"false\" />
<add key=\"MailUsername\" value=\"\" />
<add key=\"MailPassword\" value=\"\" />
<add key=\"MailUseTls\" value=\"true\" />
<add key=\"MailTimeout\" value=\"60\" />
<add key=\"MailFrom\" value=\"bob@builder.com\" />
<add key=\"MailTo\" value=\"wendy@builder.com\" />
<add key=\"MailSubject\" value=\"Alert!\"/>
</appSettings>
</configuration>
Most of these properties are straightforward enough, but MailTestTimeout
bears mention as the property that controls the mail host connectivity test that is performed during initialisation. If it's set to 0, a test won't be performed. Anything higher will cause Lurgle.Alerting to make a TCP connection to the mail host during initialisation; if it fails, it will return an InitResult
so you can determine the reason that alerting doesn't occur.
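\nThe test is conceptually equivalent to this sketch (illustrative, not the actual Lurgle.Alerting implementation, and assuming the config value is in milliseconds as the LurgleTest template above suggests):
\nusing (var client = new TcpClient())
{
    //Attempt a TCP connection to the mail host within the configured timeout
    var connectTask = client.ConnectAsync(Alerting.Config.MailHost, Alerting.Config.MailPort);
    var reachable = connectTask.Wait(TimeSpan.FromMilliseconds(Alerting.Config.MailTestTimeout));
}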
AppName
, of course, is also implemented in Lurgle.Logging. It's intended that these are common and available to both libraries. You only need to configure them once, of course. AppVersion
is also available for use. These are both special properties that, if not configured, will be determined from the executing assembly.
MailTemplatePath
will also be determined from the executing assembly path if not specified. It will default to the Templates folder under this location.
Lurgle.Alerting is available from Nuget and the code is on Github. It is, overall, a simpler implementation than Lurgle.Logging, but it complements it quite well. You can readily configure either or both of them into your solution, and get going quickly with your alerting needs. You can always directly integrate FluentEmail to your code, but the Lurgle.Alerting implementation might still provide ideas. It does help to avoid a few pitfalls that I've seen with Razor templates, like the missing mshtml.dll error that can arise.
", "author": { "name": "MattMofDoom" }, "tags": [ "Razor", "MailKit", "Lurgle.Alerting", "Lurgle", "Liquid", "FluentEmail", "Email", "C#", "Apps" ], "date_published": "2021-06-13T00:05:35-07:00", "date_modified": "2022-01-22T16:08:04-08:00" }, { "id": "https://mattmofdoom.com/lurgle-logging-a-serilog-implementation-with-extras/", "url": "https://mattmofdoom.com/lurgle-logging-a-serilog-implementation-with-extras/", "title": "Lurgle.Logging - a standardised Serilog implementation with extra goodies!", "summary": "Logging is important Logging is a really important, oft-neglected, aspect of business applications. I can't state that enough. If you don't have good logging, you can't troubleshoot and debug problems, and you have little chance of seeing what's actually going on in your enterprise. In Structured Logging with Seq and Serilog, I gave an example of a Serilog implementation, which I updated for a couple of features and changes while writing the post. Unfortunately I didn't test those changes, so I freely acknowledge there were errors. When I first started with Serilog and Seq, I created an implementation that served the…", "content_html": "
Logging is a really important, oft-neglected, aspect of business applications. I can't state that enough. If you don't have good logging, you can't troubleshoot and debug problems, and you have little chance of seeing what's actually going on in your enterprise.
\nIn Structured Logging with Seq and Serilog, I gave an example of a Serilog implementation, which I updated for a couple of features and changes while writing the post. Unfortunately I didn't test those changes, so I freely acknowledge there were errors.
\nWhen I first started with Serilog and Seq, I created an implementation that served the purposes of the application that I was uplifting while also targeting a Seq POC - and it did give us a lot of benefits. A standardised implementation means that you don't have to think about logging unless you want to expand on features - you just focus on \"should I create a log for this?\" (in general - if you need to ask the question, the answer is probably yes).
\nThe problem arose when I moved on to the next application. I'd made a logging class I was happy with, so of course I carried it over - with a new copy of the class, updated to suit the application. And I did update it - I needed additional features over time to suit the application. Those new features didn't necessarily carry back to the original implementation. And then I moved to the next application, and ... well, you get the idea.
\nSo you can easily fall into a trap when it comes to logging implementation, even as a \"one man Dev team\". It gets worse when you bring other developers into the mix, because everyone has their own ideas and implementations.
\nFundamentally, if you want to drive a mandate of application logging to a logging server like Seq, and especially benefit from structured logging, you need some rules. For example-
\nand so on.
\nI mentioned in my Structured Logging post that I was creating a common log library. This affords the prospect of both keeping my own app logging implementations up to date if and when I add new features, as well as providing the ability for all developers in my workplace to use the same standardised implementation.
\nThe result is Lurgle.Logging.
\nLurgle is a nonsense word that I plucked out of thin air, because names are hard and not really that important in this case. It just needed a name.
\nSimply put - Lurgle.Logging is a standardised implementation of Serilog, with some extra features and capabilities. It implements Serilog and the Console, File, Event Log, and Seq sinks. It's not a replacement to Serilog, but it does provide a way to get up and running quite easily.
\nIn terms of sinks, people will typically implement those that they need and have access to. My biggest need is Seq logging, which of course I have access to, but I could add other sinks as or when needed.
\nTypically to get up and running with Serilog, you would have to install Serilog and any sinks that you want. You'd also install any enrichers desired, and you'd wire Serilog into your implementation in the way that you want. And then you'd do the same in your next app, and the next, and so on.
\nLurgle.Logging does away with that as a simple way to get a Serilog implementation up and running. It implements several sinks and enrichers:
\nIt also internally implements
\nThe overarching goal is that Lurgle logs are predictable with some good structured properties before you even contemplate what else you might want to send to the logs. Below is a sample of what Lurgle.Logging brings to the table, from Seq:
\n\nIf all your applications passed this level of detail, you might imagine how much easier it could be to troubleshoot and debug events. It also simplifies creating targeted signals and benefits alerts that you might want to send - for example to email, OpsGenie, Jira, or Service Now.
\nGenerally speaking, Serilog swallows errors that occur while logging. That's usually a \"good\" thing, because logging shouldn't bring your application down. It's not true in every case, though; Serilog will generate exceptions during initialisation in certain cases, and with Seq, you might never know why you're not receiving events.
\nLurgle.Logging aims to resolve that by first checking the configured sinks. A great example is the Event Log sink; it can try to create an event source, and if your process has sufficient permissions, it will do so. If not, it will still try to generate logs with that source, and when it fails - exception. Similarly with a misconfigured file log.
\nSo if an exception occurs during initialisation, Lurgle swallows that - but stores a FailureReason in the static Logging.LogFailures
dictionary. The logs that were successfully initialised are visible as a LogType in the Logging.EnabledLogs
list.
This provides an opportunity to review and handle logging problems in code. If no logging is available, for example, you might send an alert.
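\nA minimal sketch of acting on that after initialisation - I'm assuming LogFailures is keyed by LogType with a FailureReason value, so treat the exact shapes as indicative:
\nLogging.Init();

if (Logging.EnabledLogs.Count == 0)
{
    // Nothing could be initialised - logging is effectively down, so react accordingly
    Console.WriteLine(\"No log sinks initialised - send an alert instead!\");
}
else
{
    // Report any sinks that failed, using the sinks that did initialise
    foreach (var failure in Logging.LogFailures)
        Log.Level(LurgLevel.Warning)
            .Add(\"Log sink {LogType} failed to initialise: {Reason}\", failure.Key, failure.Value);
}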
\nIn my original implementation, the code always wrote an \"Initialising event sources\" event to any enabled log as part of the test. That's not strictly necessary, so I've made that an optional config that is disabled by default.
\nA lot of apps in our environment still use the App.Config approach to store configurations, especially for things like logging and alerts. I've implemented it in Lurgle.Logging, so if you specify the appropriate config in your app.config, it will happily read that in ... but I also haven't locked that in as a mandatory approach.
\nLurgle.Logging provides the Logging.SetConfig
method to configure your logging, with a constructor on the LoggingConfig class that allows you to pass any property. For example, in a console app I have a fallback that ensures logging to console will always be configured if no other config exists:
\nif (Logging.Config == null)
{
var logConfig = new LoggingConfig(appName: Common.AppName, appVersion: Common.AppVersion,
logType: new List<LogType> {LogType.Console}, logLevel: LurgLevel.Verbose,
logLevelConsole: LurgLevel.Verbose);
Logging.SetConfig(logConfig);
}
Lurgle.Logging provides defaults for many settings, so that you only have to supply the essential settings for the logging that you want.
\n\n
At its simplest, you can add Lurgle.Logging to a project and configure App.Config to your own needs.
\n\n
Lurgle.Logging has a prescriptive approach to Serilog implementation, so it compensates by exposing a lot of configurability. Here's the App.Config from the LurgleTest app that I use for ad hoc tests:
\n\n
\n<?xml version=\"1.0\" encoding=\"utf-8\"?>
<configuration>
<appSettings file=\"C:\\Users\\mattm\\source\\repos\\Lurgle.Logging\\LurgleTest\\secrets.config\">
<!-- Automatically add the calling method name as a property-->
<add key=\"EnableMethodNameProperty\" value=\"true\" />
<!-- Automatically add the source file path as a property-->
<add key=\"EnableSourceFileProperty\" value=\"true\" />
<!-- Automatically add the line number as a property-->
<add key=\"EnableLineNumberProperty\" value=\"true\" />
<!-- Automatically write an \"Initialising\" event during Init -->
<add key=\"LogWriteInit\" value=\"false\"/>
<!-- Meaningful name that will be used as the app name for logging purposes -->
<add key=\"AppName\" value=\"LurgleTest\" />
<!-- logType is a comma separated list that can target Console, File, EventLog, and Seq -->
<add key=\"LogType\" value=\"Console,File,EventLog,Seq\" />
<!-- Properties that should automatically be masked -->
<add key=\"LogMaskProperties\" value=\"Password,Email,Mechagodzilla,Testcommonmask,testcommonmask2\" />
<!-- Define the applicable policy for masking - None, MaskWithString, MaskLettersAndNumbers -->
<add key=\"LogMaskPolicy\" value=\"MaskWithString\" />
<!-- Mask pattern to use when masking properties -->
<add key=\"LogMaskPattern\" value=\"XXXXXX\" />
<!-- Define the mask character to use for non-digit values in masking if MaskLettersAndNumbers is used -->
<add key=\"LogMaskCharacter\" value=\"X\" />
<!-- Define the mask character to use for digit values in masking if MaskLettersAndNumbers is used -->
<add key=\"LogMaskDigit\" value=\"*\" />
<!-- Theme for the console - Literate, Grayscale, Colored, AnsiLiterate, AnsiGrayscale, AnsiCode -->
<add key=\"LogConsoleTheme\" value=\"Literate\" />
<!-- Location for the file log -->
<add key=\"LogFolder\" value=\"C:\\TEMP\\TEMP\\log\" />
<!-- Prefix for the file log name, hyphen and date will be appended -->
<add key=\"LogName\" value=\"Lurgle\" />
<!-- Extension for the file log name, defaults to .log-->
<add key=\"LogExtension\" value=\".log\" />
<!-- For the Windows Event Log, the event source name-->
<add key=\"LogEventSource\" value=\"LurgleTest\" />
<!-- For the Windows Event Log, the destination log (eg. Application) -->
<add key=\"LogEventName\" value=\"Application\" />
<!-- Format for log files - Text or Json. Json will not use LogFormatFile to format messages -->
<add key=\"LogFileType\" value=\"Json\" />
<!-- LogDays controls how many days log files will be retained, default is 31 -->
<add key=\"LogDays\" value=\"31\" />
<!-- LogFlush controls how many seconds before log file writes are flushed to disk -->
<add key=\"LogFlush\" value=\"5\" />
<!-- Allow the log file to be shared by multiple processes. Cannot be enabled with LogBuffered = true -->
<add key=\"LogShared\" value=\"false\" />
<!-- Allow the log file to be buffered. Cannot be used with LogShared = true -->
<add key=\"LogBuffered\" value=\"false\" />
<!-- Minimum LogLevel that can be written - Verbose, Debug, Information, Warning, Error, Fatal-->
<add key=\"LogLevel\" value=\"Verbose\" />
<!-- Set minimum log level for the individual sink - Verbose, Debug, Information, Warning, Error, Fatal -->
<add key=\"LogLevelConsole\" value=\"Verbose\" />
<add key=\"LogLevelFile\" value=\"Verbose\" />
<add key=\"LogLevelEvent\" value=\"Warning\" />
<add key=\"LogLevelSeq\" value=\"Verbose\" />
<!-- Seq server URL, eg. https://seq.domain.com -->
<add key=\"LogSeqServer\" value=\"\" />
<!-- Seq API key - if blank, no API key will be used-->
<add key=\"LogSeqApiKey\" value=\"\" />
<!-- Log formats -->
<add key=\"LogFormatConsole\" value=\"{Message}{NewLine}\" />
<add key=\"LogFormatEvent\" value=\"({ThreadId}) {Message}{NewLine}{NewLine}{Exception}\" />
<add key=\"LogFormatFile\" value=\"{Timestamp:yyyy-MM-dd HH:mm:ss}: ({ThreadId}) [{Level}] {Message}{NewLine}\" />
</appSettings>
</configuration>
I mentioned that Lurgle.Logging has a masking enricher, and you can see this in action in the LurgleTest app, where I modify the config while running to switch to different policies.
\n\n//Add masked properties for test
Logging.Close();
Logging.SetConfig(new LoggingConfig(Logging.Config, logMaskPolicy: MaskPolicy.MaskWithString));
Logging.AddCommonProperty(\"TestCommonMask\", \"mask1234\");
Log.Level().AddProperty(\"Mechagodzilla\", \"Godzilla\").AddProperty(\"password\", \"godzilla\")
.Add(\"Testing masking properties, send complaints to {Email:l}\", \"mechagodzilla@monster.rargh\");
//Switch masked properties to use MaskPolicy.MaskLettersAndNumbers, allow init event to be logged
Logging.Close();
Logging.SetConfig(new LoggingConfig(Logging.Config, logWriteInit: true, logMaskPolicy: MaskPolicy.MaskLettersAndNumbers));
Logging.AddCommonProperty(\"TestCommonMask2\", \"mask1234\");
Logging.Init();
Log.Level().AddProperty(\"Mechagodzilla\", \"Godzilla123\").AddProperty(\"password\", \"godzilla123\").Add(
\"Testing masking properties, send complaints to {Email:l}\", \"mechagodzilla123@monster.rargh\");
The result is shown below.
\nMaskPolicy.MaskWithString
masks any matching property with the configured string, which in LurgleTest is XXXXXX. MaskPolicy.MaskLettersAndNumbers
masks any matching property by replacing characters with a configured character for letters and another for digits.\n\n
\n
This implementation does not currently destructure properties, but it's an enhancement to contemplate for future updates.
\nYou might note from the above example that there is a Logging.AddCommonProperty
method, and an AddProperty
method within the logging call.
AddCommonProperty is more or less equivalent to Serilog's .Enrich.WithProperty
implementation, but can be called at any time to add a property that will be attached to all subsequent log events. You can also clear the common properties with Logging.ResetCommonProperties
.
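\nFor example:
\nLogging.AddCommonProperty(\"Environment\", \"Production\"); // attached to every subsequent log event
Log.Add(\"Deployment complete\"); // carries Environment=Production
Logging.ResetCommonProperties(); // clears all common properties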
AddProperty
is directly equivalent to Serilog's ForContext
implementation. Wrapping an implementation around Serilog is all well and good, but missing out on the ability to add a property to any log event would be poor - so in implementing Lurgle.Logging, I made a Fluent implementation that would allow properties to be added at will.
I mentioned that there is a Correlation ID pass through and generation scheme. By default, Lurgle.Logging will generate a correlation id and carry it through all logging for persistence - but you don't have to do it that way.
\nAt any point you can specify a correlation id - either one that you have passed through from elsewhere, or a newly generated one. This is part of the Level
method that accompanies a typical Lurgle.Logging implementation. For example:
Log.Level(LurgLevel.Debug, Logging.NewCorrelationId()).Add(\"Enabled Log List (Switch CorrelationId):\");
This creates a new correlation id as part of the debug log, and will carry it forward from there. You could also ensure that the correct correlation id is always passed by specifying it within Level
.
Log.Level(correlationId: corrId).Add(\"Stuff happened.\");
You can't specify it for the simple Log.Add
implementation that logs an information event, but a call to Logging.SetCorrelationId
or even Logging.NewCorrelationId
will allow for that.
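\nFor example - assuming SetCorrelationId takes the id as its argument (treat the exact signature as indicative):
\nLogging.SetCorrelationId(corrId); // assumed signature - set the id for subsequent events
Log.Add(\"Stuff happened - {Stuff}\", stuff); // the simple call now carries the correlation id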
One of the reasons for the existence of the Level
method in the Log
calls is to allow us to always catch the method that is calling the log. In Lurgle.Logging, I also added the source file path and line number. All of these can make debugging and troubleshooting much simpler - it's context that is invaluable to your logging.
Level allows you to specify the LurgLevel
(log level), correlation id, and optionally set Lurgle.Logging to prepend the calling method to your log entries. The default log level is LurgLevel.Information, so if you want to log an informational event, you can simply pass:
Log.Level().Add(\"Stuff happened - {Stuff}\", stuff);
or, because it is a simple Information event, you can bypass Level and just use the Log.Add
call, which is a static implementation of an Information log that also captures the method, source file path, and line number.
Log.Add(\"Stuff happened - {Stuff}\", stuff);
Of course, there's also an implementation for passing exceptions:
\nLog.Exception(ex).Add(\"Bad stuff happened with {Stuff} - {badstuff}\", stuff, ex.Message);
You might appreciate that there's an awful lot to Lurgle.Logging, and many ways that you could implement it, but I wanted to close off by sharing a practice that I've made standard for any console app that I create. I don't use Console.WriteLine
- I write logs, and configure the Literate console theme for Serilog.
This means that I can simultaneously output to logging while writing to the console, and I can see what a given log property is at a glance. For example the LurgleTest output:
\n\n
Much better than the typical gray text, and I'm simultaneously capturing those logs with all the extra structured properties to Seq and a Json log file.
Lurgle.Logging is available from Nuget and the code is on Github. I've been updating and improving it as I go. It's not necessarily a must-have, but it might give you a straightforward way to get logging into your applications, and perhaps even benefit from the extra goodies. Obviously you could still simply wire up your own Serilog implementation - and perhaps some ideas from Lurgle.Logging could be of use.
When we investigated OpsGenie, one feature I was attracted to was Heartbeat Monitoring. This is a feature that can help to answer a fundamental problem - \"How do you know if you have a major site or infrastructure outage?\"
\nThere are plenty of ways that you could go about this, especially if you have multiple sites with connections that could raise an alert if another site went down, but this is a relatively simple solution, if you have something to send a heartbeat to OpsGenie.
\nAs we were implementing OpsGenie as the nerve centre of our IT operations monitoring and alerting, and Seq was a major part of that design, I contemplated Seq as one of the options.
\nSeq as a structured logging server isn't necessarily the system that you would consider to perform an outbound heartbeat call - for starters, there were no existing apps or functionality that would do it. Secondly, is this really logging related?
\nI considered this and came up with the following conclusions:
\nThis was actually the first Seq app that I attempted, and I wanted to accomplish it quickly. Hence I initially forked Seq.Input.Healthcheck, which already had functionality to send an HTTP GET to a URL, and with some tweaks, adapted it into a heartbeat app, Seq.Input.OpsGenieHeartbeat. It was certainly quick to get up and running, and it worked well. It wasn't long before a major internet outage showed the value of the heartbeat.
\nI recently reviewed the app with an eye to adding proxy functionality, for sites that don't or can't have their server directly accessing the internet. I could do this with the Healthcheck fork, but I'd ultimately adapted an input designed to generically process one or more URLs and log statistics, into an app that performed a single function. The code that instantiates multiple tasks for each URL is well designed for its original purpose, with multiple classes for performing and reporting the health check.
\nAltering the code for a heartbeat meant I was creating somewhat of a Frankenstein's monster. In fact, OpsGenie's normal status code (HTTP 202) was logged as a Warning event. I could correct that, but I was really changing an app from its intended design, for what should be a simple application for a single purpose - just a timer-based check would do the job.
\nSo I decided to simplify. I took the fundamental design of the heartbeat app, and re-implemented it as Seq.App.OpsGenieHeartbeat:
\nAlong the way, I added a few diagnostic logs for startup, and set the status codes according to the OpsGenie response - HTTP 202 gets a Debug event log, anything else gets a Warning, and exceptions get an Error.
\nOne interesting side effect of this is that the typical elapsed time for a heartbeat dropped from 150 - 200ms with the Healthcheck fork to 0.2 - 0.5ms with the new app - variable, of course, depending on the many factors affecting internet speeds.
\nNeither result is particularly terrible, but it's quite a noticeable difference. The new Heartbeat app doesn't need to inspect the returned content and output stats; it only needs the status code to determine if a heartbeat was successful. I suspect that Healthcheck also instantiates an HttpClient each time (I haven't checked); as usual, I'm using Flurl.Http, with a cheerful little implementation that configures an HttpClient that is always reused - so the first call typically takes ~200ms or so, and then subsequent calls drop to the 0.2 - 0.5ms range.
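\nConceptually, that reuse pattern is just holding one client for the lifetime of the app. An illustrative Flurl.Http 3.x style sketch (not the app's actual code):
\nusing System.Threading.Tasks;
using Flurl.Http;

// Illustrative only - one FlurlClient (and therefore one underlying HttpClient)
// reused across heartbeat calls, so connections persist after the first request
public static class HeartbeatClient
{
    private static readonly IFlurlClient Client = new FlurlClient(\"https://api.opsgenie.com\");

    public static async Task<int> PingAsync(string heartbeatName, string apiKey)
    {
        var response = await Client
            .Request(\"v2\", \"heartbeats\", heartbeatName, \"ping\")
            .WithHeader(\"Authorization\", \"GenieKey \" + apiKey)
            .GetAsync();
        return response.StatusCode; // OpsGenie normally returns 202
    }
}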
\n\nThe result - an app that does everything needed for an OpsGenie Heartbeat, and nothing else. It outputs meaningful Seq logs for each heartbeat, including an AppName property (which I've tended to standardise on for all logs sent to Seq). These logs can in turn be monitored or alerted on. And ultimately, the desired effect is achieved: if your site, or indeed your Seq server, drops off the face of the earth, your OpsGenie instance can make certain you know at 2am.
\nYou can install Seq.App.OpsGenieHeartbeat to your Seq installation by specifying the package id.
\n\n ", "author": { "name": "MattMofDoom" }, "tags": [ "Structured logging", "Seq", "OpsGenie", "Heartbeat", "C#", "Apps" ], "date_published": "2021-06-09T16:05:42-07:00", "date_modified": "2022-01-22T15:54:46-08:00" }, { "id": "https://mattmofdoom.com/event-timeout-super-powered-event-monitoring-for-seq/", "url": "https://mattmofdoom.com/event-timeout-super-powered-event-monitoring-for-seq/", "title": "Event Timeout - A super powered event monitoring app for Seq", "summary": "\"Something hasn't happened!\" My workplace has quite a number of disparate applications and scripts that drive critical SLAs. Historically, these were managed by exception and emailing errors to various mailboxes. This is a fairly poor approach to managing SLAs, since it is reliant on a human factor - someone has to see the email, understand its context, and action it. If I send an email \"SFTP Transfer failed\" and don't provide enough information in the email for someone to consistently recognise what the service is, that's going to lead to confusion and a likely SLA breach. Similarly, while a team…", "content_html": "
My workplace has quite a number of disparate applications and scripts that drive critical SLAs. Historically, these were managed by exception and emailing errors to various mailboxes. This is a fairly poor approach to managing SLAs, since it is reliant on a human factor - someone has to see the email, understand its context, and action it.
\nIf I send an email \"SFTP Transfer failed\" and don't provide enough information in the email for someone to consistently recognise what the service is, that's going to lead to confusion and a likely SLA breach. Similarly, while a team member who's been with the organisation for a while might know the context, what if a new person picks up that email? What if that long-standing team member leaves and there's no-one to explain what it means? Did they document it? Prediction - we will probably get an SLA breach.
\nSo it's been my task, and privilege, to design, implement, and drive adoption of a standard monitoring and alerting infrastructure. Seq and OpsGenie are central pieces of this infrastructure, allowing us to fully automate our critical monitoring and alerting process.
\nTo accomplish this, we have had to mandate and drive Seq as our application logging server. I'm a big fan of Seq as a cost effective solution for structured logging, and the more application logs we integrate into it, the better we can monitor, troubleshoot, and debug problems.
\nThere is a lot to consider with application logging, but for the purposes of this post, let's consider - how do we detect if an event has not happened in time?
\nAt its core, an application logging server is only as good as the logs it receives. It doesn't \"know\" anything inherently about what you're sending to it. It simply ingests your logs. Seq has some outstanding capabilities that certainly can help with this - for example, I could (and do) set a dashboard alert that can detect that no logs have been received in the past 15 minutes, and alert that there may be an outage. That's really useful ... but what if I need to look for a specific log event, and alert when I don't see it in time?
\nThe answer is that we need to output a log event that we can react to, and we can do that with a Seq app.
\nSeq has a fantastic ability to add Seq apps written by Datalust and the Seq community. Generally speaking, the approach lends itself to open source extension of Seq capabilities. Apps are installed via Nuget, and you can use your own private Nuget server for your own apps - but of course, making them available as a public Nuget feed benefits the community by making your enhancements available.
\nThe Seq app approach is robust and well considered - you install an app, and can then configure as many instances of that app as you need, to meet your various purposes. For example, we make heavy use of the Json Archive app to create long term archives of various signals. The apps themselves use a small amount of RAM within the Seq instance.
\nIn terms of my problem - there certainly were apps for timeouts such as Seq.App.EventTimeout and Seq.App.DeadMansSwitch. These are \"tick/tock\" handlers, which essentially arm and disarm based on incoming events. They don't get very specific about which events and when. However, they do output events to Seq when a timeout occurs - that, at least, is what we needed to do.
\nI had a clear set of requirements in my head for a timeout app.
\nAll of this could be readily implemented with the Seq.Apps API, and I started my journey in creating a Seq app from scratch.
\nAlong the way, I added some bonus goals as a result of further analysis and testing; teasing out additional requirements that enhanced our capabilities.
\nThe result is an app that I imaginatively called Event Timeout ... because that's what it does. You can install it in your Seq instance by specifying the package id, Seq.App.EventTimeout.
\nIt is a monster of an app that now underscores a majority of critical SLAs. If a process doesn't execute in the specified time, it raises an Error log event. We create signals around that, based on the well-defined properties such as AppName, and which apps such as Seq.App.OpsGenie can monitor and alert on. Some of the features that I added to Event Timeout drove me to add enhancements to this app as well, which made our alerting picture even more comprehensive.
\nWhen I say it's a monster of an app - it really is. I wouldn't be surprised to find that it has the most configuration items of any Seq app. In part, this is because of the multiple property match - Seq.Apps doesn't allow configuration of a Dictionary, which would be a useful way to express a config like this - simply provide a dynamic table with key and value expressions. I'd certainly put that as a \"nice to have\".
\nI've put example configs in the Event Timeout repository's readme, but below is what a config looks like.
\nThis config would look for events occurring between 10am and 2pm on a Sunday, if the day of month is the 6th, and output an error every 60 seconds if it does not see an event matching:
\nThis is really specific, which makes the likelihood of a false positive very low. And that's the power of Event Timeout. I can use the properties from my apps that send to Seq, or an input like Seq.Input.MSSql that exposes multiple columns as properties, to make a positive match in a given timeframe, or raise a timeout alert.
\n\n\n
Event Timeout is forward looking and uses UTC time internally to calculate the \"next\" start event. So while writing this, I created the above instance and set the start time to 1 minute in the future - making sure to enable diagnostic logging so I could show you the 'magic' behind the scenes.
\n\n\n
In short - we wind up with a usable Error. I've expanded the error event to show the properties - you can readily create a signal on the AppInstanceId or AppName, or the AppId if you want all instances. From there - it's just up to an app that monitors signals and sends it somewhere useful. We've used OpsGenie, Jira, Email+, and Teams alerts for various reasons.
\nIt works. It gets people out of bed when they need to attend to a problem ... and the specificity of the configurations means that the alerts are always correct.
\nYou might note that I didn't configure the test instance with the Holidays API, because I didn't need it for that. Below is a sample of a configuration for the API that we actually use.
\n\n\n
This is really powerful when a process doesn't run on public holidays - we do have a few - or when an alert simply doesn't need to be raised on those days.
\nUsing AbstractAPI Holidays makes it easy. I use Flurl.Http for retrieving and parsing the Json feed - it's not essential, but I've used the library for a long time and I like the implementation.
\nWhile I absolutely support AbstractAPI's model, which is generous, and recommend subscribing to their paid plans - I wanted to be sure not to lock anyone in to a subscription to benefit from this feature. The intent isn't to cheat AbstractAPI out of money, but I needed to be able to provide an API while giving people choice. At the very least, it's a chance to evaluate their API before committing to a subscription.
\nThe free tier provides 1000 requests per month, and 1 request per second. There are some restrictions on the free plan - for example, you can only query the current year - but it does perfectly match our needs. In fact, the only \"problem\" is the 1 request per second limitation.
\nWe can ensure that we stay within the 1000 requests per month simply by only checking for public holidays once per day, per instance. That's easy - at one request per day, an instance uses at most 31 requests per month - and unless you have a huge number of Event Timeout instances, you'll stay under that limit.
\nIf you configure multiple instances of Event Timeout with public holiday detection, though, you would likely run afoul of the 1 request per second. I accounted for that by allowing for a retry. If an error occurs when using the free plan, it's most likely that this is the API requests per second limit being reached, so we retry up to 10 times with a 10 second delay between each attempt.
\nThat works, and works well, and means we have a fairly robust effort to ensure that public holidays are evaluated. We could still hit a limit, for example if there's more than 10 instances. We could make this yet another configurable item quite easily, but a reasonable limit of 10 seemed appropriate (and perhaps encouraging people to give AbstractAPI money! 😀)
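\nIn code terms, the retry looks something like the sketch below. This is illustrative rather than the app's actual implementation; the Holiday type and the AbstractAPI query parameters are assumptions on my part.
\nusing System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Flurl;
using Flurl.Http;

// Assumed minimal shape of an AbstractAPI holiday record
public class Holiday
{
    public string Name { get; set; }
    public string Type { get; set; }
    public string Date { get; set; }
}

public static class HolidayRetry
{
    // Retry up to 10 times with a 10 second delay, on the basis that a failure
    // on the free plan is most likely the 1 request per second limit
    public static async Task<List<Holiday>> GetHolidaysAsync(string apiKey, string country, DateTime date)
    {
        const int maxRetries = 10;
        for (var attempt = 1; attempt <= maxRetries; attempt++)
        {
            try
            {
                return await \"https://holidays.abstractapi.com/v1/\"
                    .SetQueryParams(new { api_key = apiKey, country, year = date.Year, month = date.Month, day = date.Day })
                    .GetJsonAsync<List<Holiday>>();
            }
            catch (FlurlHttpException) when (attempt < maxRetries)
            {
                await Task.Delay(TimeSpan.FromSeconds(10));
            }
        }
        return new List<Holiday>();
    }
}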
\nWhile Event Timeout uses UTC timing internally, we handle the retrieval of public holidays as a local datetime - at midnight, retrieve today's public holiday list, and filter it based on the holiday type and locale settings. If we're currently in the middle of a \"showtime\" - the time between a start and end time - it will be retrieved when that showtime has ended. As noted, Event Timeout is forward looking, so we only evaluate public holidays against the next start time.
\nThere's a lot of thought and effort behind Event Timeout, and I'm quite proud of how it turned out. It certainly meets our specific needs, and if a problem is defined, we can readily map its existing features to the solution. I certainly hope that others can make use of it too!
", "author": { "name": "MattMofDoom" }, "tags": [ "Teams", "Structured logging", "Seq", "Public Holidays", "OpsGenie", "Jira", "Event Timeout", "Email", "C#", "Apps" ], "date_published": "2021-06-05T20:16:03-07:00", "date_modified": "2022-01-22T16:07:42-08:00" }, { "id": "https://mattmofdoom.com/setting-jira-priority-and-labels-with-opsgenie-edge-connector/", "url": "https://mattmofdoom.com/setting-jira-priority-and-labels-with-opsgenie-edge-connector/", "title": "Setting Jira Priority and Labels with OpsGenie Edge Connector", "summary": "The default OpsGenie integration with Jira Service Management has a puzzling omission when it comes to their OpsGenie Edge Connector integration - it doesn't send Priority or Tags. This means that tags won't pass through as labels, and the priority will be at the default. My ideal is that the Jira priority maps directly to the OpsGenie priority, especially when SLAs are set against those priorities. Tags and labels power automation, reporting, and documentation in our environment - so passing them through is a must. Now obviously Atlassian is pivoting to retire their server products and offer \"cloud only\" by…", "content_html": "\n
The default OpsGenie integration with Jira Service Management has a puzzling omission when it comes to their OpsGenie Edge Connector integration - it doesn't send Priority or Tags. This means that tags won't pass through as labels, and the priority will be at the default.
\nMy ideal is that the Jira priority maps directly to the OpsGenie priority, especially when SLAs are set against those priorities. Tags and labels power automation, reporting, and documentation in our environment - so passing them through is a must.
\nNow obviously Atlassian is pivoting to retire their server products and offer \"cloud only\" by 2024, while working to more closely integrate OpsGenie with their Jira Service Management offering, but for those still using an on-premise server and OpsGenie, this might be of help.
\nI haven't had cause to do a lot with Python before, but I modified the actionExecutor.py to perform a query against the OpsGenie Alerts API before creating the Jira ticket. It retrieves the priority and tags, maps OpsGenie priorities to the standard Jira priorities (eg. P1 = Highest), and adds the tags as Jira labels.
\nI also add a static \"OpsGenie\" label, because I have some Jira Automation rules that work against that label to fire off webhooks back to OpsGenie.
\nIn our environment, \"P5\" priorities don't exist - so I map a P5 to Low rather than Lowest. That's relatively trivial to change in the script if you need to.
\nThe amended script is below, or you can grab it from my Github fork.
\nimport argparse\nimport json\nimport logging\nimport sys\n\nimport requests\nfrom requests.auth import HTTPBasicAuth\n\nparser = argparse.ArgumentParser()\nparser.add_argument('-payload', '--queuePayload', help='Payload from queue', required=True)\nparser.add_argument('-apiKey', '--apiKey', help='The apiKey of the integration', required=True)\nparser.add_argument('-opsgenieUrl', '--opsgenieUrl', help='The url', required=True)\nparser.add_argument('-logLevel', '--logLevel', help='Level of log', required=True)\nparser.add_argument('-url', '--url', help='URL', required=False)\nparser.add_argument('-username', '--username', help='Username', required=False)\nparser.add_argument('-password', '--password', help='Password', required=False)\nparser.add_argument('-key', '--key', help='Project key', required=False)\nparser.add_argument('-issueTypeName', '--issueTypeName', help='Issue Type', required=False)\nargs = vars(parser.parse_args())\n\nlogging.basicConfig(stream=sys.stdout, level=args['logLevel'])\n\n\ndef parse_field(key, mandatory):\n variable = queue_message.get(key)\n if not variable:\n variable = args.get(key)\n if mandatory and not variable:\n logging.error(LOG_PREFIX + \" Skipping action, Mandatory conf item '\" + key +\n \"' is missing. Check your configuration file.\")\n raise ValueError(LOG_PREFIX + \" Skipping action, Mandatory conf item '\" + key +\n \"' is missing. Check your configuration file.\")\n return variable\n\n\ndef parse_timeout():\n parsed_timeout = args.get('http.timeout')\n if not parsed_timeout:\n return 30000\n return int(parsed_timeout)\n\n\ndef get_transition_id(request_headers, jira_url, transition_name, token):\n transition_id = str()\n response = requests.get(jira_url, None, headers=request_headers, auth=token, timeout=timeout)\n body = response.json()\n if body != {} and response.status_code < 299:\n transition_list = body[\"transitions\"]\n for transition in transition_list:\n to = transition['to']\n if transition_name == to['name']:\n transition_id = transition['id']\n logging.info(LOG_PREFIX + \" Successfully executed at Jira Service Desk\")\n logging.debug(\n LOG_PREFIX + \" Jira Service Desk response: \" + str(response.status_code) + \" \" + str(response.content))\n else:\n logging.error(\n LOG_PREFIX + \" Could not execute at Jira Service Desk; response: \" + str(\n response.content) + \" status code: \" + str(response.status_code))\n if transition_id:\n return transition_id\n else:\n logging.debug(LOG_PREFIX + \" Transition id is empty\")\n\ndef get_alert(opsgenieUrl, api_Key, alert_id):\n headers = {\n \"Content-Type\": \"application/json\",\n \"Accept-Language\": \"application/json\",\n \"Authorization\": \"GenieKey \" + args.get('apiKey')\n }\n \n alert_api_url = opsgenieUrl + \"/v2/alerts/\" + alert_id + '?identifierType=id'\n alert_response = requests.get(alert_api_url, headers=headers, timeout=timeout)\n if alert_response.status_code < 299:\n logging.info(LOG_PREFIX + \" Successfully requested alert from Opsgenie\")\n logging.debug(\n LOG_PREFIX + \"OpsGenie response: \" + str(alert_response.content) + \" \" + str(\n alert_response.status_code))\n return alert_response.json()\n else:\n logging.warning(\n LOG_PREFIX + \" Could not execute at Opsgenie; response: \" + str(\n alert_response.content) + \" status code: \" + str(alert_response.status_code))\n\n\ndef get_tags(alert):\n return alert['data']['tags']\n\ndef get_priority(alert):\n priorities = {\"P1\":\"Highest\",\"P2\":\"High\",\"P3\":\"Medium\",\"P4\":\"Low\",\"P5\":\"Low\"}\n priorityMap 
= \"{\" + alert[\"data\"][\"priority\"] + \"}\"\n logging.debug(priorityMap + ' result: ' + priorityMap.format_map(priorities))\n return priorityMap.format_map(priorities)\n\ndef main():\n global LOG_PREFIX\n global queue_message\n global timeout\n\n queue_message_string = args['queuePayload']\n queue_message = json.loads(queue_message_string)\n\n logging.debug(str(queue_message))\n\n alert_id = queue_message[\"alert\"][\"alertId\"]\n mapped_action = queue_message[\"mappedActionV2\"][\"name\"]\n\n LOG_PREFIX = \"[\" + mapped_action + \"]\"\n logging.info(\"Will execute \" + mapped_action + \" for alertId \" + alert_id)\n\n timeout = parse_timeout()\n url = parse_field('url', True)\n username = parse_field('username', True)\n password = parse_field('password', True)\n project_key = parse_field('key', False)\n issue_type_name = parse_field('issueTypeName', False)\n\n issue_key = queue_message.get(\"IssueKey\")\n\n logging.debug(\"Url: \" + str(url))\n logging.debug(\"Username: \" + str(username))\n logging.debug(\"Project Key: \" + str(project_key))\n logging.debug(\"Issue Type: \" + str(issue_type_name))\n logging.debug(\"Issue Key: \" + str(issue_key))\n\n content_params = dict()\n\n token = HTTPBasicAuth(username, password)\n headers = {\n \"Content-Type\": \"application/json\",\n \"Accept-Language\": \"application/json\"\n }\n\n result_url = url + \"/rest/api/2/issue\"\n\n if mapped_action == \"addComment\":\n content_params = {\n \"body\": queue_message.get('body')\n }\n result_url += \"/\" + str(issue_key) + \"/comment\"\n elif mapped_action == \"createIssue\":\n toLabel = queue_message.get(\"alias\")\n alert = get_alert(args.get('opsgenieUrl'), args.get('apiKey'), alert_id)\n priority = get_priority(alert)\n labels = get_tags(alert)\n labels.append(toLabel)\n labels.append(\"OpsGenie\")\n content_params = {\n \"fields\": {\n \"project\": {\n \"key\": project_key\n },\n \"issuetype\": {\n \"name\": issue_type_name\n },\n \"summary\": queue_message.get(\"summary\"),\n \"description\": queue_message.get(\"description\"),\n \"priority\": { \"name\": priority },\n \"labels\": labels\n }\n }\n elif mapped_action == \"resolveIssue\":\n result_url += \"/\" + str(issue_key) + \"/transitions\"\n content_params = {\n \"transition\": {\n \"id\": get_transition_id(headers, result_url, \"Resolved\", token)\n },\n \"fields\": {\n \"resolution\": {\n \"name\": \"Done\"\n }\n }\n }\n\n logging.debug(str(content_params)) \n response = requests.post(result_url, data=json.dumps(content_params), headers=headers, auth=token, timeout=timeout)\n if response.status_code < 299:\n logging.info(\"Successfully executed at Jira Service Desk\")\n if mapped_action == \"createIssue\":\n if response.json():\n issue_key_from_response = response.json()['key']\n if issue_key_from_response:\n alert_api_url = args.get('opsgenieUrl') + \"/v2/alerts/\" + alert_id + \"/details\"\n content = {\n \"details\":\n {\n \"issueKey\": issue_key_from_response\n }\n }\n headers = {\n \"Content-Type\": \"application/json\",\n \"Accept-Language\": \"application/json\",\n \"Authorization\": \"GenieKey \" + args.get('apiKey')\n }\n alert_response = requests.post(alert_api_url,\n data=json.dumps(content), headers=headers, timeout=timeout)\n if alert_response.status_code < 299:\n logging.info(LOG_PREFIX + \" Successfully sent to Opsgenie\")\n logging.debug(\n LOG_PREFIX + \" Jira Service Desk response: \" + str(alert_response.content) + \" \" + str(\n alert_response.status_code))\n else:\n logging.warning(\n LOG_PREFIX + \" Could not execute 
at Opsgenie; response: \" + str(\n alert_response.content) + \" status code: \" + str(alert_response.status_code))\n else:\n logging.warning(\n LOG_PREFIX + \" Jira Service Desk response is empty\")\n else:\n logging.warning(\n LOG_PREFIX + \" Could not execute at Jira Service Desk; response: \" + str(\n response.content) + \" status code: \" + str(response.status_code))\n\n\nif __name__ == '__main__':\n main()\n
",
"author": {
"name": "MattMofDoom"
},
"tags": [
"Python",
"OpsGenie",
"OEC",
"Jira"
],
"date_published": "2021-05-23T21:51:42-07:00",
"date_modified": "2021-05-23T23:38:12-07:00"
},
{
"id": "https://mattmofdoom.com/structured-logging-with-seq-and-serilog/",
"url": "https://mattmofdoom.com/structured-logging-with-seq-and-serilog/",
"title": "Structured Logging with Seq and Serilog",
"summary": "A few years back, I picked up an old \"unloved\" business application for document handling, and brought it into the modern era. I completed some work on adding automated OCR, running as a service, and then started to enhance it well beyond its original capabilities, such as moving a manual letter creation and printing process into the application, and creating a fully automated letter generation and batch printing process, with support for use of a letter folding machine. The business rather enthusiastically responded to my efforts, and raised request upon request for enhancements and new functionality. I soon found that…",
"content_html": "A few years back, I picked up an old \"unloved\" business application for document handling, and brought it into the modern era. I completed some work on adding automated OCR, running as a service, and then started to enhance it well beyond its original capabilities, such as moving a manual letter creation and printing process into the application, and creating a fully automated letter generation and batch printing process, with support for use of a letter folding machine.
\nThe business rather enthusiastically responded to my efforts, and raised request upon request for enhancements and new functionality. I soon found that the application had been uplifted from a neglected web application to an all-singing, all-dancing service that the business relied upon.
\nI was working on the next iteration - uplifting the old ASP.NET 2.0 Web Forms web interface to a modern web client. The old web app used to directly manipulate files, but as I now had a Windows service that did all the \"heavy lifting\", I envisaged that we could pivot to an API workflow-based approach, where the API would accept the request, the service would do the work, and the output could then be retrieved using the API. Since we were working with OCR, file conversion, and other 'expensive' operations, I wanted a fast and responsive experience for clients that didn't need to wait for completion.
\nAs I already had many moving parts, and was about to open up even more, I wanted the logging to be rock solid. The application logging story in our various apps was relatively poor - where it existed, it was stuck in obscure text log files throughout the environment. Troubleshooting and debugging was difficult at best.
\nA colleague at the time directed me towards Seq, a structured logging server that is relatively inexpensive. In fact, a free single user license is built in, which makes it insanely easy to get up and running and start working on getting your logs into Seq. Topping it off, Serilog - \"simple .NET logging with fully-structured events\" - is an open source library originally created by the developers of Seq. It makes logging to multiple log sinks really simple, and - of course - one of those sinks can be Seq.
\nI was busily integrating email alerting as well, and was rather taken with the idea of fluent code. I was working with libraries like Flurl (and Flurl.Http) and FluentEmail, and I liked the idea of making my logging read in a similar way, integrating with the rest of my code in the common library.
\nI implemented logging in a class that wrapped Serilog into a somewhat 'Fluent-like' implementation - it doesn't chain so it's not strictly fluent - that allowed me to make calls like:
\nLogging.Log.Level().Add(Logging.appLogServiceState, Config.serviceName, serviceStarted, nlbStatus, Common.NLBStart);
Logging.Log.Level(LogStatus.Error).Add(Logging.appLogClusterNotFound, Config.targetCluster);
Logging.Log.Exception(ex).Add(Logging.appLogError, ex.Message);
This approach allowed me to do a few things, such as enforce setting a log level (if not specified, it's Information by default) or reflect an exception, capture the method name that sent the log entry, and control how/if the method name is output to the logs.
\nThis was useful for my implementation, which allowed output to a rolling text log, Windows event log, console (using colored themes), and Seq. One or all methods could be enabled - so a console app might use console and text log, while a service or web interface might use text log, Windows event log, and Seq. Each method works within its own limitations - so the console and text log will output just what the message template says, while Windows event log and especially Seq are capable of exposing more properties.
\nThe code looks to make sure that each sink is appropriately configured and can be instantiated when the static Logging class is first used. Equally - because I have a standard log interface - I can add a standard set of Serilog Enrichers which automatically capture environment information. That means that all we need to worry about is sending our log entries, and I'll get useful and meaningful logs in Seq, as shown below (I've blanked out a couple of lines).
I have been using the code for a number of years, but while writing this, I realised that I wasn't actually exposing the method and line number to Seq as properties... so I've added that and will use this enhancement as I move logging into its own common library. One of the faults of this implementation has been the need to 're-implement' it for subsequent applications. As responsibility for ongoing development has shifted, and I've moved on to the next thing, my improvements get captured in that \"next thing\" and rarely backported to the previous application. A common library would help with that, and also address issues such as updating of dependencies - I noticed while writing this that the Serilog Rolling File sink has been deprecated in favour of the Serilog File sink that now has that functionality.
\nThis version of the logging class has its message templates embedded as readonly strings, but I often use resource files to store the template strings. I like not having strings throughout the code wherever possible.
\nNaturally there are many ways that you could implement Serilog, but this gives me a consistent implementation where logging, once configured, is as simple as a single method call that encapsulates all that I want to include.
\nWe have come a long way on our journey with Seq, and I'll post more on that later - but Seq is nothing without logging, and this is one example of how to make that happen.
\nNote - There's a couple of errors in the below code because I updated it on the fly without testing, but I've moved on to my common logging library effort ... I'll post about Lurgle.Logging later.
\nusing System;\nusing System.IO;\nusing System.Threading;\nusing System.Collections.Generic;\nusing System.Runtime.CompilerServices;\nusing Serilog;\nusing Serilog.Core;\nusing Serilog.Events;\nusing Serilog.Sinks.SystemConsole.Themes;\n\n\nnamespace NLBManager\n{\n /// <summary>\n /// Supported log types\n /// </summary>\n public enum LogType\n {\n Console = 1,\n File = 2,\n EventLog = 4,\n Seq = 8,\n All = -1\n }\n\n /// <summary>\n /// Outlines the supported log levels. Abstracts Serilog's <see cref=\"LogEventLevel\"/> so that it does not need to be referenced outside of the <see cref=\"Logging\"/> class.\n /// </summary>\n public enum LogStatus\n {\n Fatal = LogEventLevel.Fatal,\n Error = LogEventLevel.Error,\n Warning = LogEventLevel.Warning,\n Information = LogEventLevel.Information,\n Debug = LogEventLevel.Debug,\n Verbose = LogEventLevel.Verbose\n\n }\n\n /// <summary>\n /// Logging class\n /// </summary>\n public static class Logging\n {\n private static Logger logWriter = null;\n private static readonly ReaderWriterLockSlim readWriteLock = new ReaderWriterLockSlim();\n\n public static readonly string dateTimeFormat = \"dd-MMyyyy H:mm:ss\";\n public static readonly string dateFormat = \"dd-MM-yyyy\";\n public static readonly string logFormatEvent = \"({ThreadId}) {Message}{NewLine}{NewLine}{Exception}\";\n public static readonly string logFormatFile = \"{Timestamp:dd-MM-yyyy HH:mm:ss}: ({ThreadId}) [{Level}] {Message}{NewLine}\";\n public static readonly string logFormatMessageOnly = \"{Message:l}{NewLine}\";\n public static readonly string logInitialise = \"Initialising event sources ...\";\n public static readonly string logApplication = \"Application\";\n public static readonly string logDate = \"{Date}\";\n public static readonly string fileNameLog = \"{0}-{1}.log\";\n public static readonly string logText = \"[{0}] {1}\";\n\n public static readonly string appLogServiceName = \"Configured Service to watch: {ServiceName:l}, Running: {ServiceRunning}\";\n public static readonly string appLogClusterName = \"Configured Cluster to control: {ClusterName:l}, Cluster IP: {ClusterIp:l}, Current State: {NLBStatus}\";\n public static readonly string appLogServiceState = \"Service: {ServiceName:l}, Started: {ServiceStarted}, NLB Status: {NLBStatus}, Perform NLB Action: {NLBAction:l}\";\n public static readonly string appLogServiceNotFound = \"Service not found: {ServiceName:l}, Stopping ...\";\n public static readonly string appLogClusterNotFound = \"Target cluster not found: {ClusterName:l}, Stopping ...\";\n public static readonly string appLogClusterIpNotFound = \"Target cluster IP address not found: {ClusterName:l}, Stopping ...\";\n public static readonly string appLogError = \"Error: Exception {Exception:l}\";\n public static readonly string appLogStart = \"{Service:l} v{Version:l} Started\";\n public static readonly string appLogStop = \"{Service:l} v{Version:l} Stopped\";\n public static readonly string appLogStopError = \"{Service:l} v{Version:l} Stopped (Error: {Error})\";\n public static readonly string appLogUnexpectedStatusCode = \"Unexpected NLB status code: {StatusCode:l}\";\n\n static Logging()\n {\n //Initialise email and loggings\n if (Config.doLog)\n Log.Init();\n }\n\n\n\n public static SystemConsoleTheme getConsoleTheme { get; } = new SystemConsoleTheme(\n new Dictionary<ConsoleThemeStyle, SystemConsoleThemeStyle>\n {\n [ConsoleThemeStyle.Text] = new SystemConsoleThemeStyle { Foreground = ConsoleColor.White },\n [ConsoleThemeStyle.SecondaryText] = new 
SystemConsoleThemeStyle { Foreground = ConsoleColor.Gray },\n [ConsoleThemeStyle.TertiaryText] = new SystemConsoleThemeStyle { Foreground = ConsoleColor.White },\n [ConsoleThemeStyle.Invalid] = new SystemConsoleThemeStyle { Foreground = ConsoleColor.Yellow },\n [ConsoleThemeStyle.Null] = new SystemConsoleThemeStyle { Foreground = ConsoleColor.Red },\n [ConsoleThemeStyle.Name] = new SystemConsoleThemeStyle { Foreground = ConsoleColor.Blue },\n [ConsoleThemeStyle.String] = new SystemConsoleThemeStyle { Foreground = ConsoleColor.Gray },\n [ConsoleThemeStyle.Number] = new SystemConsoleThemeStyle { Foreground = ConsoleColor.Cyan },\n [ConsoleThemeStyle.Boolean] = new SystemConsoleThemeStyle { Foreground = ConsoleColor.Green },\n [ConsoleThemeStyle.Scalar] = new SystemConsoleThemeStyle { Foreground = ConsoleColor.Magenta },\n [ConsoleThemeStyle.LevelVerbose] = new SystemConsoleThemeStyle { Foreground = ConsoleColor.Gray },\n [ConsoleThemeStyle.LevelDebug] = new SystemConsoleThemeStyle { Foreground = ConsoleColor.Gray },\n [ConsoleThemeStyle.LevelInformation] = new SystemConsoleThemeStyle { Foreground = ConsoleColor.White },\n [ConsoleThemeStyle.LevelWarning] = new SystemConsoleThemeStyle { Foreground = ConsoleColor.Yellow },\n [ConsoleThemeStyle.LevelError] = new SystemConsoleThemeStyle { Foreground = ConsoleColor.White, Background = ConsoleColor.Red },\n [ConsoleThemeStyle.LevelFatal] = new SystemConsoleThemeStyle { Foreground = ConsoleColor.White, Background = ConsoleColor.Red }\n });\n\n /// <summary>\n /// Provides an interface to log new events. If the application uses logging, you should call a <see cref=\"Close\"/> on shutdown to flush and dispose the logWriter.\n /// </summary>\n public class Log : Attribute\n {\n private bool isMethod { get; set; }\n private string methodName { get; set; }\n private int lineNumber { get; set; }\n private LogStatus logLevel { get; set; }\n private Exception errorInfo { get; set; }\n\n public Log()\n {\n isMethod = true;\n methodName = string.Empty;\n logLevel = LogStatus.Information;\n errorInfo = null;\n }\n\n /// <summary>\n /// Flush logs and dispose the logging interface. Used for application shutdown. <para/>\n /// \n /// If this is called and then an attempt is made to write to the log, the log will be automatically initialised again.\n /// </summary>\n public static void Close()\n {\n if (logWriter != null)\n logWriter.Dispose();\n\n logWriter = null;\n }\n\n private static LoggerConfiguration getConfig()\n {\n return new LoggerConfiguration()\n .Enrich.FromLogContext()\n .Enrich.WithThreadId()\n .Enrich.WithEnvironmentUserName()\n .Enrich.WithMachineName()\n .Enrich.WithProcessId()\n .Enrich.WithProcessName()\n .Enrich.WithProperty(\"AppName\", Common.appName)\n .Enrich.WithProperty(\"ServiceName\", Config.serviceName)\n .Enrich.WithProperty(\"TargetCluster\", Config.targetCluster);\n }\n\n /// <summary>\n /// Initialise the logging interface. Checks that the configured log types are available.\n /// </summary>\n public static void Init()\n {\n LoggerConfiguration logConfig = null;\n bool manageSource = true;\n\n string logFolder = string.Empty;\n string fileName = string.Empty;\n\n List<LogType> logTypes = Config.logType;\n\n //If event log is enabled, test that we can create sources and/or write logs\n if (logTypes.Contains(LogType.EventLog))\n {\n try\n {\n //First test whether we can create new event source .. 
should also work if the source exists\n LoggerConfiguration testConfig = getConfig()\n .WriteTo.EventLog(Common.appName, logApplication, \".\", manageSource, logFormatEvent, null, LogEventLevel.Verbose);\n\n Logger testWriter = testConfig.CreateLogger();\n testWriter.Information(logInitialise);\n testWriter.Dispose();\n }\n catch\n {\n manageSource = false;\n }\n\n //If we can't manage the source, can we still write an event log entry?\n if (!manageSource)\n try\n {\n LoggerConfiguration testConfig = getConfig()\n .WriteTo.EventLog(Common.appName, logApplication, \".\", manageSource, logFormatEvent, null, LogEventLevel.Verbose);\n\n Logger testWriter = testConfig.CreateLogger();\n testWriter.Information(logInitialise);\n testWriter.Dispose();\n }\n catch\n {\n //Remove event log from the usable types and send an alert\n logTypes.Remove(LogType.EventLog);\n }\n }\n\n\n //If file is enabled, test that folder and file access is okay\n if (logTypes.Contains(LogType.File))\n {\n bool isLog = true;\n\n if (string.IsNullOrEmpty(logFolder) || !Directory.Exists(logFolder))\n {\n if (!string.IsNullOrEmpty(logFolder))\n try\n {\n Directory.CreateDirectory(logFolder);\n }\n catch (Exception ex)\n {\n isLog = false;\n }\n }\n else\n try\n {\n fileName = Path.Combine(logFolder, string.Format(fileNameLog, Config.logName, logDate));\n\n LoggerConfiguration testConfig = getConfig()\n .WriteTo.RollingFile(fileName, Config.logLevelFile, logFormatFile, retainedFileCountLimit: Config.logMonths * 31,\n shared: true, buffered: false, flushToDiskInterval: new TimeSpan(0, 0, 1));\n\n Logger testWriter = testConfig.CreateLogger();\n testWriter.Information(logInitialise);\n testWriter.Dispose();\n }\n catch\n {\n isLog = false;\n }\n\n\n if (!isLog)\n {\n //Remove file from the usable types and send an alert\n logTypes.Remove(LogType.File);\n }\n }\n\n if (logTypes.Contains(LogType.Seq))\n try\n {\n LoggerConfiguration testConfig = getConfig();\n\n if (Config.isSeqApiKey)\n testConfig.WriteTo.Seq(Config.logSeqServer, apiKey: Config.logSeqApiKey, compact: true);\n else\n testConfig.WriteTo.Seq(Config.logSeqServer, compact: true);\n\n\n Logger testWriter = testConfig.CreateLogger();\n testWriter.Information(logInitialise);\n testWriter.Dispose();\n }\n catch\n {\n //Remove Seq from the usable types and send an alert\n logTypes.Remove(LogType.Seq);\n }\n\n //With all that out of the way, we can create the final log config\n if (logTypes.Count.Equals(0))\n logConfig = null;\n else\n logConfig = getConfig();\n\n if (logTypes.Contains(LogType.Console))\n logConfig.WriteTo.Console(Config.logLevelConsole, logFormatMessageOnly, theme: SystemConsoleTheme.Literate);\n\n if (logTypes.Contains(LogType.File))\n logConfig.WriteTo.RollingFile(fileName, Config.logLevelFile, logFormatFile,\n retainedFileCountLimit: Config.logMonths * 31, shared: true, buffered: false, flushToDiskInterval: new TimeSpan(0, 0, 1));\n\n if (logTypes.Contains(LogType.EventLog))\n logConfig.WriteTo.EventLog(Common.appName, logApplication, \".\",\n manageSource, logFormatEvent, null, Config.logLevelEvent);\n\n if (logTypes.Contains(LogType.Seq))\n {\n if (Config.isSeqApiKey)\n logConfig.WriteTo.Seq(Config.logSeqServer, apiKey: Config.logSeqApiKey);\n else\n logConfig.WriteTo.Seq(Config.logSeqServer);\n }\n\n if (logConfig != null)\n logWriter = logConfig.CreateLogger();\n else\n {\n logWriter = null;\n Config.doLog = false;\n }\n }\n\n /// <summary>\n /// Instantiate a new <see cref=\"Log\"/> class with the desired log level.<para/>\n /// \n /// Logging 
will automatically be initialised if this is the first call in the code, or if logging has been disposed by a call to <see cref=\"Close\"/>.<para/>\n /// \n /// This will automatically capture the calling method and add it to the log entry, unless showMethod is set to false.\n /// </summary>\n /// <param name=\"logLevel\">Desired log level for this event</param>\n /// <param name=\"showMethod\">Add the calling method to the log text</param>\n /// <param name=\"methodName\">Automatically captures the calling method via [CallerMemberName]</param>\n /// <param name=\"sourceLineNumber\">Automatically captures the calling line number via [CallerLineNumber]</param>\n /// <returns>A <see cref=\"Log\"/> instance configured for this event</returns>\n public static Log Level(LogStatus logLevel = LogStatus.Information, bool showMethod = true, [CallerMemberName] string methodName = null, [CallerLineNumber] int sourceLineNumber = 0)\n {\n if (Config.doLog && logWriter == null)\n Init();\n\n return new Log() { logLevel = logLevel, isMethod = showMethod, methodName = methodName, lineNumber = sourceLineNumber };\n }\n\n /// <summary>\n /// Instantiate a new <see cref=\"Log\"/> class with a <see cref=\"LogStatus.Error\" /> log level, and pass the Exception into Serilog to handle.<para/>\n /// \n /// Logging will automatically be initialised if this is the first call in the code, or if logging has been disposed by a call to <see cref=\"Close\"/>.<para/>\n /// \n /// This will automatically capture the calling method and add it to the log entry, unless showMethod is set to false.\n /// </summary>\n /// <param name=\"ex\">Exception to pass to Serilog</param>\n /// <param name=\"showMethod\">Add the calling method to the log text</param>\n /// <param name=\"methodName\">Automatically captures the calling method via [CallerMemberName]</param>\n /// <param name=\"sourceLineNumber\">Automatically captures the calling line number via [CallerLineNumber]</param>\n /// <returns>A <see cref=\"Log\"/> instance configured for this event</returns>\n public static Log Exception(Exception ex, bool showMethod = true, [CallerMemberName] string methodName = null, [CallerLineNumber] int sourceLineNumber = 0)\n {\n if (Config.doLog && logWriter == null)\n Init();\n\n return new Log() { logLevel = LogStatus.Error, isMethod = showMethod, methodName = methodName, lineNumber = sourceLineNumber, errorInfo = ex };\n }\n\n /// <summary>\n /// Add a new log entry using a log template that has no parameters\n /// </summary>\n /// <param name=\"logEntry\">Log text to write</param>\n public void Add(string logEntry)\n {\n string text;\n if (isMethod)\n text = string.Format(logText, methodName, logEntry);\n else\n text = logEntry;\n\n if (Config.doLog && logWriter != null)\n {\n if (errorInfo != null)\n logWriter.ForContext(\"MethodName\", methodName).ForContext(\"LineNumber\", lineNumber).Write((LogEventLevel)logLevel, errorInfo, text);\n else\n logWriter.ForContext(\"MethodName\", methodName).ForContext(\"LineNumber\", lineNumber).Write((LogEventLevel)logLevel, text);\n }\n }\n\n /// <summary>\n /// Add a new log entry and apply parameters to the supplied log template\n /// </summary>\n /// <param name=\"logTemplate\">Log template that parameters will be applied to</param>\n /// <param name=\"args\">Parameters for the log template</param>\n public void Add(string logTemplate, params object[] args)\n {\n string text;\n if (isMethod)\n text = string.Format(logText, methodName, logTemplate);\n else\n text = logTemplate;\n\n if (Config.doLog && logWriter != null)\n {\n if (errorInfo != null)\n logWriter.ForContext(\"MethodName\", methodName).ForContext(\"LineNumber\", lineNumber).Write((LogEventLevel)logLevel, errorInfo, text, args);\n else\n logWriter.ForContext(\"MethodName\", methodName).ForContext(\"LineNumber\", lineNumber).Write((LogEventLevel)logLevel, text, args);\n }\n }\n }\n
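\nTo round this out, here's a minimal usage sketch of the class above. Log, LogStatus and their members come from the code in this post; the Worker class, the ProcessOrders method, the message text and the OrderCount property are all hypothetical, purely to show the calling pattern, and the sketch assumes it sits in the same namespace as Log and Config.
\npublic static class Worker\n {\n public static void ProcessOrders()\n {\n //Hypothetical example - Level() captures the calling method and line number via [CallerMemberName] and [CallerLineNumber]\n Log.Level(LogStatus.Information).Add(\"Processing {OrderCount} orders\", 42);\n\n try\n {\n throw new InvalidOperationException(\"Order feed unavailable\");\n }\n catch (Exception ex)\n {\n //Exception() forces an Error level and passes the exception through to Serilog\n Log.Exception(ex).Add(\"Failed to process orders\");\n }\n\n //Flush and dispose on shutdown - the next Level() or Exception() call would re-initialise logging automatically\n Log.Close();\n }\n }
\nThe nice part of this pattern is that the first call to Level() or Exception() initialises the logger on demand, so callers never need an explicit Init().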
",
"author": {
"name": "MattMofDoom"
},
"tags": [
"Structured logging",
"Serilog",
"Seq",
"C#"
],
"date_published": "2021-05-23T18:18:07-07:00",
"date_modified": "2021-06-02T00:08:52-07:00"
},
{
"id": "https://mattmofdoom.com/blogging-part-7000/",
"url": "https://mattmofdoom.com/blogging-part-7000/",
"title": "Blogging Part 7000",
"summary": "I've created a new blog using Publii, Github Pages, and CloudFlare Pages. Why? Because it's interesting, it's free, and it gives me somewhere to put up my thoughts. I've been doing a bunch of interesting things lately and I plan to share them! Previous blogs are not exist. They are not a thing. Ignore the man behind the curtain.",
"content_html": "I've created a new blog using Publii, Github Pages, and CloudFlare Pages.
\nWhy? Because it's interesting, it's free, and it gives me somewhere to put up my thoughts.
\nI've been doing a bunch of interesting things lately and I plan to share them!
\nPrevious blogs do not exist. They are not a thing. Ignore the man behind the curtain.
\n", "author": { "name": "MattMofDoom" }, "tags": [ "Wheeeee", "New", "MattMofDoom", "Blog" ], "date_published": "2021-05-21T19:13:14-07:00", "date_modified": "2021-05-22T16:25:42-07:00" } ] }