Cloudflare parser bug

So once again we have another major security leak. You can read about it here, and below is the email Cloudflare CEO Matthew Prince wrote to customers:

Dear Cloudflare Customer:

Thursday afternoon, we published a blog post describing a memory leak caused by a serious bug that impacted Cloudflare’s systems. If you haven’t yet, I encourage you to read that post on the bug:

https://blog.cloudflare.com/incident-report-on-memory-leak-caused-by-cloudflare-parser-bug/

While we resolved the bug within hours of it being reported to us, there was an ongoing risk that some of our customers’ sensitive information could still be available through third party caches, such as the Google search cache.

Over the last week, we’ve worked with these caches to discover what customers may have had sensitive information exposed and ensure that the caches are purged. We waited to disclose the bug publicly until after these caches could be cleared in order to mitigate the ability of malicious individuals to exploit any exposed data.

In our review of these third party caches, we discovered data that had been exposed from approximately 150 of Cloudflare’s customers across our Free, Pro, Business, and Enterprise plans. We have reached out to these customers directly to provide them with a copy of the data that was exposed, help them understand its impact, and help them mitigate that impact.

Fortunately, your domain is not one of the domains where we have discovered exposed data in any third party caches. The bug has been patched so it is no longer leaking data. However, we continue to work with these caches to review their records and help them purge any exposed data we find. If we discover any data leaked about your domains during this search, we will reach out to you directly and provide you full details of what we have found.

To date, we have yet to find any instance of the bug being exploited, but we recommend if you are concerned that you invalidate and reissue any persistent secrets, such as long lived session identifiers, tokens or keys. Due to the nature of the bug, customer SSL keys were not exposed and do not need to be rotated.

Again, if we discover new information that impacts you, we will reach out to you directly. In the meantime, if you have any questions or concerns, please don’t hesitate to reach out.

Matthew Prince
Cloudflare, Inc.
Co-founder and CEO

So let's be clear:

  • …the greatest period of impact was from February 13 and February 18 with around 1 in every 3,300,000 HTTP requests through Cloudflare potentially resulting in memory leakage (that’s about 0.00003% of requests).
  • Only customers who use Automatic HTTPS Rewrites, Server-Side Excludes and Email Obfuscation were affected.
  • …data that had been exposed from approximately 150 of Cloudflare’s customers across Free, Pro, Business, and Enterprise plans
  • Cloudflare is SaaS
  • The security hole was completely closed within 7 hrs 11 mins of the issue first being tweeted about
  • The security hole was mostly closed off in 1 hr 8 mins
  • Production fix and service restored in 3 days 10 hrs 9 mins
  • People are jumping on the problem, making it sound worse than it was (don't get me wrong, it was bad, but nowhere near as bad as Heartbleed – and Heartbleed still IS a problem)
  • Cloudflare have been very transparent

…And this is why I review code I take on board, regardless of whether it works, and why I advise others to review my code.

 

Caching woes

Caching always seems to cause problems; still, we can't have it all. Today's caching problem was to do with Redgate SQL Prompt, a really amazing plugin that helps you write better SQL code. The problem was that its cache of database object metadata was out of date. I had updated a table, so when I typed select * and pressed TAB to expand the * into a list of columns, I got the old names. Luckily the fix is easy: refresh suggestions.

It's either SQL Prompt > Refresh Suggestions or just Ctrl + Shift + D.

Date Format

I've had a problem recently when it came to formatting a datetime as just a date, where FormatDateTime didn't work.

The fix was to change it to Format.

Oddly, despite the parameter being a datetime, I found I still had to cast it as a date.
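As a rough sketch (the parameter name and format string here are illustrative, not the actual report's):

So (wrong)

=FormatDateTime(Parameters!MyDate.Value, DateFormat.ShortDate)

Fixed (working)

=Format(CDate(Parameters!MyDate.Value), "dd/MM/yyyy")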

FormatDateTime was introduced in VS 2008. I'm not 100% sure why this didn't work.

TextBoxImpl

Another error I hit:

Warning 1
[rsRuntimeErrorInExpression] The Value expression for the textrun ‘Textbox28.Paragraphs[0].TextRuns[0]’ contains an error: Overload resolution failed because no Public ‘/’ can be called with these arguments:
‘Public Shared Operator /(d1 As Decimal, d2 As Decimal) As Decimal’:
Argument matching parameter ‘d2’ cannot convert from ‘TextBoxImpl’ to ‘Decimal’.
C:\Projects\Reports\1. Report.rdl

Thankfully Google found Qiuyun's answer: I was missing the .Value at the end, so the expression was handing the division the TextBoxImpl object itself rather than its value.

So (wrong)

=ReportItems!Textbox1

Fixed (working)

=ReportItems!Textbox1.Value

SSRS copy and paste fail

In case you haven't heard, I've started a new job, and one of my first tasks was to speed up an SSRS report. One of the first issues I stumbled across was this:

An error occurred during local report processing.
The definition of the report ‘/1. Report’ is invalid.
The Value expression for the textrun ‘Tx412.Paragraphs[0].TextRuns[0]’ contains an error: [BC30456] ‘RdlObjectModel’ is not a member of ‘ReportingServices’

Looking into the issue, it appears the expressions got expanded when they were copied – basically, they were made fully qualified.

So

Cint

becomes:

Microsoft.ReportingServices.RdlObjectModel.ExpressionParser.VBFunctions.Cint

The fix was to just edit the code and do a find and replace, replacing the added namespace prefix with nothing. Simple.
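If it has crept into more than one report, a quick PowerShell sweep (the folder path here is illustrative) does the same find and replace across every .rdl file:

# Strip the expanded namespace prefix from every report in the folder
$prefix = 'Microsoft.ReportingServices.RdlObjectModel.ExpressionParser.VBFunctions.'
Get-ChildItem -Path 'C:\Projects\Reports' -Filter '*.rdl' -Recurse | ForEach-Object {
    (Get-Content $_.FullName -Raw) -replace [regex]::Escape($prefix), '' |
        Set-Content $_.FullName
}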

SSRS 2016 by default

For the past 4 months I've been using Visual Studio 2015 rather than Visual Studio 2013, and yesterday I hit a bug with SSRS (shows how much I love SSRS).

The bug appeared when it came to deployment. The error when deploying the SSRS report was:

Exception calling “CreateCatalogItem” with “7” argument(s): “The definition of this report is not valid or supported
by this version of Reporting Services. The report definition may have been created with a later version of Reporting
Services, or contain content that is not well-formed or not valid based on Reporting Services schemas. Details: The
report definition has an invalid target namespace
‘http://schemas.microsoft.com/sqlserver/reporting/2016/01/reportdefinition’ which cannot be upgraded.

The report that errored on deployment was the one I had just changed. Looking at the code diff in Visual Studio Team Services, you can see it's added in some new 2016 goodness. I checked and double-checked my settings: Visual Studio 2015 wants to give you a 2016 file. If you manually deploy from within Visual Studio, it works. Checking out the bin\debug folder reveals that it's using the 2016 version to build a legacy version, which makes sense, as the 2016 schema is the old one (2008R2) with extra goodness slapped in – like being able to define parameter location.

So, we now have to build our SSRS reports. To do so, I've created a PowerShell script that builds the solution using devenv.exe with the parameters for a silent build. This worked great, only it took over 50 mins to run. The main lag was that every time it loaded the solution, it builds\checks\something with the database projects, and that takes an age. The fix was to create a dedicated report solution file (Report.sln). I did this with a PowerShell script that copies the main solution file – which included all the SSDT (database) projects, Integration Services (SSIS) packages and analysis models as well as the SSRS reports – and then removes all projects except the SSRS ones. I did it via PowerShell simply because I don't like extra manual steps: when you add a new SSRS project you'd have to remember to add it to Report.sln, which you'd probably forget, and then waste time debugging a silly mistake.
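A minimal sketch of the silent build step (the devenv.exe path, solution path and configuration name are illustrative – adjust for your VS install):

# Build the reports-only solution silently and wait for devenv.exe to finish
$devenv = 'C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\IDE\devenv.exe'
Start-Process -FilePath $devenv -ArgumentList '"C:\Projects\Report.sln"', '/Build', 'Release' -Wait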

Creating the Report.sln reduced the build time to just over 2 mins (from over 50 mins!), so it was worth doing.

Swollen SSISDB caused by Continuous Deployment

I recently had an incident logged where the drive hosting the SSISDB database filled up on the UAT environment. We had SSISDB set to keep only 2 copies of each project; however, the additional copies are only removed when the SQL Agent job – SSIS Server Maintenance Job – runs.

The SQL job is scheduled to run at 00:00 (midnight) every day (the default) and runs two steps:

  • SSIS Server Operation Records Maintenance
    TSQL Script: EXEC [internal].[cleanup_server_retention_window]
  • SSIS Server Max Version Per Project Maintenance
    TSQL Script: EXEC [internal].[cleanup_server_project_version]

The problem is that we do Continuous Deployment (CD) – we use Octopus Deploy to automate our deployment process, and it's so effective that we often do multiple releases a day. This in turn caused the log file to swell when a large number of old project versions were removed in one go, which resulted in the drive filling up – despite the recovery model being set to simple.

The solution was simple: add an additional log file located on another drive (I used the backup drive as it has tons of free space), run the job, truncate the log files, then remove the extra log file. After that, it was just a question of running the two cleanup stored procedures post-deployment.
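A rough T-SQL sketch of the one-off fix (the file path, size and logical file names are illustrative – check sys.database_files for your actual names first):

-- 1. Add a temporary second log file on a drive with plenty of free space
ALTER DATABASE SSISDB
ADD LOG FILE (NAME = SSISDB_log_temp,
              FILENAME = 'E:\Backups\SSISDB_log_temp.ldf',
              SIZE = 10GB);

-- 2. Run the cleanup the maintenance job would normally run
EXEC SSISDB.[internal].[cleanup_server_retention_window];
EXEC SSISDB.[internal].[cleanup_server_project_version];

-- 3. Shrink the logs back down, then drop the temporary file
USE SSISDB;
DBCC SHRINKFILE (N'log', 0);
DBCC SHRINKFILE (N'SSISDB_log_temp', 0);
ALTER DATABASE SSISDB REMOVE FILE SSISDB_log_temp;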

Authenticating IoT

I was re-entering my password into our NowTV box in the bedroom when it occurred to me: authentication sucks on the Internet of Things (IoT). The problem is you have a simple device with minimal extras. On the NowTV, you have a basic remote with little more than arrow keys and a select button.

Can you imagine entering a password over 20 characters long that's a mixture of numbers, special characters and both upper- and lower-case letters? Now imagine changing that password. Regularly.

If you have to press the arrows 5 times per character, that’s over 100 presses! That’s insane!

So, what's the solution? Well, I think the technology already exists – and PayPal already has it patented: QR codes. I'm not sure if PayPal has thought about using it for IoT; I suspect they only thought about it as a way of paying. So you have a QR code on the door of the club, you scan it via the PayPal app, pay, and get your tickets sent to your Wallet. Or you scan the code on the receipt to pay the bill.

For IoT, the device would generate an encryption key, which would be regenerated whenever the device is wiped – for example, when it is resold. The device would then display a QR code, via a small E-Ink display or such, that would allow pairing between the device and a user account, via an internet connection to a web service. Unpairing the device from the user account would revoke the encryption key, requiring the device to generate a new key (and a new QR code). Wiping the device, however, would destroy the encryption key without revoking it, which would trigger some housekeeping – perhaps attempting to unpair first – but it shouldn't depend on an internet connection to a web service in order to work. If the hard reset button is pressed, the device must destroy the encryption key regardless of whether the unpairing fails or not. It must force this.

It'll be interesting to see if PayPal expands the authentication business beyond just payments in the future.

Pretty time

This week I've been working on another new cube, which had a time-spent measure stored in minutes. To pretty it up, it wanted to be in days, hours and minutes.

Here is a simplified version of the final output:

[prettytime screenshot]

and the hopefully easier-to-understand MDX code.
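As a minimal sketch of the hours-and-minutes version (the measure and cube names here are illustrative):

WITH MEMBER [Measures].[Pretty Time] AS
    // whole hours, then the leftover minutes
    CStr(Int([Measures].[Time Spent] / 60)) + "h " +
    CStr([Measures].[Time Spent] - 60 * Int([Measures].[Time Spent] / 60)) + "m"
SELECT [Measures].[Pretty Time] ON COLUMNS
FROM [MyCube]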

Adding the days option isn't much more work – it does, however, make the code look a lot more angry. You also have the issue of what a day actually is: is it 24 hours? Is it the working hours, or less – the realistic billable time after you deduct admin time?

Dangers of distribution lists

Distribution lists are amazing things: they allow us to send mail to our team without having to remember exactly who is in it. This is especially true in this modern age, where we are part of many different teams, sometimes without even realising it. These lists can contain hundreds or even thousands of members, and the latest NHS IT bug has left IT departments maybe wanting to double-check their distribution lists – hopefully before the security team comes knocking.

Luckily, this is really easy in Excel 2016. Built into Excel 2016 is PowerQuery, and one of the out-of-the-box connectors is Active Directory. With a bit of PowerQuery magic, you can easily pull this data out.

So we want to:

  • Connect to our Active Directory
  • List out all our Groups
  • Filter to email-able Groups
  • Include
    • the group owner
    • the members
    • any security \ restrictions on who can email the list

Note: If you're unlucky enough to be running an old version of Excel, it's possible to do the same thing in Power BI Desktop, which is a free download.

Below is the M code to do this:
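This is a minimal sketch, assuming a domain of contoso.com; the record columns and attribute names ("group", "mailRecipient", "member" and so on) depend on your AD schema, so expand the record columns in the query editor to see what yours exposes:

let
    // connect to the domain (swap in your own domain name)
    Source = ActiveDirectory.Domains("contoso.com"),
    Domain = Source{[Domain = "contoso.com"]}[#"Object Categories"],
    // navigate to the group objects
    Groups = Domain{[Category = "group"]}[Objects],
    // pull member and owner details out of the record columns
    WithMembers = Table.ExpandRecordColumn(Groups, "group", {"member", "managedBy"}),
    // mail-enabled groups have a mail attribute
    WithMail = Table.ExpandRecordColumn(WithMembers, "mailRecipient", {"mail"}),
    // filter down to the email-able groups
    EmailableGroups = Table.SelectRows(WithMail, each [mail] <> null)
in
    EmailableGroups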

For more help on how to enter the M code into Excel see my previous post.

UPDATE: Please note that this only works with on-premises Active Directory and doesn't work with Azure Active Directory. Others have said it's possible via OData or such, however I haven't tried… yet.