Dear Rosetta@Home participants,
We — like many of you who have contacted us — have been extremely
frustrated by the long project downtime. We (bakerlab.org) had a domain
name registration verification lapse, and our registrar (dotster.com)
and ICANN turned off DNS for bakerlab.org. We went through the steps to
get it verified again on Monday afternoon. What should have been a
quick procedure is now stretching into four days. We apologize for the mass
emailing which we have tried to keep to a minimum throughout the course
of the project, but this is an extraordinary situation and we have no
other way of reaching all of you now.
Since being down, we estimate that we have lost a total of around 3.1
million computing hours, and we continue to lose around 540 computing
hours every minute we remain offline.
We greatly appreciate your help and contributions! With your help, we
have been making rapid progress in our research which has been
attracting considerable attention, for example:
physics/origami-revolution.html (the 8-minute segment on our work starts at 20:30)
and the article titled “Big data (and volunteers) help scientists solve
hundreds of protein puzzles”
Thank you very much for your continued contributions to and support of
Rosetta@Home.
WordPress is an amazing blogging platform. However, it does require a fair amount of love. Despite Mythic Beasts managing a large portion of my stack (hardware, OS, Apache, PHP, MySQL) and WordPress having automatic background updates, I still find myself logging in and finding pending updates for WordPress.
The solution was WP-CLI. With the shell add-on, I SSH onto my account, then run a short bootstrap script.
The script downloads WP-CLI and grants it execute permission, then downloads my update script and, again, gives it execute permission.
Then it's a question of creating a cron job using crontab. This can be done by running crontab -e and adding something like the following (this runs the script every 15 minutes and redirects the output to a log file that gets overwritten each time it runs*):
* > overwrites the file; >> appends. I've not used append as I don't want to deal with the log growing, and really I only want the details of the last run. Still, your mileage may vary.
Below is the final script that executes WP-CLI
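The script itself didn't survive the copy-and-paste, so here's a minimal sketch of the sort of thing described above. The file names, the WordPress path and the cron schedule are my assumptions, not the original:

```shell
# One-off bootstrap: download WP-CLI and make it executable
curl -sO https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar
chmod +x wp-cli.phar

# Hypothetical update script (wp-update.sh) that applies any pending updates
cat > wp-update.sh <<'EOF'
#!/bin/sh
# Assumed WordPress install path - adjust to suit your account
WP_PATH="$HOME/public_html"
"$HOME/wp-cli.phar" core update --path="$WP_PATH"
"$HOME/wp-cli.phar" plugin update --all --path="$WP_PATH"
"$HOME/wp-cli.phar" theme update --all --path="$WP_PATH"
EOF
chmod +x wp-update.sh

# Cron entry (added via crontab -e): every 15 minutes, overwriting the log
# */15 * * * * "$HOME/wp-update.sh" > "$HOME/wp-update.log" 2>&1
```

`core update`, `plugin update --all` and `theme update --all` are standard WP-CLI commands; the single `>` in the cron line is what gives the "overwrite each run" behaviour mentioned above.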
Been trying to find out the following from my local council:
- How many resident parking permits have been issued
- How many resident parking bays are available
- What exactly has the money been spent on in terms of fines issued.
To say they are being evasive is an understatement.
So far, I've got responses along the lines of:
- This info is fluid, as it is not only static residents who can apply for waivers within a given zone but also agencies who provide medical needs and other statutory services. At this time we have no plans to publish this level of information on our website.
- We don't hold this information; we don't have defined bays, just areas. I followed up with: well, you must have a minimum size. FYI, normally 5m is allowed per vehicle parked at the end of a bay and 6m for those inside. Still waiting for them to A) publish the map so I can do the math, or B) do the math and give me a number.
- (A) making good deficits
(B) paying for the provision or maintenance of off street parking
(C) If (B) is considered unnecessary then the provision or operation of facilities for public passenger transport services, highway or road improvement projects within the local authority area or environmental improvements in the local authority area.
Clear as mud. So if it's been in place for 10 years and they bring in £400,000 a year in fines, that's £4 million. Spent on what? I assume the cost they charge for issuing the permits to residents covers A and B. Please Lord, let it not be those stupid Real Time Passenger Information displays*.
* I like the idea of Real Time Passenger Information, just not the solution SCC purchased. It runs on an out-of-date operating system that crashes. If it was me, I'd have got the Uni to build something using a Raspberry Pi and got the local schools involved (it would have looked pretty cool on the children's CVs).
— Matt Smith (@matt40k) November 8, 2016
So once again we have another major security leak. You can read about it here; below is the email CEO Matthew Prince wrote to customers:
Dear Cloudflare Customer:
Thursday afternoon, we published a blog post describing a memory leak caused by a serious bug that impacted Cloudflare’s systems. If you haven’t yet, I encourage you to read that post on the bug:
While we resolved the bug within hours of it being reported to us, there was an ongoing risk that some of our customers’ sensitive information could still be available through third party caches, such as the Google search cache.
Over the last week, we’ve worked with these caches to discover what customers may have had sensitive information exposed and ensure that the caches are purged. We waited to disclose the bug publicly until after these caches could be cleared in order to mitigate the ability of malicious individuals to exploit any exposed data.
In our review of these third party caches, we discovered data that had been exposed from approximately 150 of Cloudflare’s customers across our Free, Pro, Business, and Enterprise plans. We have reached out to these customers directly to provide them with a copy of the data that was exposed, help them understand its impact, and help them mitigate that impact.
Fortunately, your domain is not one of the domains where we have discovered exposed data in any third party caches. The bug has been patched so it is no longer leaking data. However, we continue to work with these caches to review their records and help them purge any exposed data we find. If we discover any data leaked about your domains during this search, we will reach out to you directly and provide you full details of what we have found.
To date, we have yet to find any instance of the bug being exploited, but we recommend if you are concerned that you invalidate and reissue any persistent secrets, such as long lived session identifiers, tokens or keys. Due to the nature of the bug, customer SSL keys were not exposed and do not need to be rotated.
Again, if we discover new information that impacts you, we will reach out to you directly. In the meantime, if you have any questions or concerns, please don’t hesitate to reach out.
Co-founder and CEO
So let's be clear:
- …the greatest period of impact was between February 13 and February 18, with around 1 in every 3,300,000 HTTP requests through Cloudflare potentially resulting in memory leakage (that's about 0.00003% of requests)
- Only customers who use Automatic HTTPS Rewrites, Server-Side Excludes and Email Obfuscation were affected.
- …data that had been exposed from approximately 150 of Cloudflare’s customers across Free, Pro, Business, and Enterprise plans
- CloudFlare is SaaS
- Security hole was completely closed within 7 hrs 11 mins of the issue being tweeted
- Security hole was mostly closed off within 1 hr 8 mins
- Production fix and service restored in 3 days 10 hrs 9 mins
- People are jumping on the problem, making it sound worse than it was (don't get me wrong, it was bad, but nowhere near as bad as Heartbleed, and Heartbleed still IS a problem)
- CloudFlare have been very transparent
…And this is why I review code I take on board, regardless of whether it works, and why I advise others to review my code.
Caching always seems to cause problems; still, we can't have it all. Today's caching problem was to do with Redgate SQL Prompt, a really amazing plugin that helps you write better SQL code. The problem was that its cache of database object metadata was out of date. I had updated a table, so when I typed select * and pressed TAB to expand the * into a list of columns, I got the old names. Luckily the fix is easy: refresh suggestions.
As the screenshot shows, it's either SQL Prompt > Refresh suggestions or just Ctrl + Shift + D.
I was re-entering my password into our NowTV box in the bedroom when it occurred to me. Authentication sucks on the Internet of Things (IoT). The problem is you have a simple device with minimal extras. On the NowTV you have a basic remote that looks like this:
Can you imagine entering a password with a length of over 20 that’s a mixture of numbers, special characters and both upper and lower case characters? Now imagine changing that password. Regularly.
If you have to press the arrows 5 times per character, that’s over 100 presses! That’s insane!
So, what's the solution? Well, I think the technology already exists, and PayPal already has it patented: QR codes. I'm not sure if PayPal had thought about using it for IoT; I suspect they only thought about using it as a way of paying. So you have a QR code on the door at the club, you scan it via the PayPal app, pay, then get your tickets sent to your Wallet. Or you scan the code from the receipt to pay the bill.
For IoT, the device would generate an encryption key, which would be regenerated whenever the device is wiped, for example when it is resold. The device would then display a QR code, via a small e-ink display or similar, that would allow pairing between the device and a user account through an internet connection to a web service. Unpairing the device from the user account would revoke the encryption key, requiring the device to generate a new key (and a new QR code). Wiping the device, however, would destroy the encryption key without revoking it, which would require some housekeeping: perhaps trying to unpair first, although wiping shouldn't depend on an internet connection to a web service in order to work. If the hard reset button is pressed, it must destroy the encryption key regardless of whether the unpairing fails or not. It must force this.
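As a rough illustration of that key lifecycle (this is not any real device's firmware; the file name, key size and revocation endpoint are all made up for the sketch):

```shell
#!/bin/sh
# Sketch of the pairing-key lifecycle described above.
KEY_FILE="device.key"

generate_key() {
    # Fresh 256-bit key; on a real device this would sit in secure storage
    # and be rendered as a QR code on the e-ink display for pairing
    openssl rand -hex 32 > "$KEY_FILE"
    echo "new pairing payload: $(cat "$KEY_FILE")"
}

unpair() {
    # Best-effort revocation against a hypothetical web service, e.g.:
    # curl -X POST https://pairing.example/revoke -d "key=$(cat "$KEY_FILE")"
    echo "revocation requested"
}

hard_reset() {
    # Must not depend on the revocation succeeding: destroy the key
    # locally regardless, then mint a new one (and hence a new QR code)
    unpair || true
    rm -f "$KEY_FILE"
    generate_key
}

generate_key   # first boot
hard_reset     # e.g. the device is being resold
```

The important design point is in hard_reset: the local key destruction happens unconditionally, so a failed network call can never leave a resold device holding its previous owner's key.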
It'll be interesting to see if PayPal expands their authentication business beyond just payments in the future.
This is absolutely amazing: Microsoft running SQL Server on Red Hat Enterprise Linux (RHEL)!
Cut-down version: I'm not working with SIMS anymore, and I can't pass on my work without them getting a large bill from Capita for using the business objects.
Years and years ago I was working on a SIMS help desk at a Local Authority (LA). I had a school log a call asking about importing email addresses. Like many schools, this school had just purchased a messaging service that allows sending emails and texts; the problem was they needed the addresses in SIMS .net, and as you can imagine, the idea of manually retyping 1,000+ email addresses was rather daunting. So, after putting a call into the Capita Partner Team asking for the API documentation, I spent the evening building a simple import process. The next day I phoned the school to give them the good news and we imported their email addresses. A few days later a few more schools asked the same question, and I decided to continue spending my evenings expanding (it now imports email, telephone and user-defined fields from XML, CSV and Excel spreadsheets) and refining the tool. I've always developed in my own time, I've never claimed overtime, and I've never charged a penny for using it.
SIMS Bulk Import uses the SIMS .net business objects (API) via the SIMS .net DLLs: the exact same .NET libraries that SIMS .net itself uses.
At the time we were in a partnership with BT, who had a procedure for raising new business ideas. The process involved working out what type of change it was; in this case it was an efficiency saving. To put it simply, this means you can't charge more than what you can save, in this case labour.
Capita charge the school. Then they charge the partner for write access, which the partner then passes on to the school, so schools end up paying twice for support of one thing.
When you take into account the Capita charge, the money-handling costs (invoicing, collecting and chasing etc), the helpdesk costs, and the fact that people expect a certain standard (i.e. you're going to have to invest a lot more in terms of development, including documentation), it just doesn't make sense. It's actually more cost-effective to hire a temp to manually key in all those details!! Nuts!!
So at this point I basically decided I'd give it away. I really didn't want to see my hard work go to waste. I managed to wangle it into the public domain without a massive Capita bill landing on my desk!! It's been in the wild for many, many years (with Capita's knowledge) with a grand total of ZERO corrupt SIMS databases. I find this quite an achievement. Don't get me wrong, SIMS Bulk Import has failed a number of times, but it's never left your SIMS system in a worse state (unless you've done something stupid like successfully imported the same email address for every pupil in SIMS!)
A few years later I switched teams and stopped working with SIMS .net. SIMS Bulk Import has been stable for a while and I've had a few commits from individuals. I'm now at the stage where I'm going to leave the LA and go work somewhere else, where it's unlikely I'll have a SIMS .net license, let alone API access. I needed to find a new owner for SIMS Bulk Import. Anyone who's talked to me would have described SIMS Bulk Import as the poor man's Salamander Active Directory; Salamander Active Directory is, simply put, the next logical step if you work out how you'd improve SIMS Bulk Import, and it's what I would have done to make SIMS Bulk Import into a commercial product. Luckily Richard agreed to take it on and even helped me recover some of the costs of SIMS Bulk Import. Before you shoot off to SalamanderSoft to download it, let me save you the disappointment: Capita has said they would charge a license fee for each school using it, i.e. it wouldn't be free. At this point I guess you can see where I'm going with this? SIMS Bulk Import isn't worth paying for, and Richard already has the expanded version (that IS worth paying for).
So in short, your options are:
- Accept you can't bulk import anymore and get typing or copy-and-pasting \ hire a temp
- Look at automating your processes and buy Salamander Active Directory
- Wait for Capita to come out with their own product. I suspect they will, and charge for it. They do have a limited SQL script that injects the records directly into the database, ironically bypassing the business objects (but hey, they support it, so it's all good, right?)
- Switch to an MIS supplier who doesn't charge for partner access \ gives you bulk import routines (Eduware Network has a list of MIS suppliers)
Option 5 is to carry on using it. I'm sure even with me saying no, don't do that (and I'm sure Capita will agree), someone will. So, a few comments.
Make sure you have the latest (or should that be last?) version: it's 2.5.0.
It should be digitally signed; think of it as SSL for your applications. If you right-click on SIMSBulkImport.exe or the .msi installer, you should see an extra tab, Digital Signatures, and you should see it's signed by me: Open Source Developer, Matt40k. If your copy doesn't have the signature, it's possible code has been injected and it is unsafe.
You should be OK, as it uses whatever version of the SIMS API you have installed, so it'll just break one day; by break, I mean it won't let you log in, or it will just report that every import failed (if it ever fails in a way that corrupts the SIMS database, then something terrible has happened with the Capita API, but I digress).
A few people have forked my code, for whatever reasons. I would just point out that the most up-to-date fork is 80 commits behind mine. That's a fair amount of work that's missing from those forks.
Anyway, hopefully you've found it useful whilst it lasted.
With the rise of the internet, high street businesses are rushing to adapt so they don't become a thing of the past. However, in their rush, are they being a stormtrooper? Are they missing the point?
First, let's look at two successful hybrid businesses: Hughes, an electrical store based in East Anglia, and Argos, a British catalogue retailer operating in the United Kingdom. Let's focus on Argos, as it's the more well-known business.
If you order online you can see the in store stock. You can then go in store and collect it. High street shops have become showrooms and mini warehouses.
Compare this to Game and Staples, where the online and in-store businesses compete against each other. The prices are different: online prices are cheaper. When you order online to collect in store, they post your order to the store. Your order comes from the central stock; it can't come from your local store.
It breeds resentment between local stores and corporate HQ. It leads to poor customer service.
I recently cancelled an online order with Staples because they claimed to do next-day delivery; I arrived in the store the next day only to find my order hadn't arrived. So I phoned up, cancelled it and bought it in store. So it cost them:
- to answer my phone call, because I couldn’t cancel via the website
- to post my order
- to receive my order, instore
- to post back my order to their central warehouse
- to answer my phone call chasing the refund
- to answer my phone call chasing the refund, again
- to answer my tweet, again chasing the refund
- to then refund my money (not sure if they have to pay a card charge)
All these actions have a staff cost, after all, staff don’t work for free.
— Matt Smith (@matt40k) September 10, 2016
and give you…
Like the IntelliSense… or the fact that pressing TAB next to the * expands it out into the list of columns.
The fact it re-opens the tabs you had open the last time you were in SQL Server Management Studio, and that it has a history. I'd like to say I never close the wrong window, especially when I haven't saved it, but I do. Luckily SQL Prompt allows me to reopen it.
It even stops DBAs going to jail :p
I would literally kill for : GROUP BY EVERYTHING , ie group by everything im not aggregating.
— Dave Ballantyne (@davebally) September 26, 2016
— Matt Smith (@matt40k) September 26, 2016