Rubber Stamp from a Big Name or Real Security?

This week I challenged a client (and myself) to a test.  The client had gone out to get a vulnerability assessment of their SaaS web application from a North American firm that is recognized as one of the top IT security companies in the field.  Let’s call them “H”.  I was sympathetic when my client explained the reason they picked H: it was precisely because H was widely recognized, which would make it easier to “sell” the result of the assessment to their downstream customers.  In other words, H would provide a superior rubber stamp.

This bothered me a bit.  So I offered the client the following challenge: if I do a second vulnerability assessment on the same web application, will I find more vulnerabilities than H did?

Long story short.  I found more vulnerabilities.

H found:

– 1 high-risk vulnerability
– 3 medium-risk vulnerabilities
– 1 low-risk vulnerability
– 5 total

I found:

– 4 high-risk vulnerabilities
– 5 medium-risk vulnerabilities
– 8 low-risk vulnerabilities
– 17 total

Quantity isn’t everything, of course.  I also supplied proof-of-concept code for key vulnerabilities that were not easily reproducible through the application’s GUI.  My client shared with me that H charged more than $15,000 for their work.  In this case my work was pro bono, but if I were to charge the client next time, it would cost them approximately $3,000 including preparation and follow-up.  If I crunch the numbers (vulnerabilities per dollar), I figure that in this case I was about 20x more efficient than H.

I have considered whether luck played a role in this difference.  When it comes to finding vulnerabilities, I can’t deny that luck plays a part, but luck without skill will yield absolutely nothing useful.  With a 20x difference in value, and given that H surely used a top-notch analyst with top-notch tools, it still doesn’t add up well for H.

While I believe I successfully answered the question of whether it’s better to use an IT security freelancer such as myself versus a “Big Name” in security, one big question remains:

Do you want a shiny rubber stamp? Or do you need real security?

Gb`’b4&^faQ? -> Beep It Over

Remember when this happened?  You needed to tell someone over the phone a complicated string like: Gb`’b4&^faQ?

The conversation probably went something like this:

Alice: Upper case G, lower case b, slanted single quote, non slanted single quote …

Bob: What the hell are you talking about?!?!  What’s a slanted quote?!

Alice:  Why don’t I beep it over to you.  Ready?

Bob: <Gets his decoder ready> …. Ready!

Alice: <Presses a button> <BeepSchrrrrrBeepBeep… modem kind of noise>

Bob: ok got it  <sees Gb`’b4&^faQ?>

You can also beep things over with https://beepitover.com

 

Let’s Encrypt is awesome

It used to be that if you wanted to encrypt traffic with SSL you had a couple of choices.  You could either self-sign for $0 and get a gnarly warning message every time you or your users visited the site, or you could pay $60+/year for an SSL certificate.

Let’s Encrypt gives you a 3-month (indefinitely renewable) certificate for free.  The best part isn’t the cost, though; it’s that the setup is so easy.  You can get an SSL certificate for your Apache site right from the command line with three button presses.  You can’t do that even if you pay $200 for a commercial SSL certificate, which will take you at least 30 minutes to set up.
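
For example, with the certbot client and its Apache plugin installed (this is my sketch of a typical run rather than a transcript from the original setup; the domain is a placeholder):

# obtain a certificate and let certbot configure Apache for you
sudo certbot --apache -d example.com
# renewal can be tested with
sudo certbot renew --dry-run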

There must be a catch, right?  Not really, although right now my BlackBerry doesn’t recognize the cert.  That will change with time; it’s only because Let’s Encrypt is so new.

Changing Ceph Configuration on all Nodes

One question regarding Ceph that comes up frequently is: where do you change the ceph.conf file?  On the admin node?  On each node manually?  Or will it magically replicate on its own?

The answer is that you change ceph.conf in only one place: on the admin node.  You then use ceph-deploy to push the change to all the other nodes.

For example, if you have a cluster consisting of n0, n1 and n2, you would do it like this:

# log in to the admin node, then change to the directory holding the cluster's ceph.conf
cd my-cluster
# push ceph.conf (and the admin keyring) to the listed nodes
ceph-deploy --overwrite-conf admin n0 n1 n2
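
If you only want to push the updated ceph.conf without touching the admin keyring, ceph-deploy also has a config push subcommand; something along these lines should work, but check the help output for your ceph-deploy version:

# push only the configuration file to the listed nodes
ceph-deploy --overwrite-conf config push n0 n1 n2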

 

Handling Ceph near full OSDs

Running Ceph near full is a bad idea; what you really need to do is add more OSDs.  However, during testing it will inevitably happen, and it can also happen when you have plenty of disk space but the weights are wrong.  UPDATE: better yet, calculate how much space you really need to run Ceph safely ahead of time.  If you have to resort to handling near-full OSDs, your assumptions about safe utilization are probably wrong.
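
To see where you stand, the standard status commands (not specific to any of the fixes below) show per-OSD utilization and the near-full warnings:

# per-OSD utilization, weights and variance
ceph osd df
# lists the OSDs that are flagged near full / full
ceph health detail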

Usually when OSDs are near full, you’ll notice that some are more full than others.   Here are the ways to fix it:

Decrease the weight of the OSD that’s too full.  That will cause data to be moved from it to OSDs that are less full.

ceph osd crush reweight osd.[x] [y]  

x is the OSD id and y is the new weight.  Be careful making big changes; usually even a small incremental change is sufficient.
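
For example, to set a slightly lower CRUSH weight on one OSD (the id and value here are made up for illustration; the current weights are visible in ceph osd tree):

ceph osd crush reweight osd.14 1.75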

Temporarily decrease the weight of the OSD.  This is the same idea as above, except that the change is not permanent:

ceph osd reweight [id] [weight]

id is the OSD number and weight is a value from 0 to 1.0 (1.0 is no change, 0.5 is a 50% reduction in weight).

for example:
ceph osd reweight 14 0.9

 

Let Ceph reweight automatically

ceph osd reweight-by-utilization [percentage]

This reweights the OSDs by reducing the weight of those that are heavily over-used.  By default it adjusts the weights downward on OSDs that are at 120% of the average utilization or more, but if you include a threshold percentage it will use that instead.
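
For example, to bring down anything above 110% of the average utilization (110 is just an illustrative threshold; pick one that suits your cluster):

ceph osd reweight-by-utilization 110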

Slow VMware performance with iSCSI tgt and Ceph [Solved]

After a lot of head scratching and googling I finally discovered why my Ceph performance was so slow compared to NFS when using iSCSI tgt on my gateway.   I was getting only 0.1 MB/s, compared to the 90 MB/s I was getting through NFS.  It turns out that ESXi had hardware acceleration (VAAI) turned on for its iSCSI initiator, and apparently that isn’t compatible with tgt.  To turn it off I followed these steps:

Turning off VAAI
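
In essence (this is my sketch of the standard advanced settings involved, not a copy of the linked steps; verify the option names against VMware’s documentation for your ESXi version):

# on the ESXi host, disable the three VAAI primitives for block storage
esxcli system settings advanced set -o /DataMover/HardwareAcceleratedMove -i 0
esxcli system settings advanced set -o /DataMover/HardwareAcceleratedInit -i 0
esxcli system settings advanced set -o /VMFS3/HardwareAcceleratedLocking -i 0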

I didn’t even have to reboot or reload any configuration; the effect was an immediate jump in performance back to normal.