Arbor Peak Flow anti-DDoS device saves another network

This is the kind of network graph I love to see. I won’t get into the specifics of the device since you can read about it on the Arbor Networks website, but suffice it to say that the green line of incoming traffic is much more sane now. I’ve left the graph legend off to protect the innocent, but it’s a significant improvement, in the hundreds of Mbps.

Arbor Peak Flow graph, generated by Observium


How to fix the Percona repo failure when installing Percona Toolkit

Here’s a solution to the not-so-long-standing issue of the Percona yum repo being broken for the CentOS 6 x86_64 version of the percona-toolkit package. The repo metadata lists an older version of the RPM that is no longer available on the site, so the fix is simply to download the newer file and tell yum to install it locally. A side benefit is that yum can still manage the RPM without your adding the Percona repo at all; the Percona repo instructions set ‘enabled=1’, which has caused conflicts with Base repo versions of the MySQL packages, and that’s not a great idea unless you’re set up to use yum’s priorities method of repo weighting.

So, if you see this after installing the repo via the instructions on their site:
Downloading Packages:
http://repo.percona.com/centos/6/os/x86_64/percona-toolkit-2.1.9-1.noarch.rpm: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 404"
Trying other mirror.

Error Downloading Packages:
percona-toolkit-2.1.9-1.noarch: failure: percona-toolkit-2.1.9-1.noarch.rpm from percona: [Errno 256] No more mirrors to try.

The solution is as follows and is only two simple commands:
wget http://repo.percona.com/centos/6/os/x86_64/percona-toolkit-2.2.1-1.noarch.rpm
yum install ./percona-toolkit-2.2.1-1.noarch.rpm

Or you could even be so bold as to skip the local download entirely, since yum accepts a package URL directly; I didn’t bother, but this should work the same way:
yum install http://repo.percona.com/centos/6/os/x86_64/percona-toolkit-2.2.1-1.noarch.rpm


Cloudflare, now offering to be your Single Point of Failure

There have been many articles about Cloudflare’s downtime last week, so I won’t get into the technical details. There is, however, some fine print to remember. Consider this a subtle reminder that core Internet infrastructure services like Cloudflare’s DNS-based “Always Online” caching and packet-inspection security services do not come with a Service Level Agreement, even at the “Pro” account level. With a Pro account you are paying for a service with no uptime guarantee, and you can only hope that it resolves your sites the majority of the time. This is fine, this is what the contract says: no SLA unless you pay for the Business account. An odd naming convention, given that most professionals use their websites for business and would want the SLA, but I digress.

So, the SLA is not really the issue once you look at the architectural alternatives for maintaining availability when your primary and secondary DNS servers could go offline. The typical design uses more than one, and certainly more than two, DNS servers for your domain so that your addresses still resolve if the primary and even the secondary go down. Typically these servers sit on separate subnets and even in separate geographical regions, so that events like tsunamis and datacenter fires cannot take out both your primary and secondary name servers; most providers therefore offer a third and fourth resolver, but not Cloudflare.

Cloudflare limits you to using only their DNS servers for your domain, and they provide just two resolvers, not three or four like most DNS services. If you wish to add a third or fourth name server entry to ensure resolution even when both Cloudflare DNS servers go offline, sorry, you cannot: Cloudflare will disable your domain in their system if you use any DNS entries that are not their own, which includes a third or fourth entry. So now your “Professional” websites rely on a DNS and security service that has no SLA but for which you are paying “Professional” rates. If your “Professional” grade sites go offline because Cloudflare botched a router upgrade or was hacked, you’re SOL and you do not get downtime credits, sorry. You can’t even design your architecture to resolve via alternate name servers, or they will disable your domain. So if Cloudflare ever goes offline, your sites go offline with them, with no alternatives. If you use Cloudflare, their service becomes your Single Point of Failure.

I am not one to create drama, but this is an issue that none of the other Cloudflare “Pro” account users I’ve talked to were aware of. So, here is a recent email exchange with Cloudflare regarding a credit for their having caused all of my sites to go offline on more than one occasion; this is not limited to the recent event with their routers.

Cloudflare: “I’ve reviewed your account and note that you currently have 3 Pro subscriptions with us. At this time we do not offer a guaranteed level of service or SLA for our Free or Pro plans… We are also investing a great deal of time and resources to ensure the resiliency of the network even in the event of localized failures that may happen from time to time.” — So not only is there no SLA, but there can be localized failures from time to time that you also do not get credit for. That explains my monitoring reports: for sites in the same rack, some even running on the same servers, only the Cloudflare-enabled domains were shown as offline while the others remained available.

My response to Cloudflare: “I will not keep Cloudflare running for my sites. There are many reasons, but another one recently made itself known to me when I decided to add tertiary and quaternary DNS servers, only to run into the following technical limitation that precludes my ability to rely on Cloudflare for my back-end infrastructure domain: if I want to specify a 3rd or 4th DNS server as a backup resolver (like Route53 or my own servers running PowerDNS, for example), the Cloudflare system complains and disables my domain. I understand why the system is designed this way: you want all traffic going through the CF system so that the features are executed and so forth. However, in the event of an issue like the previous outage, there is no fallback for users/systems to resolve my domain via an alternate DNS system. I am limited to two Cloudflare DNS servers and nothing more.

The way that the DNS requirement is setup makes Cloudflare an all or nothing solution – you either use Cloudflare for the domain or you do not. And, as 785,000+ sites experienced, this makes Cloudflare (no matter how resilient and improved after this incident) a single point of failure that system engineers and architects cannot design failover services around.

This is the second time that I have had issues with Cloudflare services not working correctly. The first was when one of my servers went offline and the “Always On” feature didn’t do anything; the site was not kept online from cache even though there had been plenty of time for the crawlers to get the content (which is static, non-dynamic, non-database-driven content: simply a front page that is supposed to load fast and act as a click portal to our primary systems).

And now I have been seeing users connect to my site from countries that I have put on the block list. I have a number of ‘trouble’ countries configured in Cloudflare to disallow access to my site, yet these users are connecting anyway. Clearly the country-blocking feature is broken as well.

I want to use Cloudflare. I want to love it. I want to tell everyone I know how great and useful it is. But after six months of using it on several sites, it has done nothing more than cost me a lot of time trying out different configurations and wondering why feature x/y/z isn’t working as stated. Then there have been the outages from human error and incorrect ITIL process adherence.

So I will be setting up some alternate caching servers at different datacenters and moving some of my content onto a CDN. Cloudflare has failed and I am tired of wanting to like it.”


Building a MySQL Private Cloud: Step 1

Building clusters is usually a fun time. Here’s one of my setups at the Equinix LAX1 facility that is being used for VPN services, OpenVZ clustering, and general RADIUS and MySQL clustering integration. Once the clustering design is finalized (it’s still in flux while I try out different setups), I’ll post some physical and logical architecture diagrams to show “How to Build a Fault Tolerant Infrastructure for Virtualized MySQL NDB Cluster + Python-based VPN systems.” Stay tuned for more.

LAX1 rack, front view


OpenVZ and Amazon S3: how to solve the dreaded connection throttle failure

Sometimes we encounter odd application responses that seem to make no sense. One such issue involves running virtual server instances (OS containers, not para-virtualized VMs) and attempting to back up their data to Amazon’s S3 cloud storage. For moderately sized virtual machines running MySQL databases or Python/PHP based websites and code repositories, this can be an inexpensive, quickly provisioned, and easy way to provide disaster recovery backups in numerous geographic locations, since we generally want DR content to be located somewhere physically distant. Nevertheless, we can encounter errors when using an S3 bucket in a distant location if the server’s time zone and clock-sync data are incorrect.

The commonly seen error is as follows – and it doesn’t give much information for troubleshooting and resolution.

WARNING: Upload failed:  ([Errno 32] Broken pipe)
WARNING: Retrying on lower speed (throttle=0.00)
WARNING: Waiting 3 sec...

The solution is seemingly unrelated to any network or file-system settings on the virtual machine or the host server. It has to do with running S3 storage buckets in a different time zone than your server while the system is not synced to the NTP pools. So, here is the solution for Red Hat/CentOS/Fedora/Scientific (for other Linux distributions, just replace the package-management commands as needed):

First we have to enable the ability for the OpenVZ container to utilize NTP. Add the following line to your /etc/vz/conf/101.conf file (where 101 in this example is the ID of your own container, which you can find via the command “vzlist”).

CAPABILITY="SYS_TIME:on"

Then restart the container(s) for the setting to take effect, and log in to the container. You can either SSH in or enter the container from the main host.

$ vzctl restart 101
$ vzctl enter 101

On the VM itself, install the ntpdate package to be able to sync time data.

$ sudo yum install ntpdate

Sample ntp.conf file for NTP pool servers on CentOS 6.3. There are plenty of other configuration settings but these are the basics. This file goes on the VM server, not the host server.

$ sudo cat /etc/ntp.conf
driftfile /var/lib/ntp/drift
restrict default kod nomodify notrap nopeer noquery
restrict -6 default kod nomodify notrap nopeer noquery
restrict 127.0.0.1 
restrict -6 ::1
server 0.centos.pool.ntp.org
server 1.centos.pool.ntp.org
server 2.centos.pool.ntp.org
includefile /etc/ntp/crypto/pw
keys /etc/ntp/keys

Restart the ntpdate service on the VM to sync to the pool.

$ sudo service ntpdate restart
ntpdate: Synchronizing with time server:                   [  OK  ]

Add a cron job to the VM (either in /etc/crontab, or via “crontab -e”, in which case drop the “root” user field) to automatically sync the time every day.

# sync date/time with ntp pool
05 01 * * *	root /usr/sbin/ntpdate 0.centos.pool.ntp.org 2>&1 | /usr/bin/tee -a /var/log/messages

Now you can run S3 backups without throttling errors. Done and done.
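For the curious, the reason clock sync matters is that S3 rejects requests whose timestamp drifts too far from Amazon’s clocks, which surfaces as those unhelpful broken-pipe retries. Here’s a minimal stdlib sketch (my own illustration, not part of the original fix) that measures local clock skew against an NTP pool server using a raw SNTP query; the packet layout follows RFC 4330, but treat this as a diagnostic toy rather than a supported tool.

```python
import socket
import struct
import time

# Seconds between the NTP epoch (1900-01-01) and the Unix epoch (1970-01-01)
NTP_EPOCH_OFFSET = 2208988800

def parse_transmit_time(packet: bytes) -> float:
    """Extract the transmit timestamp (as Unix seconds) from a 48-byte SNTP reply."""
    secs, frac = struct.unpack("!II", packet[40:48])
    return secs - NTP_EPOCH_OFFSET + frac / 2**32

def clock_skew(server: str = "0.centos.pool.ntp.org", timeout: float = 5.0) -> float:
    """Return local clock minus server clock, in seconds (positive = local is fast)."""
    query = b"\x1b" + 47 * b"\x00"  # SNTP v3, client mode request
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(query, (server, 123))
        packet, _ = s.recvfrom(48)
    return time.time() - parse_transmit_time(packet)
```

If the skew reported here is more than a few minutes, the S3 upload failures above are almost certainly the clock and not the network.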


[updated] Free book February returns – Get a copy of the InnoDB Quick Reference Guide

This month is a special month. It’s not because of Presidents’ Day, or even the exciting day when we revel in groundhogs. No, this month is special because the free book giveaway is happening again. This is where you, the reader, get to win something free for doing nothing more than posting a comment saying that you want a copy of my recently published book, the InnoDB Quick Reference Guide from Packt Publishing. The book is a great reference for DBAs and for PHP, Python, or Perl programmers who integrate with MySQL and want to learn more about the InnoDB database engine.

So, all you have to do is post a comment here saying that you want a copy, along with a sentence (or more) about how you use InnoDB in your development or production environment. At the end of the month, two readers will be chosen via a random list-sorting script that I’ve whipped up for just this purpose. You will then get an email from the publisher, who will send you a brand-new e-copy of the book free of charge. It’s that simple. Free book February! Comment now!
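The winner-picking script is nothing fancy; here is a minimal sketch of the idea (the real script wasn’t published, so the function name and details are my own illustration):

```python
import random

def pick_winners(commenters, count=2, seed=None):
    """Shuffle the commenter list and return `count` unique winners."""
    rng = random.Random(seed)  # pass a seed only if you need reproducibility
    pool = list(commenters)
    rng.shuffle(pool)
    return pool[:count]

# Example: draw two winners from four commenters
winners = pick_winners(["alice", "bob", "carol", "dave"], count=2)
```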

Update:
Here are the winners of the book contest for the InnoDB Quick Reference Guide
Matthew Bigelow — who will be using the book during his upgrade of a medical services database architecture.
Erin O’Neill – who will be raffling the book at the upcoming http://www.sfmysql.org conference: register for the conference now and you’ll have a second chance to win a copy of the book!


The InnoDB Quick Reference Guide is now available

I’m pleased to announce that my first book, the InnoDB Quick Reference Guide, is now available from Packt Publishing and you can download it by clicking here. It covers the most common topics of InnoDB usage in the enterprise, including: general overview of its use and benefits, detailed explanation of seventeen static variables and seven dynamic variables, load testing methodology, maintenance and monitoring, as well as troubleshooting and useful analytics for the engine. The current version of MySQL ships with InnoDB as the default table engine, so whether you program your MySQL enabled applications with PHP, Python, Perl or otherwise, you’ll likely benefit from this concise but comprehensive reference guide for InnoDB databases.

Here are the chapter overviews for reference:

  1. Getting Started with InnoDB: a quick overview of core terminology and initial setup of the testing environment.
  2. Basic Configuration Parameters: learn about the most common settings and prerequisites for performance tuning.
  3. Advanced Configuration Parameters: covers advanced settings that can make or break a high-performance installation of InnoDB.
  4. Load Testing InnoDB for Performance: learn all about general purpose InnoDB load testing as well as common methods for simulating production workloads.
  5. Maintenance and Monitoring: covers the important sections of InnoDB to monitor, tools to use, and processes that adhere to industry best practices.
  6. Troubleshooting InnoDB: learn all about identifying and solving common production issues that may arise.
  7. References and Links: informative data for further reading.

General: new site theme based on Twitter Bootstrap

Just a quick note to say that the site has been updated to a new theme which is based on the super awesome Twitter Bootstrap UI framework. To make life easier, since this site is also using WordPress at the core, I’ve made use of the WordPress Bootstrap plugin which allows for very simple integration. However, that wasn’t enough because the Bootstrap plugin comes with rather basic and boring generic styles; so I added the plugin for Google Font support and then modified the CSS accordingly.

You will also notice that the site is undergoing some reorganization of categories and content tags. This should help clean up search results as well as general information sorting. I’ve removed the sidebar widget for category listings in favor of a top-nav menu that uses Bootstrap menu elements. The menu is focused on the primary areas of the site’s content: MySQL topics, programming (Python, PHP, JavaScript, etc.), the numerous book reviews, and of course the code repository listings.

To wrap up all of these changes I’ve also featured a number of former projects, primarily python based, which have been imported to their new homes at BitBucket. You can see the various projects listed in the Projects menu at the top-nav bar.


Reviewed: KnockoutJS Starter

Those new to the KnockoutJS world do not know of its beauty and simplicity in offering rapid development of JavaScript-based web applications, and that’s just a shame. What we have in KnockoutJS is a framework that allows a web developer to quickly and easily write MVVM (Model-View-ViewModel) based applications without the hassle of writing core services or dealing with some of the more complex but similar MVVM frameworks. On that topic, I present “KnockoutJS Starter” by Eric M. Barnard. The fitting tagline, “Learn how to knock out your next app in no time with KnockoutJS,” is true: it’s one of the quickest reads I’ve seen on the framework, taking the user from zero to working application in a very short amount of time.

The book starts with a healthy introduction to the necessary technologies: KnockoutJS and the MVVM architecture. After a general how-to on downloading the KnockoutJS libraries and getting them up and running, we go into the basic sample. Things work and make sense; all good. Then it’s on to a more involved sample application covering topics like creating a model, creating a view, creating a ViewModel, using and managing observable arrays, and data bindings; in essence, everything you need to know to work with KnockoutJS.

Further pages are given to more advanced topics like subscribables, observables, utilities, binding handlers, and so on. These pages are an absolute necessity for getting anywhere useful with the framework, and the topics are covered very well; they aren’t simply explained from a high-level view, but come with rich code samples and discussions of how these framework elements are put to use. Occasionally we’re given little-known tips that only someone who has been using the framework would be aware of; very useful. The final section of the book is dedicated to reference: site links, tutorials, community, and various other resources that an aspiring developer would benefit from.

Overall, KnockoutJS Starter is a well-rounded resource for the MVVM framework, and it will always have a place on my shelf as reference material for what is surely going to be a very popular development model. Packt has the book available here: http://www.packtpub.com/knockoutjs-starter/book


Super Python: three applications involving IRC bot master, MySQL optimization, and Website stress testing.

In my ongoing efforts to migrate my fun side projects and coding experiments from SVN to Git I’ve come across some of my favorite Python based apps – which are all available in their respective repos on BitBucket, as follows:

IRC Bot Commander

  • What it does: it’s an IRC bot that takes commands and does your bidding on whichever remote server the bot is installed on.
  • How it does it: the bot runs on whatever server you install it on, then it connects to the IRC server and channel you configured it to connect to and it waits for you to give it commands, then it execs the commands and returns the output to your IRC chat window.
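The command-execution core of such a bot boils down to a few lines. Here is a minimal sketch (my own illustration, not the repo’s actual code; running arbitrary IRC-supplied commands is obviously dangerous, so restrict which nicks may issue them):

```python
import subprocess

def run_command(cmd: str, timeout: int = 30) -> str:
    """Execute a shell command and return its combined output for the IRC reply."""
    try:
        result = subprocess.run(
            cmd, shell=True, capture_output=True, text=True, timeout=timeout
        )
        return (result.stdout + result.stderr).strip()
    except subprocess.TimeoutExpired:
        return "error: command timed out"
```

The bot would call run_command() with whatever the authorized user typed in the channel and send the returned string back as chat lines.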

MacroBase – MySQL Analytics

  • What it does: Offers advanced tuning reports via analysis of nearly all MySQL global variables + statistics and then generates a tuning report that tells you the optimal setting for different buffers, logs, etc. Think of it like the MySQL Tuning Primer but with far more reach. Think of it as the command line version of the reports that Kontrollbase outputs.
  • How it does it: in addition to connecting to MySQL and reading global variables and status (and information_schema) it connects to the OS’s SNMP daemon and analyzes system level metrics to use in the vast number of equations and formulas required for the report.
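As a taste of the kind of formula such a report runs, here is a simplified sketch of one classic check (the counter names mirror MySQL’s Innodb_buffer_pool_read_requests and Innodb_buffer_pool_reads status variables, but the real tool computes far more than this):

```python
def buffer_pool_hit_ratio(read_requests: int, disk_reads: int) -> float:
    """InnoDB buffer pool hit ratio: fraction of page reads served from memory.

    read_requests: logical read requests (Innodb_buffer_pool_read_requests)
    disk_reads:    reads that had to go to disk (Innodb_buffer_pool_reads)
    """
    if read_requests == 0:
        return 1.0  # no reads yet; nothing missed
    return 1.0 - disk_reads / read_requests

# e.g. 1,000,000 logical reads, 25,000 of which went to disk -> 97.5% hit ratio,
# a hint that innodb_buffer_pool_size may be undersized for the working set
ratio = buffer_pool_hit_ratio(1_000_000, 25_000)
```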

Site Strangler – HTTP Smash
High Level Feature List (note that this application can be controlled by IRC Commander if your bot runs on the management server)

  1. Highly scalable HTTP load generation application for simulating high traffic
  2. Allows geographically distributed nodes to simulate global user-base traffic
  3. Enhanced job management via inter-process queue system
  4. Encrypted node communication via direct SOCKET protocol w/ key exchange
  5. Optional randomized query string url generation – simulate dynamic calls
  6. Multi-threaded operation on server and client
  7. Performance data reporting for url connection timing
  8. Configurable options for controlling total hit quantity across nodes
    • per-node thread concurrency
    • per-thread connect cycling
    • per-connection delay timing
    • optional randomized connection timing
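Feature 5 above, randomized query-string generation to defeat caches and exercise dynamic code paths, can be sketched in a few lines (names here are illustrative, not taken from the repo):

```python
import random
import string

def randomized_url(base_url: str, param: str = "cb", length: int = 12) -> str:
    """Append a random query-string value so each request looks unique to caches."""
    token = "".join(random.choices(string.ascii_lowercase + string.digits, k=length))
    sep = "&" if "?" in base_url else "?"
    return f"{base_url}{sep}{param}={token}"

# Each call yields a distinct URL, forcing the full dynamic request path
url = randomized_url("http://example.com/index.php")
```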