Dec 20, 2012

mongoDB Sharding

If I had to place some safe bets on the near future, I would choose two technologies: Hadoop and mongoDB.

There is huge demand for both technologies, and many players consider them a foundation for their future products.

Sharding?
Sharding was a major issue for large scale MySQL installations, and it is just the same for large mongoDB installations.

Back to Basics
mongo is pretty similar to a regular database, but it has two main advantages: 1) software engineers love it, as it can easily be used for object persistence, and 2) it supports unstructured objects (documents) that can easily store different objects based on the same virtual class.

mongoDB terms

  1. Database: equivalent to a database in a relational system.
  2. Collections: very similar to tables.
  3. Documents: very similar to rows. Yet, a document can be as flexible as a JSON document can be. For example, it may include 1 to many fields in the document itself.
  4. mongod: a mongoDB instance or shard.
  5. Chunk: a 64MB storage unit that stores documents.
  6. Config database: the directory that maps chunks to shards (used by the mongos query routers).
Why use sharding?
  1. Support large dataset using commodity servers.
  2. Support high IO requirements using commodity disks.
What are mongoDB sharding features?


  1. Range-based Data Partitioning: a method very similar to MySQL partitioning. You should choose one or more fields (the shard key) that sharding will be based on. Choose the shard key according to the business logic, like splitting according to account id in a SaaS application.
  2. Automatic Data Volume Distribution: mongoDB will take care of the shards balancing by itself according to the chosen shard key.
  3. Transparent Query Routing: when a query does not match the shard key, mongoDB takes care of scattering it to the multiple shards and gathering the results by itself (very much like Hadoop's map reduce).
Key Recommendations for mongoDB Sharding
  1. Sufficient Cardinality: choose a shard key that can be split later into more shards if the database gets too large (exceeds the chunk size).
  2. Uniform Distribution: choose a shard key that will spread data uniformly across shards, to avoid an unbalanced design.
  3. Distribute Write Operations: if you have a billing system, prefer to shard according to account id rather than shard according to billing month. Otherwise, in a given day, probably only a single shard will be used.
  4. Query according to the shard key: if your queries include the shard key, each query will be routed to a single shard. Otherwise, each will generate N queries (one per shard).
Technical Aspects for mongoDB Sharding
  1. Every sharded collection must have an index whose first fields are the shard key (use shardCollection for that).
  2. Chunk size default limit is 64MB
  3. When a chunk reaches this limit, mongoDB will split it in two.
  4. If chunks are not distributed uniformly, mongoDB will start migrating chunks between shards.
  5. The cluster balancer takes care of this process.
  6. Balancing can cause performance issues, and therefore can be restricted to off-peak hours (nights and weekends, for example) using balancing windows.
  7. The mapping of chunks to shards is saved in the config database.
  8. Replication should be considered as well, as a complementary method.
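To make the technical aspects concrete, here is a hedged sketch of the mongo shell commands involved in sharding a collection by account id. The database, collection and field names (billing, invoices, account_id) are illustrative assumptions, not from this post, and the commands would be run against a mongos router:

```shell
# Hypothetical sketch: mongo shell commands to shard a collection by account id.
# The names (billing, invoices, account_id) are illustrative only.
commands="$(cat <<'EOF'
sh.enableSharding("billing")
db.getSiblingDB("billing").invoices.ensureIndex({ account_id: 1 })
sh.shardCollection("billing.invoices", { account_id: 1 })
EOF
)"
# Print the commands; in practice, pipe them into: mongo --host <mongos-host>
echo "$commands"
```

Note that the index on the shard key must exist before shardCollection is called, matching recommendation 1 above.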
Bottom Line
mongoDB brings to the table an out-of-the-box sharding solution that can scale your operations. Now, you only need to analyze your needs and select the right solution for them.

Keep Performing,

Dec 13, 2012

MySQL Crash Course Presentation

In the last few weeks I lectured a MySQL crash course. The course topics covered almost everything needed for an initial ramp-up with MySQL: ERD, DDL, DML, installation, security, scaling, backup, schema design, tuning, master-slave and more...

The good news
I got very good feedback from the students, so I decided to share the presentation itself with you:




Keep Performing,
Moshe Kaplan

Dec 9, 2012

How to use rsync for high availability environments?

What if...
  • What if I have a large number of web servers and I need to deploy the same code on all of them?
  • What if I would like to enable high availability and redundancy for static user content such as images?
  • What if I want to back up files to a central storage?

A Swiss knife for static content replication
rsync has long been considered the best solution for static content and code replication in environments that consist of a large number of servers.

rsync has a simple protocol that replicates a directory (one or more) on a single server to other servers. This can be achieved in two different methods (over SSH, like SCP):

  • Push from the master to a slave: rsync [OPTION]... SRC [SRC] [USER@]HOST:DEST
  • Pull from the master by a slave: rsync [OPTION]... [USER@]HOST:SRC [DEST]

Can I perform a change on the destination directory?
Please note that the rsync protocol analyzes differences between two directories, and therefore will probably not fit cases where you want to change the content of the destination directory.

How should I authenticate?
Use one of the two options:

  1. Static user/password using sshpass for non-interactive SSH based authentication.
  2. PKI authentication using on-the-fly key generation or pre-generated keys.
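For the first option, here is a hedged sketch of what a non-interactive push could look like. The host, user and paths below are placeholders, not from the original post:

```shell
# Hypothetical sketch: a non-interactive rsync push using sshpass.
# HOST, RSYNC_USER and the paths are placeholders.
HOST="web01.example.com"
RSYNC_USER="deploy"
cmd="sshpass -p \"\$RSYNC_PASSWORD\" rsync -az /var/www/ $RSYNC_USER@$HOST:/var/www/"
# Print the command instead of running it, since it needs a live SSH target.
echo "$cmd"
```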
Master-Master replication
Like in MySQL, Master-Master replication can be achieved by setting up two Master-Slave connections. Please consider enabling only one of these connections at a time. Then, during a failover, disable the replication. Last, when you bring the master server back, enable the other replication.


Note: you may consider using OpenStack Storage for these purposes as well, as it provides an out-of-the-box solution for high availability and redundancy that easily supports multi-master setups.

Keep Performing,
Moshe Kaplan

Oct 4, 2012

How Amazon SES Mailbox Simulator Makes Your System Better?

Amazon's latest feature, Amazon SES Mailbox Simulator, is great!

With this feature you can send email to simulator addresses. Each address triggers a specific type of behavior, such as a successful acceptance, a hard bounce, an "out of office" auto responder or a complaint.

Why am I so excited about a feature that generates no direct business value to the end user?

At dSero, the Anti AdBlock Creators, we invest a lot in providing a high quality anti AdBlock solution to our clients in no time.
You can do that only if you totally automate your software development and, even more important, your quality processes.

Bottom Line
More automation and better coding enable you to provide a great product to your users at higher quality and lower cost.

Keep Performing,

Sep 5, 2012

How to Scale WordPress?

WordPress is so easy to install. Just 5 minutes and you have a working site...

But what happens when traffic increases and page-views soar?
That was the main subject of the presentation I gave at the first event of WP TLV (The Israeli WordPress Community). The event was hosted by Google and organized by dSero: The Anti Ad Block Creators (disclaimer: I'm a major stakeholder in this firm).

The answer can be found in the attached presentation:



Bottom Line
WordPress fits small sites and large sites alike. You just need to make the proper architecture decisions and WordPress will scale to your needs.

Keep Performing,
Moshe Kaplan

Aug 29, 2012

Who Takes Care of Non-Sharded Tables in Sharding?

The Challenge
As you may remember, Sharding is about splitting a database according to a key. Usually it is according to user id, client id or a phone number.
One of the major questions when we implement Sharding is what to do with the data dictionary tables: those tables that include static data that is used by all shards.

The Options
In general there are two options:

  1. Have a single copy of the data dictionary tables and ask the application to take care of the complex queries.
  2. Have multiple copies of the same table and create a data distribution solution that will update all of these table copies.


Data Dictionary/Static Tables Sharding Solution

The Elegant Solution
Create a new database that will serve as a master to all shards.
This database will include all data dictionary tables, and will serve as the single write instance for all data dictionary tables.
Once a record is updated in one of these data dictionary tables, it will instantly be replicated to all of the shards and become available.
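For example, if the data dictionary tables are grouped into a dedicated schema (say dict, an illustrative name, not from the original post), each shard can be set up as a slave of the dictionary master with a replication filter in its my.cnf. A minimal sketch, assuming MySQL shards:

```
[mysqld]
# replicate only the shared data dictionary schema from the dictionary master
replicate-do-db = dict
```

That way, each shard holds a read-only local copy of the dictionary tables, while all writes go through the single dictionary master.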

Bottom Line
Sharding may seem hard, but with simple and elegant solutions you can achieve your own scalable solution.

Keep Performing,
Moshe Kaplan

Jul 30, 2012

MySQL Optimization and Scheduled Maintenance

Once in a while it is worth taking care of your database and verifying it can provide the needed resources for your system.
After all, most of the data may be stored in it...


Some Tasks I would Recommend You to Do
Please notice that this work need not be done manually; it can be automated. If you do so, you may contribute it back to the community at your favorite open source project:

  1. Run MySQLTuner and look for issues.
  2. Take care of fragmented tables:
    1. You can find them using SHOW TABLE STATUS WHERE Data_free > 0\G and take care of them using OPTIMIZE TABLE table_name; 
    2. Or you can use mysqlfragfinder script for that.
  3. Verify RAM, disk and CPU utilization
  4. Check the slow query log
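The fragmented-tables step is a good automation candidate. A small sketch that builds the OPTIMIZE statements from a table list (the list is hardcoded here for illustration; in practice it would come from the SHOW TABLE STATUS query above):

```shell
# Sketch: emit OPTIMIZE TABLE statements for a list of fragmented tables.
# The list is hardcoded for illustration; normally it would be produced by
# querying: SHOW TABLE STATUS WHERE Data_free > 0
fragmented_tables="mydb.orders mydb.sessions"
statements=""
for t in $fragmented_tables; do
  statements="${statements}OPTIMIZE TABLE $t;
"
done
printf '%s' "$statements"
```

The emitted statements can then be piped into the mysql client as part of a scheduled maintenance job.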
Keep Performing

Jul 24, 2012

What is the Best File System for MySQL?

Well, the answer is: it depends.
Not sure? Take a look at this Q&A at ServerFault.

Keep Performing,
Moshe Kaplan

Jul 7, 2012

Good News from Amazon AWS for SEO and Newsletter Marketers

Amazon AWS' latest announcement of Multiple IP Addresses support for Amazon EC2 Instances is great news for SEO and newsletter marketers.

Why is this Good News?
Both SEO and newsletter marketers require a large number of dedicated IP addresses:

  1. Google tends to associate websites that are using the same IP address. Therefore, if a single site uses unethical SEO methods, you may want to avoid ranking damage to your other sites by hosting them on separate IP addresses. Another reason is that linking methods are considered more effective if they come from external sites.
  2. Major email suppliers around the world (Google's Gmail, Yahoo!, Yandex and Microsoft Hotmail) limit the number of emails per minute they accept from a single IP. Multiple IP addresses can help newsletter service providers better utilize their existing instances.
Bottom Line
Online services are getting better all the time. It's time to utilize them!

Keep Performing,

Mar 14, 2012

How to Setup MySQL Replication in 11 Easy Steps

I provided a link a few weeks ago to a great complete guide for MySQL replication. Yet, since then I got many requests to provide a short and easy guide for a new MySQL replication setup.

Well, Fasten Your Seat-belts
  1. Install a MySQL instance that will serve as a Master
  2. Install a MySQL instance that will serve as a Slave
  3. Configure the Master my.cnf file (located at /etc/ on CentOS) with the server id and the log file name:
    1. [mysqld]
    2. server-id = 1
    3. log-bin   = master-bin.log
  4. Configure the Slave my.cnf with the server id, relay logs and key configuration databases:
    1. server-id = 2
    2. relay-log-index = slave-relay-bin.index
    3. relay-log = slave-relay-bin
    4. replicate-wild-ignore-table=mysql.%
    5. replicate-wild-ignore-table=information_schema.%
    6. replicate-wild-ignore-table=performance_schema.%
  5. For MySQL 5.6: Delete the auto.cnf from your MySQL directory slave> rm -rf /var/lib/mysql/auto.cnf
  6. Restart the MySQL daemons on both servers to apply the my.cnf changes.
  7. If both servers were just installed, there is no need to sync their data files (since they should have the same data files). Otherwise, you should either stop the slave and master and copy the data files using SCP, or just perform a mysqldump. Notice! Before copying the files, get the Master location as described below.
  8. Get the master location, so we can sync the slave to it:
    1. master> FLUSH TABLES WITH READ LOCK;
    2. master> SHOW MASTER STATUS;
    3. +-------------------+----------+--------------+------------------+
    4. | File              | Position | Binlog_Do_DB | Binlog_Ignore_DB |
    5. +-------------------+----------+--------------+------------------+
    6. | master-bin.000013 |  1233453 |              |                  |
    7. +-------------------+----------+--------------+------------------+
    8. master> UNLOCK TABLES;
  9. Provide the slave with replication permissions:
    1. master> GRANT REPLICATION SLAVE ON *.* to `repl`@`%` IDENTIFIED BY 'slavepass';
    2. master> FLUSH PRIVILEGES;
  10. Setup the slave using the chosen user/password and the master location we found before:
    1. slave> CHANGE MASTER TO
    2.     ->     MASTER_HOST='10.17.16.253',
    3.     ->     MASTER_PORT=3306,
    4.     ->     MASTER_USER='repl',
    5.     ->     MASTER_PASSWORD='slavepass',
    6.     ->     MASTER_LOG_FILE='master-bin.000013',
    7.     ->     MASTER_LOG_POS=1233453;
  11. Now, start the slave and verify it's running: 
    1. slave> start slave;show slave status\G
  12. If everything is Okay, verify the replication really works by:
    1. Creating a table and inserting values to it in the master:
      1. master> CREATE DATABASE a;
      2. master> CREATE TABLE a.b (c int);
      3. master> INSERT INTO a.b (c) VALUES (1);
    2. Verifying that SLAVE/MASTER> SELECT * FROM a.b; returns the same values in the master and slave;
    3. Dropping the database in the master: master> DROP DATABASE IF EXISTS a;
P.S. If you are interested in implementing the replication using SSL, follow these instructions.

Bottom Line
Only a few minutes and you have a running MySQL replication. Even these kinds of tasks can be so simple and smooth... leaving you time to drink a cup of coffee!

Keep Performing,

Mar 6, 2012

Amazon Latest Price Reduction: News for Large Accounts and Linux based Users

Amazon announced today a price reduction on EC2 instance prices. The price reduction does not include storage or traffic, so you may expect the effective cost reduction on your monthly bill to be about half of the headline numbers.


On Demand Pricing Effects
Minor price reduction on the on demand instances: 5% on small instances, 10% on larger instances. Price Reduction Target: Traditional hosting offers.


Reserved Instances Pricing Effects
Major price reduction on the reserved instances: 20% effective reduction. Price Reduction Target: Traditional hosting offers.


Volume discounts
Finally, Amazon offers volume discounts. The volume discounts are focused on large accounts that spend at least $1M annually on AWS services. The volume discounts are focused on the reserved instances (the equivalent of traditional hosting). You can expect a 10% reduction (5% off the bill's bottom line) for a $250K upfront bill ($1M annual AWS bill) and a 20% reduction (10% off the bill's bottom line) for a $2M upfront bill ($8M annual AWS bill). Price Reduction Target: Traditional hosting offers.


Linux or Windows?
The price reduction increases the price differences between Windows based instances and Linux based ones. Price Reduction Target: Avoid price wars with Microsoft.


Why Amazon Made These Changes?
The price reduction signals that Amazon is under larger price pressure from Linux cloud providers than from Microsoft. It also signals that traditional hosting providers' offers focus on large customers and larger server accounts. These price reductions will help Amazon reduce the migration of larger customers out of its services.

Keep Performing,

Feb 13, 2012

MySQL Replication Complete Guide

This is probably my shortest post

Mats Kindahl's MySQL Replication guide is short, concise and includes (almost) everything you need.


Keep Performing,
Moshe Kaplan

Feb 11, 2012

Do You Need More Disk Space for the MySQL Server at Amazon Web Services (AWS/EC2)?

If you deal with large databases, you will probably need more than the provisioned storage of the default Amazon AMI (8GB).


UPDATE: Since in many cases you may need to swap between the current disk and a new larger one, I've added in green comments regarding this case.


The following step-by-step guide will help you expand your disk to meet your business needs:
  1. Create a new storage volume at the Amazon console (or using AWS API). Make sure it is big enough to accommodate storage, log files, bin files and future growth. Notice! verify the new drive is in the same availability zone as the target instance.
  2. Write down the device name you have chosen. For example: /dev/sdf.
  3. Format the new disk according to Chris Dean's post:
    1. sudo mkfs.ext4 /dev/sdf, where /dev/sdf is the device name you chose.
  4. Shutdown the MySQL daemon: sudo /etc/init.d/mysql stop
  5. Unmount the current disk umount /dev/sdf
  6. Move your current MySQL data directory to a temporary directory: sudo mv /var/lib/mysql /var/lib/mysql.old, or mount the previous disk to mysql.old by creating a new folder and changing the /etc/fstab as described below.
  7. Recreate the MySQL data directory and provide the needed permissions:
    1. sudo mkdir /var/lib/mysql
    2. sudo chown mysql:mysql  /var/lib/mysql
  8. Mount the new disk to the new directory:
    1. Add another line to /etc/fstab (sudo vi /etc/fstab): /dev/sdf /var/lib/mysql ext4 defaults 1 2
    2. Mount the disk: sudo mount /var/lib/mysql
    3. Verify everything is Okay with df
  9. Move the files from the temporary directory back to the new data directory sudo mv /var/lib/mysql.old/* /var/lib/mysql
  10. Remove the old directory: sudo rmdir /var/lib/mysql.old and unmount the old disk umount /dev/sdf
  11. Restart your MySQL: sudo /etc/init.d/mysql start
Note:
xfs is the new kid on the block these days, with better performance on SSD disks, and it is already stable.
Use the following commands if you choose it:
sudo apt-get install -y xfsprogs
sudo mkfs.xfs -f /dev/xvdf
vi /etc/fstab
/dev/xvdf /var/lib/mysql/ xfs defaults 0 2

Bottom Line
A combination of cloud knowledge, Linux sysadmin, MySQL expertise and careful work will help you go through the scale and growth challenges.

Keep Performing,

Feb 5, 2012

MySQL Replication Deadlock Detection and Recovery (The code inside)

The best practice for handling deadlocks (after eliminating them from the code) is recovering from them. Unfortunately, if you have ever come across the "SHOW SLAVE STATUS\G" Error 'Deadlock found when trying to get lock: try restarting transaction', you probably know that MySQL Replication does not recover automatically from deadlocks.


Therefore, I implemented a Nagios plugin. This Nagios plugin detects a MySQL Replication deadlock and recovers from it by restarting the slave ("STOP SLAVE; START SLAVE;").


Technical
Brief: this is a shell script (/bin/sh) that connects to a specified MySQL slave database, checks the replication status for deadlock errors and automatically recovers from this state by performing "STOP SLAVE; START SLAVE;".
Required parameters: host, user, password.
If no deadlock is detected, "OK" is returned.
Otherwise, a message is printed, the slave is restarted and a Warning state is returned.
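The plugin's core detection logic can be sketched in a few lines of shell. The slave status line is hardcoded below so the sketch is self-contained; the real script obtains it from the server using the host, user and password parameters:

```shell
# Hypothetical sketch of the plugin's detection logic.
# slave_status is hardcoded here; the real plugin would run:
#   mysql -h "$HOST" -u "$USER" -p"$PASS" -e "SHOW SLAVE STATUS\G"
slave_status="Last_Error: Deadlock found when trying to get lock: try restarting transaction"
if echo "$slave_status" | grep -q "Deadlock found"; then
  # The real plugin would also run: mysql ... -e "STOP SLAVE; START SLAVE;"
  result="WARNING: replication deadlock detected, slave restarted"
else
  result="OK"
fi
echo "$result"
```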


Bottom Line
Fast, simple, very effective, and now available from Nagios Exchange.


Download the MySQL Replication Deadlock Recovery Nagios Plugin.

Suggestions are welcome,


Keep Performing,
Moshe Kaplan

Jan 5, 2012

MySQL and Lucene: Server Side Cursors or Client Side Cursors?

What is the Default?
Client Side Cursors: this method returns the whole result set to the client upon a SQL request.


What is Faster?
Client Side Cursors: once you get the result set back to the client, you can manipulate it and read through it with minimal communication with the server.


So Why Should I Choose Server Side Cursors?
If you have a very large result set (say 100GB and millions of records), it might not be wise to fetch the whole result set and only then start manipulating it. Lucene indexing is one of the most common use cases for this.
Server Side Cursors are the solution in this case. Instead of fetching the whole result set to the client, the result set is stored in an in-memory table on the server. Upon request, the server returns a defined number of rows to the client. That way the client can start processing the results while further fetching is done over the network.


What are the Limitations?

  1. Server Side Cursors support forward-only result sets (in JDBC, for example, you must set fetchSize="NUMBER" and resultSetType="FORWARD_ONLY").
  2. MySQL server memory: this method requires a lot of memory. Fortunately, if you don't have enough memory, MySQL will automatically use a MyISAM table to store the temporary table.

How should I Configure It?
Client Side Cursors need zero configuration, since this is the original behavior of MySQL.
The Server Side Cursors feature was added in MySQL 5.0 and is nowadays considered stable. Yet, you will have to enable it in your connection definition (useCursorFetch) and set a positive fetch size while fetching.


Can I Use Them Both on the Same Connection Pool?
Yes. Server Side Cursors are used only if two conditions are met: 1) Server Side Cursors are enabled (useCursorFetch) and 2) setFetchSize is positive (>0). Therefore, with useCursorFetch enabled, you can use Server Side Cursors by setting setFetchSize to a positive value, and keep using Client Side Cursors by setting the fetch size to 0. If you are tired of setting setFetchSize to 0 every time, just set defaultFetchSize to 0.
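For example, a Connector/J connection URL that enables Server Side Cursors might look like this (host, port and database name are placeholders):

```
jdbc:mysql://db.example.com:3306/mydb?useCursorFetch=true&defaultFetchSize=0
```

With this URL, statements keep using Client Side Cursors by default (fetch size 0), and switch to a Server Side Cursor whenever setFetchSize is set to a positive value.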


Bottom Line
Now, you have the tools and you can better use MySQL for your cloud highly scalable system.


Keep Performing,
Moshe Kaplan
