Friday, February 16, 2018

A Tale of Three Computer Conferences, Two Communities

Three conferences in three weeks! FOSDEM, SunshinePHP, and PHP UK are three excellent conferences that this year fell back to back to back.

FOSDEM is to the computer world what Renaissance fairs are to those who have their own maces and armor.  FOSDEM is held on the campus of the Free University of Brussels; there is no registration -- pre, onsite, or post -- and the organizers estimate attendance by the MAC addresses of devices that connect to the network.  No tickets, no badges, and no reserved seats, but FOSDEM is free to attend.  Rooms are requested by various groups, including my MySQL Community Team partner LeFred for the MySQL ecosystem. The MySQL and Friends Devroom was packed from early morning to evening with engaging 30-minute presentations from a number of companies.  This show has become one of the most important technical shows on the MySQL Community Team schedule in the last few years. LeFred and the presenters did a tremendous job of putting together amazing talks for the MySQL Community.

SunshinePHP is held in Miami and organized by the amazing Adam Culp.  He and his team have a knack for pulling fantastic talks together into a great show.  Be advised that this is a show where you can go from the airport to the hotel for the conference and then return to the airport at the end without ever leaving the venue.  I spoke on MySQL 8 and received a lot of feedback that I used to update my presentation for the next show.

And the next show is PHP UK.  The PHP Community is very strong, supportive, and energized by the new advancements in the PHP 7 series.  As with SunshinePHP, the PHP folks are warm, welcoming, and invigorated.  The organizers of the London show have also assembled a talented group of presenters, and I seem to be the only carryover from the previous show with my talk on MySQL 8.

A Comparison of the Communities


The MySQL and PHP communities are roughly the same age.  Both are now confident twenty-year-olds with plenty of recent self-improvement.  PHP 7 is light years ahead of the four and five series in speed and capabilities.  MySQL is about to take a giant step with MySQL 8.  Both had version sixes that never quite made it into production, but the subsequent engineering has produced much stronger products.  Both face competition from newer products but still dominate the modern implementation of the LAMP stack.  And the two products have strong communities working hard to improve them.

The PHP community is much better than its counterpart at aiding novices, mentoring, and stressing the basics of good coding style.  Many members have had to add JavaScript skills of one sort or another in recent years but still try to keep PHP as their core tool.  And there are many more local PHP user groups than MySQL user groups.

Next Up

I will be talking to the San Diego PHP User Group before heading to the Southern California Linux Expo. More on those shows later.

Monday, January 29, 2018

MySQL 8 Windowing Functions

Windowing functions are a way to group rows of data for analysis. This provides a 'window' to look at only the relevant data.  Plus there are about a dozen supporting functions to take advantage of all this. There is some logical carryover from the aggregate (or GROUP BY) functions, but windowing functions open up some easy ways to dig statistically into your data.

Now for a contrived example.


mysql> Select row_number() over w as '#', 
       Name, Continent, Population, 
       sum(Population) over w as 'Pop' 
       from country where continent='South America' 
       window w as (partition by Continent Order by Continent);                                                                                
+----+------------------+---------------+------------+-----------+
| #  | Name             | Continent     | Population | Pop       |
+----+------------------+---------------+------------+-----------+
|  1 | Argentina        | South America |   37032000 | 345780000 |
|  2 | Bolivia          | South America |    8329000 | 345780000 |
|  3 | Brazil           | South America |  170115000 | 345780000 |
|  4 | Chile            | South America |   15211000 | 345780000 |
|  5 | Colombia         | South America |   42321000 | 345780000 |
|  6 | Ecuador          | South America |   12646000 | 345780000 |
|  7 | Falkland Islands | South America |       2000 | 345780000 |
|  8 | French Guiana    | South America |     181000 | 345780000 |
|  9 | Guyana           | South America |     861000 | 345780000 |
| 10 | Peru             | South America |   25662000 | 345780000 |
| 11 | Paraguay         | South America |    5496000 | 345780000 |
| 12 | Suriname         | South America |     417000 | 345780000 |
| 13 | Uruguay          | South America |    3337000 | 345780000 |
| 14 | Venezuela        | South America |   24170000 | 345780000 |
+----+------------------+---------------+------------+-----------+
14 rows in set (0.00 sec)


In the above example, we created a window named 'w' to allow us to 'peek' at the data arranged by Continent. I am cheating here by only looking at the continent of South America. You will get a much better look at all this if you remove the WHERE continent='South America' from the query, but I use it here for brevity.

And I used two window functions on the data from that window.  The ROW_NUMBER() function provides a nice way to do exactly what its name says.  And the SUM() function adds up the values in the Population column.
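
ROW_NUMBER() and SUM() are only two of the supporting functions. As a quick sketch of my own (same world database assumed, output omitted), RANK() can be pointed at a named window the same way, here ranking the countries by population:

mysql> Select Name, Population, 
       rank() over w as 'rank' 
       from country where continent='South America' 
       window w as (Order by Population desc);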

We can even set up ranks, or buckets, to divide up the data.



mysql> Select row_number() over w as '#', Name, Population, 
      ntile(5) over w as 'tile', sum(Population) over w as 'Pop' 
      from country where continent='South America' 
      window w as (partition by Continent Order by Continent);
+----+------------------+------------+------+-----------+
| #  | Name             | Population | tile | Pop       |
+----+------------------+------------+------+-----------+
|  1 | Argentina        |   37032000 |    1 | 345780000 |
|  2 | Bolivia          |    8329000 |    1 | 345780000 |
|  3 | Brazil           |  170115000 |    1 | 345780000 |
|  4 | Chile            |   15211000 |    2 | 345780000 |
|  5 | Colombia         |   42321000 |    2 | 345780000 |
|  6 | Ecuador          |   12646000 |    2 | 345780000 |
|  7 | Falkland Islands |       2000 |    3 | 345780000 |
|  8 | French Guiana    |     181000 |    3 | 345780000 |
|  9 | Guyana           |     861000 |    3 | 345780000 |
| 10 | Peru             |   25662000 |    4 | 345780000 |
| 11 | Paraguay         |    5496000 |    4 | 345780000 |
| 12 | Suriname         |     417000 |    4 | 345780000 |
| 13 | Uruguay          |    3337000 |    5 | 345780000 |
| 14 | Venezuela        |   24170000 |    5 | 345780000 |
+----+------------------+------------+------+-----------+
14 rows in set (0.00 sec)
mysql>

Here we used NTILE(5) to divide the results into five buckets.  By the way, change the 5 to a 4 and you have quartiles, or to 100 for percentiles.  This does not really mean much statistically since the countries are arranged alphabetically.

So let's make this a little more statistically meaningful. Let's look at the population of South America with the largest countries first and broken into quartiles.


mysql> Select row_number() over w as '#', Name, Population, ntile(4) over w as 'tile', 
       sum(Population) over w as 'Pop' 
       from country where continent='South America' 
       window w as (partition by Continent Order by Population desc);
+----+------------------+------------+------+-----------+
| #  | Name             | Population | tile | Pop       |
+----+------------------+------------+------+-----------+
|  1 | Brazil           |  170115000 |    1 | 170115000 |
|  2 | Colombia         |   42321000 |    1 | 212436000 |
|  3 | Argentina        |   37032000 |    1 | 249468000 |
|  4 | Peru             |   25662000 |    1 | 275130000 |
|  5 | Venezuela        |   24170000 |    2 | 299300000 |
|  6 | Chile            |   15211000 |    2 | 314511000 |
|  7 | Ecuador          |   12646000 |    2 | 327157000 |
|  8 | Bolivia          |    8329000 |    2 | 335486000 |
|  9 | Paraguay         |    5496000 |    3 | 340982000 |
| 10 | Uruguay          |    3337000 |    3 | 344319000 |
| 11 | Guyana           |     861000 |    3 | 345180000 |
| 12 | Suriname         |     417000 |    4 | 345597000 |
| 13 | French Guiana    |     181000 |    4 | 345778000 |
| 14 | Falkland Islands |       2000 |    4 | 345780000 |
+----+------------------+------------+------+-----------+
14 rows in set (0.00 sec)

Now notice the Pop column: since the window is now ordered by Population, it suddenly becomes a very useful running total.
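
Why the running total? When the window definition has an ORDER BY, the default frame runs from the first row of the partition up to the current row, so the SUM() accumulates as it goes. As a sketch of my own (same data, output omitted), the frame can also be spelled out explicitly:

mysql> Select Name, Population, 
       sum(Population) over (Order by Population desc 
           rows between unbounded preceding and current row) as 'Running Pop' 
       from country where continent='South America';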

I have only touched on a few of the new functions that support windowing, but there is much more of interest here.

Monday, January 8, 2018

Common Table Expressions (CTEs) Part 1

Occasionally at a conference or a meetup, someone will approach me and ask for help with a MySQL problem.  Eight out of ten times their difficulty involves a sub query. "I get an error message about a corrugated or conflabugated sub query or some such," they say, desperate for help.  Usually with a bit of fumbling we can get their problem solved.  The problem is not a lack of knowledge on either of our parts but that sub queries are often hard to write.

MySQL 8 will be the first version of the most popular database on the web with Common Table Expressions, or CTEs.  CTEs are a way to create a temporary table and then use that temporary table for queries. Think of them as easy-to-write sub queries!

WITH is The Magic Word

The new CTE magic is indicated with the WITH clause.

mysql> WITH myfirstCTE 
      AS (SELECT * FROM world.city WHERE CountryCode='USA')
SELECT Name, District
FROM myfirstCTE 
ORDER BY Name
LIMIT 5;
+-------------+------------+
| Name        | District   |
+-------------+------------+
| Abilene     | Texas      |
| Akron       | Ohio       |
| Albany      | New York   |
| Albuquerque | New Mexico |
| Alexandria  | Virginia   |
+-------------+------------+
5 rows in set (0.01 sec)

mysql> 

So in the above example we create a temporary table named 'myfirstCTE' and then query from that table. The CTE itself is a very simple query, SELECT * FROM world.city WHERE CountryCode='USA'. And then two columns are plucked out of 'myfirstCTE'. The CTE can even have modifiers on the query for ORDER BY, LIMIT, or any other SQL qualifier.
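
For example, here is a sketch of my own (same world database assumed) of a CTE that carries its own ORDER BY and LIMIT, pre-filtering to the five most populous US cities before the outer query ever touches them:

mysql> WITH bigUSACities AS 
      (SELECT Name, District, Population FROM world.city 
       WHERE CountryCode='USA' 
       ORDER BY Population DESC LIMIT 5)
SELECT Name, District FROM bigUSACities;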

A Little More Complex Example 


Let's try a slightly more complex query.

mysql> WITH secondCTE AS 
     (SELECT city.Name AS A, country.Name as B, city.District AS C 
     FROM city 
     JOIN country on (city.CountryCode=country.Code)) 
     SELECT A, B, C FROM secondCTE LIMIT 5;
+----------------+-------------+----------+
| A              | B           | C        |
+----------------+-------------+----------+
| Oranjestad     | Aruba       | –        |
| Kabul          | Afghanistan | Kabol    |
| Qandahar       | Afghanistan | Qandahar |
| Herat          | Afghanistan | Herat    |
| Mazar-e-Sharif | Afghanistan | Balkh    |
+----------------+-------------+----------+
5 rows in set (0.04 sec)

mysql> 

Here we select columns A, B, and C from the temporary table 'secondCTE'. In this case it is easier to see that we are getting data from the CTE and not from the two tables city or country.  At least not directly.

The reason I use this example is that oftentimes you have to join two or more tables where columns with the same name come from different sources, here the name of the country and the name of the city.  Yes, you could alias them in a sub query, but the CTE version is more legible.  And as query complexity builds, things like the old N+1 problems rear their ugly heads.
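
For comparison, here is a sketch of roughly the same query written as a derived table (sub query) instead of a CTE; the aliasing ends up buried inside the FROM clause, which is exactly what the CTE version avoids:

mysql> SELECT A, B, C FROM 
      (SELECT city.Name AS A, country.Name AS B, city.District AS C 
       FROM city 
       JOIN country ON (city.CountryCode=country.Code)) AS secondSub 
      LIMIT 5;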

More Than One Way to Skin a Cat 

So let's play again with the information from the second example, but this time we will write two separate CTEs and then join them together.  First we are going to get the Name column from the city table and the Name column from the country table. We do have to drag along the columns needed for the join in the CTEs.  I think that for many with limited SQL experience this version is a little easier to comprehend than the second example.

mysql> WITH threeA AS (SELECT Name AS N1, CountryCode AS x FROM city),
    -> threeB AS (SELECT Name AS N2, Code as y FROM country)
    -> SELECT N1, N2 FROM threeA JOIN threeB WHERE threeA.x = threeB.y LIMIT 5;
+----------------+-------------+
| N1             | N2          |
+----------------+-------------+
| Oranjestad     | Aruba       |
| Kabul          | Afghanistan |
| Qandahar       | Afghanistan |
| Herat          | Afghanistan |
| Mazar-e-Sharif | Afghanistan |
+----------------+-------------+
5 rows in set (0.01 sec)

mysql> 
 
Hopefully the use of CTEs will produce easier-to-read SQL statements.

Tuesday, January 2, 2018

Two New MySQL Books!

There are two new MySQL books, both from Apress. One is an in-depth master course on the subject and the other is a quick introduction.


Pro MySQL NDB Cluster is subtitled Master the MySQL Cluster Lifecycle and at nearly 700 pages it is a vital resource for anyone who runs or is thinking about running NDB Cluster. The authors, Jesper Wisborg Krogh and Mikiya Okuno, have distilled their vast knowledge of this difficult subject into a detail-packed but easily readable book.  MySQL Cluster is much more complex in many areas than a regular MySQL server, and here you will find all those details. If you run MySQL NDB Cluster then you need this book. The partitioning information in chapter 2 is worth the price of the book alone.  I am only a third of the way through the book and have found it as clear and concise as any technical book I have read, and it is actually an easy read.

MariaDB and MySQL Common Table Expressions and Window Functions Revealed by Daniel Bartholomew is a slender introduction to CTEs and window functions.  If you were raised on MySQL and do not know either subject well, then I highly recommend this book.  CTEs are going to have a big impact on the way developers write sub queries and may cause self joins to become extinct.  Windowing functions will bring a new wave of analytical capability to MySQL. This book is just over 100 pages and has useful examples for novices in either area.

More books! Well yes, there are more MySQL books in the works so save your pocket change in order to buy them when they appear.

Tuesday, December 5, 2017

JSON_TABLE Part 2

The JSON_TABLE function has some very interesting uses. JSON data is great for schemaless jobs, but what about when you need to pretend you have a schema and/or need to create tables based on values in a JSON document?

Existence and Defaults

Let's start with some simple data:

mysql> SELECT * FROM t1;
+-----+--------------------------+
| _id | doc                      |
+-----+--------------------------+
|   1 | {"x": 0, "name": "Bill"} |
|   2 | {"x": 1, "name": "Mary"} |
|   3 | {"name": "Pete"}         |
+-----+--------------------------+
3 rows in set (0.00 sec)

We have three documents, and you will notice that the third record is missing an 'x' key/value pair. We can use JSON_TABLE to provide a value when that value is missing. For this example a missing 'x' is given a value of 999.

mysql> select * from t1, json_table(doc,"$" columns 
   (xHasValue int path "$.x" default '999' on empty, 
    hasname char(5) exists path "$.name", 
    mojo char(5) exists path "$.mojo")) as t2;
+-----+--------------------------+-----------+---------+------+
| _id | doc                      | xHasValue | hasname | mojo |
+-----+--------------------------+-----------+---------+------+
|   1 | {"x": 0, "name": "Bill"} |         0 | 1       | 0    |
|   2 | {"x": 1, "name": "Mary"} |         1 | 1       | 0    |
|   3 | {"name": "Pete"}         |       999 | 1       | 0    |
+-----+--------------------------+-----------+---------+------+
3 rows in set (0.00 sec)

mysql> 

Do we have that data?

We can also use the EXISTS qualifier to test the existence of a key. The last two lines in the query use EXISTS to check for a name, which does exist and so reports a '1', and for mojo, which does not exist and so reports a '0'. We can of course use these binary fields in our query.

mysql> select * from t1, json_table(doc,"$" columns 
   (xHasValue int path "$.x" default '999' on empty, 
    hasname char(5) exists path "$.name", 
    mojo char(5) exists path "$.mojo")) as t2 
    WHERE hasname=1 and xHasValue=1;
+-----+--------------------------+-----------+---------+------+
| _id | doc                      | xHasValue | hasname | mojo |
+-----+--------------------------+-----------+---------+------+
|   2 | {"x": 1, "name": "Mary"} |         1 | 1       | 0    |
+-----+--------------------------+-----------+---------+------+
1 row in set (0.01 sec)

mysql> 

Monday, December 4, 2017

JSON_TABLE

JSON data is a wonderful way to store data without needing a schema but what about when you have to yank that data out of the database and apply some sort of formatting to that data?  Well, then you need JSON_TABLE.

JSON_TABLE takes free-form JSON data and applies some formatting to it.  For this example we will use the world_x sample database's countryinfo table.  What is desired is the name of the country and the year of independence, but only for the years after 1992.  Sounds like a SQL query against JSON data, right? Well, that is exactly what we are doing.

We tell the MySQL server that we are going to take the values of the $.Name and $.IndepYear keys from the JSON-formatted doc column in the table, format them into a string and an integer respectively, and alias each key's value to a table column name that we can use for qualifiers in an SQL statement.

mysql> select country_name, IndyYear from countryinfo, 
       json_table(doc,"$" columns (country_name char(20) path "$.Name", 
       IndyYear int path "$.IndepYear")) as stuff 
       where IndyYear > 1992;
+----------------+----------+
| country_name   | IndyYear |
+----------------+----------+
| Czech Republic |     1993 |
| Eritrea        |     1993 |
| Palau          |     1994 |
| Slovakia       |     1993 |
+----------------+----------+
4 rows in set, 67 warnings (0.00 sec)


mysql>

So what else can JSON_TABLE do? How about default values for missing values? Or checking that a key exists in a document? More on that next time. For now, if you want to try MySQL 8.0.3 with JSON_TABLE, you need to head to Labs.MySQL.COM to test this experimental feature.

Tuesday, November 7, 2017

MySQL Queries No Waiting

Last time we looked at SKIP LOCKED, where rows locked by another process are skipped. NOWAIT informs the server to return immediately if the desired rows cannot be locked.

How To Use NOWAIT

Start a connection to your MySQL 8 server, start a transaction, and make a query to lock up part of the data.
mysql>START TRANSACTION;
mysql>SELECT * FROM city WHERE countryCode = 'USA' FOR UPDATE;

On a second connection, start a transaction and issue a SELECT query with NOWAIT.
mysql>START TRANSACTION;
mysql>SELECT * FROM city WHERE countryCode = 'USA' FOR UPDATE NOWAIT;

The second connection will get a message saying 'statement aborted because lock(s) could not be acquired immediately and NOWAIT is set'.
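
By contrast, the SKIP LOCKED modifier from the last post does not raise an error; it simply leaves out the locked rows. Since the first connection has locked every matching row in this example, a sketch of the same query with SKIP LOCKED should come back with an empty set rather than an error:

mysql>SELECT * FROM city WHERE countryCode = 'USA' FOR UPDATE SKIP LOCKED;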

So if you can come back later to lock the records, or just cannot wait around for your lock to be granted, NOWAIT is the keyword to add to your queries.