------------------------------------------------------------
revno: 5016
tags: clone-mysql-5.1.66-ndb-7.0.36-src-build
committer: Maitrayi Sabaratnam <maitrayi.sabaratnam@oracle.com>
branch nick: mysql-5.1-telco-7.0-bugfix
timestamp: Thu 2012-11-01 20:13:39 +0100
message:
  BUG#11761263: fix code
------------------------------------------------------------
revno: 5015
committer: Maitrayi Sabaratnam <maitrayi.sabaratnam@oracle.com>
branch nick: mysql-5.1-telco-7.0-bugfix
timestamp: Wed 2012-10-31 16:02:54 +0100
message:
  Bug#11761263 - NDBD STDOUT LOG SHOULD TRACK ARBITRATOR
------------------------------------------------------------
revno: 5014 [merge]
committer: Maitrayi Sabaratnam <maitrayi.sabaratnam@oracle.com>
branch nick: mysql-5.1-telco-7.0-bugfix
timestamp: Wed 2012-10-31 11:15:15 +0100
message:
  Merge 6.3->7.0
    ------------------------------------------------------------
    revno: 2585.188.19
    tags: clone-mysql-5.1.66-ndb-6.3.50-src-build
    committer: Maitrayi Sabaratnam <maitrayi.sabaratnam@oracle.com>
    branch nick: mysql-5.1-telco-6.3-bugfix
    timestamp: Wed 2012-10-31 10:17:12 +0100
    message:
      Bug#14798432 - TIMESLICE DUMPING FRAGMENTION INFO WHEN THERE ARE MANY TABLES.
------------------------------------------------------------
revno: 5013
committer: Pekka Nousiainen <pekka.nousiainen@oracle.com>
branch nick: ms-wl5929-70
timestamp: Tue 2012-10-30 12:44:50 +0200
message:
  wl#5929 sp_marker-x4.diff
  DBTC LqhTransConf::Marker has no table/frag
------------------------------------------------------------
revno: 5012 [merge]
committer: Frazer Clement <frazer.clement@oracle.com>
branch nick: mysql-5.1-telco-7.0
timestamp: Tue 2012-10-30 00:30:30 +0000
message:
  Merge 6.3->7.0
    ------------------------------------------------------------
    revno: 2585.188.18
    committer: Frazer Clement <frazer.clement@oracle.com>
    branch nick: mysql-5.1-telco-6.3
    timestamp: Mon 2012-10-29 18:34:05 +0000
    message:
      Bug #14828998 	NDB : SLOW FILESYSTEM CAN CAUSE DIH FILE PAGE EXHAUSTION
      
      Limit number of concurrent table definition updates that DIH can issue.
      This avoids a slow filesystem exerting pressure on DIH File page buffers which
      can lead to a crash if they're exhausted.
      
      Add a testcase showing the crash behaviour using error insert and 
      showing that it fixes the problem.
------------------------------------------------------------
revno: 5011
committer: Pekka Nousiainen <pekka.nousiainen@oracle.com>
branch nick: ms-wl5929-70
timestamp: Sat 2012-10-27 13:03:43 +0300
message:
  wl#5929 sp_marker-x3.diff
  use instance number (not key) in c-a-m list
------------------------------------------------------------
revno: 5010
committer: Pekka Nousiainen <pekka.nousiainen@oracle.com>
branch nick: ms-wl5929-70
timestamp: Wed 2012-10-24 13:05:26 +0300
message:
  wl#5929 sp_marker-x2.diff
  handle c-a-m databuffer exhaustion
------------------------------------------------------------
revno: 5009
committer: Pekka Nousiainen <pekka.nousiainen@oracle.com>
branch nick: ms-wl5929-70
timestamp: Tue 2012-10-23 14:06:05 +0300
message:
  wl#5929 sp_marker-x1.diff
  avoid duplicate node/instance in c-a-m list
------------------------------------------------------------
revno: 5008 [merge]
committer: Martin Skold <Martin.Skold@oracle.com>
branch nick: mysql-5.1-telco-7.0
timestamp: Mon 2012-10-22 12:48:20 +0200
message:
  Merge from 6.3 (5.1.66)
    ------------------------------------------------------------
    revno: 2585.188.17
    committer: Martin Skold <Martin.Skold@oracle.com>
    branch nick: mysql-5.1-telco-6.3
    timestamp: Mon 2012-10-22 12:08:12 +0200
    message:
      Ignore C4090 warnings (Windows)
    ------------------------------------------------------------
    revno: 2585.188.16
    committer: Martin Skold <Martin.Skold@oracle.com>
    branch nick: mysql-5.1-telco-6.3
    timestamp: Mon 2012-10-22 12:07:15 +0200
    message:
      Regenerated result
    ------------------------------------------------------------
    revno: 2585.188.15 [merge]
    committer: Martin Skold <Martin.Skold@oracle.com>
    branch nick: mysql-5.1-telco-6.3
    timestamp: Mon 2012-10-22 11:11:12 +0200
    message:
      Merged in 5.1.66
        ------------------------------------------------------------
        revno: 2555.937.228
        tags: clone-5.1.66-build
        committer: Tor Didriksen <tor.didriksen@oracle.com>
        branch nick: 5.1
        timestamp: Wed 2012-09-05 17:40:13 +0200
        message:
          Bug#13734987 MEMORY LEAK WITH I_S/SHOW AND VIEWS WITH SUBQUERY
          
          In fill_schema_table_by_open(): free item list before restoring active arena.
        ------------------------------------------------------------
        revno: 2555.937.227
        committer: Annamalai Gurusami <annamalai.gurusami@oracle.com>
        branch nick: mysql-5.1
        timestamp: Mon 2012-09-03 11:33:05 +0530
        message:
          The test case result file must not depend on the page size used.
          So removing the maximum row size from the error message and 
          replacing it with text. 
        ------------------------------------------------------------
        revno: 2555.937.226
        committer: Annamalai Gurusami <annamalai.gurusami@oracle.com>
        branch nick: mysql-5.1
        timestamp: Fri 2012-08-31 15:42:00 +0530
        message:
          Bug #13453036 ERROR CODE 1118: ROW SIZE TOO LARGE - EVEN 
          THOUGH IT IS NOT.
          
          The following error message is misleading because it claims 
          that the BLOB space is not counted.  
          
          "ERROR 1118 (42000): Row size too large. The maximum row size for 
          the used table type, not counting BLOBs, is 8126. You have to 
          change some columns to TEXT or BLOBs"
          
          When the ROW_FORMAT=compact or ROW_FORMAT=REDUNDANT is used,
          the BLOB prefix is stored inline along with the row.  So 
          the above error message is changed as follows depending on
          the row format used:
          
          For ROW_FORMAT=COMPRESSED or ROW_FORMAT=DYNAMIC, the error
          message is as follows:
          
          "ERROR 42000: Row size too large (> 8126). Changing some
          columns to TEXT or BLOB may help. In current row format, 
          BLOB prefix of 0 bytes is stored inline."
          
          For ROW_FORMAT=COMPACT or ROW_FORMAT=REDUNDANT, the error
          message is as follows:
          
          "ERROR 42000: Row size too large (> 8126). Changing some
          columns to TEXT or BLOB or using ROW_FORMAT=DYNAMIC or 
          ROW_FORMAT=COMPRESSED may help. In current row
          format, BLOB prefix of 768 bytes is stored inline."
          
          rb://1252 approved by Marko Makela
        ------------------------------------------------------------
        revno: 2555.937.225
        committer: Marko M?kel? <marko.makela@oracle.com>
        branch nick: mysql-5.1
        timestamp: Fri 2012-08-31 09:51:27 +0300
        message:
          Add forgotten have_debug.inc to a test.
        ------------------------------------------------------------
        revno: 2555.937.224
        committer: Marko M?kel? <marko.makela@oracle.com>
        branch nick: mysql-5.1
        timestamp: Thu 2012-08-30 21:53:41 +0300
        message:
          Bug#14554000 CRASH IN PAGE_REC_GET_NTH_CONST(NTH=0) DURING COMPRESSED
          PAGE SPLIT
          
          page_rec_get_nth_const(): Map nth==0 to the page infimum.
          
          btr_compress(adjust=TRUE): Add a debug assertion for nth>0. The cursor
          should never be positioned on the page infimum.
          
          btr_index_page_validate(): Add test instrumentation for checking the
          return values of page_rec_get_nth_const() during CHECK TABLE, and for
          checking that the page directory slot 0 always contains only one
          record, the predefined page infimum record.
          
          page_cur_delete_rec(), page_delete_rec_list_end(): Add debug
          assertions guarding against accessing the page slot 0.
          
          page_copy_rec_list_start(): Clarify a comment about ret_pos==0.
          
          rb:1248 approved by Jimmy Yang
        ------------------------------------------------------------
        revno: 2555.937.223
        committer: Marko M?kel? <marko.makela@oracle.com>
        branch nick: mysql-5.1
        timestamp: Thu 2012-08-30 21:49:24 +0300
        message:
          Bug#14547952: DEBUG BUILD FAILS ASSERTION IN RECORDS_IN_RANGE()
          
          ha_innodb::records_in_range(): Remove a debug assertion
          that prohibits an open range (full table).
          
          The patch by Jorgen Loland only removed the assertion from the
          built-in InnoDB, not from the InnoDB Plugin.
        ------------------------------------------------------------
        revno: 2555.937.222
        committer: Jorgen Loland <jorgen.loland@oracle.com>
        branch nick: mysql-5.1
        timestamp: Tue 2012-08-28 14:51:01 +0200
        message:
          Bug#14547952: DEBUG BUILD FAILS ASSERTION IN RECORDS_IN_RANGE()
          
          ha_innobase::records_in_range(): Remove a debug assertion
          that prohibits an open range (full table).
          This assertion catches unnecessary calls to this method, 
          but such calls are not harming correctness.
        ------------------------------------------------------------
        revno: 2555.937.221
        committer: Marko M?kel? <marko.makela@oracle.com>
        branch nick: mysql-5.1
        timestamp: Tue 2012-08-21 10:47:17 +0300
        message:
          Fix regression from Bug#12845774 OPTIMISTIC INSERT/UPDATE USES WRONG
          HEURISTICS FOR COMPRESSED PAGE SIZE
          
          The fix of Bug#12845774 was supposed to skip known-to-fail
          btr_cur_optimistic_insert() calls. There was only one such call, in
          btr_cur_pessimistic_update(). All other callers of
          btr_cur_pessimistic_insert() would release and reacquire the B-tree
          page latch before attempting the pessimistic insert. This would allow
          other threads to restructure the B-tree, allowing (and requiring) the
          insert to succeed as an optimistic (single-page) operation.
          
          Failure to attempt an optimistic insert before a pessimistic one would
          trigger an attempt to split an empty page.
          
          rb:1234 approved by Sunny Bains
        ------------------------------------------------------------
        revno: 2555.937.220
        committer: Mattias Jonsson <mattias.jonsson@oracle.com>
        branch nick: topush-5.1
        timestamp: Mon 2012-08-20 12:39:36 +0200
        message:
          Bug#13025132 - PARTITIONS USE TOO MUCH MEMORY
          
          pre-push fix, removed unused variable.
        ------------------------------------------------------------
        revno: 2555.937.219 [merge]
        committer: Mattias Jonsson <mattias.jonsson@oracle.com>
        branch nick: topush-5.1
        timestamp: Mon 2012-08-20 11:18:17 +0200
        message:
          merge
            ------------------------------------------------------------
            revno: 2555.968.2
            committer: Mattias Jonsson <mattias.jonsson@oracle.com>
            branch nick: b13025132-51
            timestamp: Fri 2012-08-17 14:25:32 +0200
            message:
              Bug#13025132 - PARTITIONS USE TOO MUCH MEMORY
              
              Additional patch to remove the part_id -> ref_buffer offset.
              
              The partitioning id and the associate record buffer can
              be found without having to calculate it.
              
              By initializing it for each used partition, and then reuse
              the key-buffer from the queue, it is not needed to have
              such map.
            ------------------------------------------------------------
            revno: 2555.968.1
            committer: Mattias Jonsson <mattias.jonsson@oracle.com>
            branch nick: b13025132-51
            timestamp: Wed 2012-08-15 14:31:26 +0200
            message:
              Bug#13025132 - PARTITIONS USE TOO MUCH MEMORY
              
              The buffer for the current read row from each partition
              (m_ordered_rec_buffer) used for sorted reads was
              allocated on open and freed when the ha_partition handler
              was closed or destroyed.
              
              For tables with many partitions and big records this could
              take up too much valuable memory.
              
              Solution is to only allocate the memory when it is needed
              and free it when nolonger needed. I.e. allocate it in
              index_init and free it in index_end (and to handle failures
              also free it on reset, close etc.)
              
              Also only allocating needed memory, according to
              partitioning pruning.
              
              Manually tested that it does not use as much memory and
              releases it after queries.
        ------------------------------------------------------------
        revno: 2555.937.218
        committer: Alexander Barkov <alexander.barkov@oracle.com>
        branch nick: mysql-5.1
        timestamp: Fri 2012-08-17 13:14:04 +0400
        message:
          Backporting Bug 14100466 from 5.6.
        ------------------------------------------------------------
        revno: 2555.937.217
        committer: Marko M?kel? <marko.makela@oracle.com>
        branch nick: mysql-5.1
        timestamp: Thu 2012-08-16 17:45:39 +0300
        message:
          Bug#12595091 POSSIBLY INVALID ASSERTION IN BTR_CUR_PESSIMISTIC_UPDATE()
          
          Facebook got a case where the page compresses really well so that
          btr_cur_optimistic_update() returns DB_UNDERFLOW, but when a record
          gets updated, the compression rate radically changes so that
          btr_cur_insert_if_possible() can not insert in place despite
          reorganizing/recompressing the page, leading to the assertion failing.
          
          rb:1220 approved by Sunny Bains
        ------------------------------------------------------------
        revno: 2555.937.216
        committer: Marko M?kel? <marko.makela@oracle.com>
        branch nick: mysql-5.1
        timestamp: Thu 2012-08-16 17:37:52 +0300
        message:
          Bug#12845774 OPTIMISTIC INSERT/UPDATE USES WRONG HEURISTICS FOR
          COMPRESSED PAGE SIZE
          
          This was submitted as MySQL Bug 61456 and a patch provided by
          Facebook. This patch follows the same idea, but instead of adding a
          parameter to btr_cur_pessimistic_insert(), we simply remove the
          btr_cur_optimistic_insert() call there and add it to the only caller
          that needs it.
          
          btr_cur_pessimistic_insert(): Do not try btr_cur_optimistic_insert().
          
          btr_insert_on_non_leaf_level_func(): Invoke btr_cur_optimistic_insert()
          before invoking btr_cur_pessimistic_insert().
          
          btr_cur_pessimistic_update(): Clarify in a comment why it is not
          necessary to invoke btr_cur_optimistic_insert().
          
          btr_root_raise_and_insert(): Assert that the root page is not empty.
          This could happen if a pessimistic insert (involving a split or merge)
          is performed without first attempting an optimistic (intra-page) insert.
          
          rb:1219 approved by Sunny Bains
        ------------------------------------------------------------
        revno: 2555.937.215
        committer: Marko M?kel? <marko.makela@oracle.com>
        branch nick: mysql-5.1
        timestamp: Thu 2012-08-16 17:31:23 +0300
        message:
          Bug#13523839 ASSERTION FAILURES ON COMPRESSED INNODB TABLES
          
          btr_cur_optimistic_insert(): Remove a bogus assertion. The insert may
          fail after reorganizing the page.
          
          btr_cur_optimistic_update(): Do not attempt to reorganize compressed pages,
          because compression may fail after reorganization.
          
          page_copy_rec_list_start(): Use page_rec_get_nth() to restore to the
          ret_pos, which may also be the page infimum.
          
          rb:1221
        ------------------------------------------------------------
        revno: 2555.937.214
        committer: Sujatha Sivakumar <sujatha.sivakumar@oracle.com>
        branch nick: Bug13596613_user_var
        timestamp: Tue 2012-08-14 14:11:01 +0530
        message:
          Bug#13596613:SHOW SLAVE STATUS GIVES WRONG OUTPUT WITH
          MASTER-MASTER AND USING SET USE
          
          Problem:
          =======
          In a master-master set-up, a master can show a wrong
          'SHOW SLAVE STATUS' output.
          
          Requirements:
          - master-master
          - log_slave_updates
          
          This is caused when using SET user-variables and then using
          it to perform writes. From then on the master that performed
          the insert will have a SHOW SLAVE STATUS that is wrong and  
          it will never get updated until a write happens on the other
          master. On"Master A" the "exec_master_log_pos" is not
          getting updated.
          
          Analysis:
          ========
          Slave receives a "User_var" event from the master and after
          applying the event, when "log_slave_updates" option is
          enabled the slave tries to write this applied event into
          its own binary log. At the time of writing this event the
          slave should use the "originating server-id". But in the
          above case the sever always logs the  "user var events"
          by using its global server-id. Due to this in a
          "master-master" replication when the event comes back to the
          originating server the "User_var_event" doesn't get skipped.
          "User_var_events" are context based events and they always
          follow with a query event which marks their end of group.
          Due to the above mentioned problem with "User_var_event"
          logging the "User_var_event" never gets skipped where as
          its corresponding "query_event" gets skipped. Hence the
          "User_var" event always waits for the next "query event"
          and the "Exec_master_log_position" does not get updated
          properly.
          
          Fix:
          ===
          `MYSQL_BIN_LOG::write' function is used to write events
          into binary log. Within this function a new object for
          "User_var_log_event" is created and this new object is used
          to write the "User_var" event in the binlog. "User var"
          event is inherited from "Log_event". This "Log_event" has
          different overloaded constructors. When a "THD" object
          is present "Log_event(thd,...)" constructor should be used
          to initialise the objects and in the absence of a valid
          "THD" object "Log_event()" minimal constructor should be
          used. In the above mentioned problem always default minimal
          constructor was used which is incorrect. This minimal
          constructor is replaced with "Log_event(thd,...)".
        ------------------------------------------------------------
        revno: 2555.937.213
        committer: Venkata Sidagam <venkata.sidagam@oracle.com>
        branch nick: 5.1
        timestamp: Sat 2012-08-11 15:43:04 +0530
        message:
          Bug #13115401: -SSL-KEY VALUE IS NOT VALIDATED AND IT ALLOWS INSECURE 
                         CONNECTIONS IF SPE
          
          Problem description: -ssl-key value is not validated, you can assign any bogus 
          text to --ssl-key and it is not verified that it exists, and more importantly, 
          it allows the client to connect to mysqld.
          
          Fix: Added proper validations checks for --ssl-key.
          
          Note:
          1) Documentation changes require for 5.1, 5.5, 5.6 and trunk in the sections
             listed below and the details are :
          
           http://dev.mysql.com/doc/refman/5.6/en/ssl-options.html#option_general_ssl
              and
           REQUIRE SSL section of
           http://dev.mysql.com/doc/refman/5.6/en/grant.html
          
          2) Client having with option '--ssl', should able to get ssl connection. This 
          will be implemented as part of separate fix in 5.6 and trunk.
        ------------------------------------------------------------
        revno: 2555.937.212
        committer: Sergey Glukhov <sergey.glukhov@oracle.com>
        branch nick: mysql-5.1
        timestamp: Thu 2012-08-09 15:34:52 +0400
        message:
          Bug #14409015 	MEMORY LEAK WHEN REFERENCING OUTER FIELD IN HAVING
          When resolving outer fields, Item_field::fix_outer_fields()
          creates new Item_refs for each execution of a prepared statement, so
          these must be allocated in the runtime memroot. The memroot switching
          before resolving JOIN::having causes these to be allocated in the
          statement root, leaking memory for each PS execution.
        ------------------------------------------------------------
        revno: 2555.937.211 [merge]
        committer: Marko M?kel? <marko.makela@oracle.com>
        branch nick: mysql-5.1
        timestamp: Thu 2012-08-09 10:48:25 +0300
        message:
          Merge from mysql-5.1 to working copy.
            ------------------------------------------------------------
            revno: 2555.967.1 [merge]
            committer: Sunanda Menon <sunanda.menon@oracle.com>
            branch nick: mysql-5.1
            timestamp: Thu 2012-08-09 08:50:43 +0200
            message:
              Merge from mysql-5.1.65-release
                ------------------------------------------------------------
                revno: 2555.966.1 [merge]
                tags: mysql-5.1.65
                committer: Bjorn Munch <bjorn.munch@oracle.com>
                branch nick: mysql-5.1.65-release
                timestamp: Thu 2012-07-12 10:00:14 +0200
                message:
                  Merge unpushed changes from 5.1.64-release
                    ------------------------------------------------------------
                    revno: 2555.965.4
                    committer: Kent Boortz <kent.boortz@oracle.com>
                    branch nick: mysql-5.1.64-release
                    timestamp: Tue 2012-06-26 16:30:15 +0200
                    message:
                      Solve a linkage problem with "libmysqld" on several Solaris platforms:
                      a multiple definition of 'THD::clear_error()' in (at least)
                      libmysqld.a(lib_sql.o) and libmysqld.a(libfederated_a-ha_federated.o).
                      
                      Patch provided by Ramil Kalimullin.
                    ------------------------------------------------------------
                    revno: 2555.965.3
                    committer: Joerg Bruehe <joerg.bruehe@oracle.com>
                    branch nick: mysql-5.1.64-release
                    timestamp: Thu 2012-06-21 16:26:50 +0200
                    message:
                      Fixing wrong comment syntax (discovered by Kent)
                    ------------------------------------------------------------
                    revno: 2555.965.2
                    committer: Kent Boortz <kent.boortz@oracle.com>
                    branch nick: mysql-5.1.64-release
                    timestamp: Wed 2012-06-20 13:10:13 +0200
                    message:
                      Version for this release build is 5.1.64
                    ------------------------------------------------------------
                    revno: 2555.965.1 [merge]
                    committer: Kent Boortz <kent.boortz@oracle.com>
                    branch nick: mysql-5.1.64-release
                    timestamp: Wed 2012-06-20 13:06:32 +0200
                    message:
                      Merge
        ------------------------------------------------------------
        revno: 2555.937.210
        committer: Marko M?kel? <marko.makela@oracle.com>
        branch nick: mysql-5.1
        timestamp: Thu 2012-08-09 09:55:29 +0300
        message:
          Bug#14399148 INNODB TABLES UNDER LOAD PRODUCE DUPLICATE COPIES OF ROWS
          IN QUERIES
          
          This bug was caused by an incorrect fix of
          Bug#13807811 BTR_PCUR_RESTORE_POSITION() CAN SKIP A RECORD
          
          There was nothing wrong with btr_pcur_restore_position(), but with the
          use of it in the table scan during index creation.
          
          rb:1206 approved by Jimmy Yang
        ------------------------------------------------------------
        revno: 2555.937.209
        committer: Rohit Kalhans <rohit.kalhans@oracle.com>
        branch nick: mysql-5.1-11757312
        timestamp: Wed 2012-08-08 22:15:46 +0530
        message:
          BUG#11757312: MYSQLBINLOG DOES NOT ACCEPT INPUT FROM STDIN
          WHEN STDIN IS A PIPE
                      
          Problem: Mysqlbinlog does not accept the input from STDIN when 
          STDIN is a pipe. This prevents the users from passing the input file
          through a shell pipe.    
          
          Background: The my_seek() function does not check if the file descriptor
          passed to it is regular (seekable) file. The check_header() function in
          mysqlbinlog calls the my_b_seek() unconditionally and it fails when
          the underlying file is a PIPE.  
                      
          Resolution: We resolve this problem by checking if the underlying file
          is a regular file by using my_fstat() before calling my_b_seek(). 
          If the underlying file is not seekable we skip the call to my_b_seek()
          in check_header().
        ------------------------------------------------------------
        revno: 2555.937.208
        committer: Nirbhay Choubey <nirbhay.choubey@oracle.com>
        branch nick: 5.1
        timestamp: Tue 2012-08-07 18:58:19 +0530
        message:
          Bug#13928675 MYSQL CLIENT COPYRIGHT NOTICE MUST
                       SHOW 2012 INSTEAD OF 2011
          
          * Added a new macro to hold the current year :
            COPYRIGHT_NOTICE_CURRENT_YEAR
          * Modified ORACLE_WELCOME_COPYRIGHT_NOTICE macro
            to take the initial year as parameter and pick
            current year from the above mentioned macro.
        ------------------------------------------------------------
        revno: 2555.937.207
        committer: Harin Vadodaria<harin.vadodaria@oracle.com>
        branch nick: 51-bug14068244
        timestamp: Tue 2012-08-07 16:23:53 +0530
        message:
          Bug#14068244: INCOMPATIBILITY BETWEEN LIBMYSQLCLIENT/LIBMYSQLCLIENT_R
                        AND LIBCRYPTO
          
          Problem: libmysqlclient_r exports symbols from yaSSL library which
                   conflict with openSSL symbols. This issue is related to symbols
                   used by CURL library and are defined in taocrypt. Taocrypt has
                   dummy implementation of these functions. Due to this when a
                   program which uses libcurl library functions is compiled using
                   libmysqlclient_r and libcurl, it hits segmentation fault in
                   execution phase.
          
          Solution: MySQL should not be exporting such symbols. However, these
                    functions are not used by MySQL code at all. So avoid compiling
                    them in the first place.
        ------------------------------------------------------------
        revno: 2555.937.206
        committer: Chaithra Gopalareddy <chaithra.gopalareddy@oracle.com>
        branch nick: mysql-5.1
        timestamp: Sun 2012-08-05 16:29:28 +0530
        message:
          Bug #14099846: EXPORT_SET CRASHES DUE TO OVERALLOCATION OF MEMORY
          
          Backport the fix from 5.6 to 5.1
          Base bug number : 11765562
        ------------------------------------------------------------
        revno: 2555.937.205
        committer: Joerg Bruehe <joerg.bruehe@oracle.com>
        branch nick: mysql-5.1
        timestamp: Tue 2012-07-31 20:41:46 +0200
        message:
          INSTALL-BINARY placeholder: change invalid URLs (request from Kristofer)
        ------------------------------------------------------------
        revno: 2555.937.204
        committer: Tor Didriksen <tor.didriksen@oracle.com>
        branch nick: 5.1
        timestamp: Fri 2012-07-27 09:13:10 +0200
        message:
          Bug#14111180 HANDLE_FATAL_SIGNAL IN PTR_COMPARE_1 / QUEUE_INSERT
          
          Space available for merging was calculated incorrectly.
        ------------------------------------------------------------
        revno: 2555.937.203
        committer: Venkata Sidagam <venkata.sidagam@oracle.com>
        branch nick: mysql-5.1-59107
        timestamp: Fri 2012-07-27 12:05:37 +0530
        message:
          Bug #12876932 - INCORRECT SELECT RESULT ON FEDERATED TABLE
          
          Fixed the missing of federated/include folder at the time 
          of preparing package distribution, issue happens only in 5.1
        ------------------------------------------------------------
        revno: 2555.937.202
        committer: Praveenkumar Hulakund <praveenkumar.hulakund@oracle.com>
        branch nick: mysql_5_1
        timestamp: Thu 2012-07-26 23:44:43 +0530
        message:
          BUG#13868860 - LIMIT '5' IS EXECUTED WITHOUT ERROR WHEN '5' 
                         IS PLACE HOLDER AND USE SERVER-SIDE 
          
          Analysis:
          LIMIT always takes nonnegative integer constant values. 
          
          http://dev.mysql.com/doc/refman/5.6/en/select.html
          
          So parsing of value '5' for LIMIT in SELECT fails.
          
          But, within prepared statement, LIMIT parameters can be
          specified using '?' markers. Value for the parameter can
          be supplied while executing the prepared statement.
          
          Passing string values, float or double value for LIMIT
          works well from CLI. Because, while setting the value
          for the parameters from the variable list (added using
          SET), if the value is for parameter LIMIT then its 
          converted to integer value. 
          
          But, when prepared statement is executed from the other
          interfaces as J connectors, or C applications etc.
          The value for the parameters are sent to the server
          with execute command. Each item in log has value and
          the data TYPE. So, While setting parameter value
          from this log, value is set to all the parameters
          with the same data type as passed.
          But here logic to convert value to integer type
          if its for LIMIT parameter is missing.
          Because of this,string '5' is set to LIMIT.
          And the same is logged into the binlog file too. 
          
          Fix:
          When executing prepared statement having parameter for
          CLI it worked fine, as the value set for the parameter
          is converted to integer. And this failed in other 
          interfaces as J connector,C Applications etc as this 
          conversion is missing.
          
          So, as a fix added check while setting value for the
          parameters. If the parameter is for LIMIT value then
          its converted to integer value.
        ------------------------------------------------------------
        revno: 2555.937.201
        committer: Venkata Sidagam <venkata.sidagam@oracle.com>
        branch nick: mysql-5.1-59107
        timestamp: Thu 2012-07-26 23:23:04 +0530
        message:
          Bug #12876932 - INCORRECT SELECT RESULT ON FEDERATED TABLE
          
          Fix for pb2 test failure.
        ------------------------------------------------------------
        revno: 2555.937.200
        committer: Nirbhay Choubey <nirbhay.choubey@oracle.com>
        branch nick: B13741677-5.1
        timestamp: Thu 2012-07-26 21:47:03 +0530
        message:
          Bug#13741677 MYSQL_SECURE_INSTALLATION DOES NOT
                       WORK + SAVES ROOT PASSWORD TO DISK!
          
          The secure installation scripts connect to the
          server by storing the password in a temporary
          option file. Now, if the script gets killed or
          fails for some reason, the removal of the option
          file may not take place.
          
          This patch introduces following enhancements :
          * (.sh) Made sure that cleanup happens at every
            call to 'exit 1'. This is performed implicitly
            by END{} in pl.in.
          * (.pl.in) Added a warning in case unlink fails
            to delete the option/query files.
          * (.sh/.pl.in) Added more signals to the signal
            handler list. SIG# 1, 3, 6, 15
        ------------------------------------------------------------
        revno: 2555.937.199
        committer: Tor Didriksen <tor.didriksen@oracle.com>
        branch nick: 5.1
        timestamp: Thu 2012-07-26 15:05:24 +0200
        message:
          Backport of Bug#14171740 65562: STRING::SHRINK SHOULD BE A NO-OP WHEN ALLOCED=0
        ------------------------------------------------------------
        revno: 2555.937.198
        committer: Venkata Sidagam <venkata.sidagam@oracle.com>
        branch nick: mysql-5.1-59107
        timestamp: Thu 2012-07-26 15:09:22 +0530
        message:
          Bug #12876932 - INCORRECT SELECT RESULT ON FEDERATED TABLE
          
          Problem description:
          Table 't' created with two colums having compound index on both the 
          columns under innodb/myisam engine at remote machine. In the local 
          machine same table is created undet the federated engine.
          A select having where clause with along 'AND' operation gives wrong 
          results on local machine.
          
          Analysis: 
          The given query at federated engine is wrongly transformed by 
          federated::create_where_from_key() function and the same was sent to 
          the remote machine. Hence the local machine is showing wrong results.
          
          Given query "select c1 from t where c1 <= 2 and c2 = 1;"
          Query transformed, after ha_federated::create_where_from_key() function is:
          SELECT `c1`, `c2` FROM `t` WHERE  (`c1` IS NOT NULL ) AND 
          ( (`c1` >= 2)  AND  (`c2` <= 1) ) and the same sent to real_query().
          In the above the '<=' and '=' conditions were transformed to '>=' and 
          '<=' respectively.
          
          ha_federated::create_where_from_key() function behaving as below:
          The key_range is having both the start_key and end_key. The start_key 
          is used to get "(`c1` IS NOT NULL )" part of the where clause, this 
          transformation is correct. The end_key is used to get "( (`c1` >= 2) 
          AND  (`c2` <= 1) )", which is wrong, here the given conditions('<=' and '=') 
          are changed as wrong conditions('>=' and '<=').
          The end_key is having {key = 0x39fa6d0 "", length = 10, keypart_map = 3, 
          flag = HA_READ_AFTER_KEY}
          
          The store_length is having value '5'. Based on store_length and length 
          values the condition values is applied in HA_READ_AFTER_KEY switch case.
          The switch case 'HA_READ_AFTER_KEY' is applicable to only the last part of 
          the end_key and for previous parts it is going to 'HA_READ_KEY_OR_NEXT' case, 
          here the '>=' is getting added as a condition instead of '<='.
          
          Fix:
          Updated the 'if' condition in 'HA_READ_AFTER_KEY' case to affect for all 
          parts of the end_key. i.e 'i > 0' will used for end_key, Hence added it in 
          the if condition.
        ------------------------------------------------------------
        revno: 2555.937.197
        committer: Annamalai Gurusami <annamalai.gurusami@oracle.com>
        branch nick: mysql-5.1
        timestamp: Wed 2012-07-25 13:51:39 +0530
        message:
          Bug #13113026 INFORMATION_SCHEMA.INNODB_BUFFER_PAGE_LRUFROM 5.6 BACKPORT
          
          Backporting the WL#5716, "Information schema table for InnoDB 
          buffer pool information". Backporting revisions 2876.244.113, 
          2876.244.102 from mysql-trunk.
          
          rb://1175 approved by Jimmy Yang. 
        ------------------------------------------------------------
        revno: 2555.937.196
        committer: Alexander Barkov <alexander.barkov@oracle.com>
        branch nick: mysql-5.1
        timestamp: Tue 2012-07-24 09:27:00 +0400
        message:
          Fixing wrong copyright. Index.xml was modified in 2005,
          while the copyright notice still mentioned 2003.
        ------------------------------------------------------------
        revno: 2555.937.195
        committer: Bjorn Munch <bjorn.munch@oracle.com>
        branch nick: imct-51
        timestamp: Thu 2012-07-19 15:55:41 +0200
        message:
          Reverting broken configure/make stuff
        ------------------------------------------------------------
        revno: 2555.937.194
        committer: Bjorn Munch <bjorn.munch@oracle.com>
        branch nick: imct-51
        timestamp: Thu 2012-07-19 12:57:36 +0200
        message:
          Bug #14035452 - MODULARIZE MYSQL_CLIENT_TEST
            Added new minimal client using same framework
            Added internal test using it
            Small changes to top level make/configure/cmake to have it built
        ------------------------------------------------------------
        revno: 2555.937.193
        committer: Venkata Sidagam <venkata.sidagam@oracle.com>
        branch nick: mysql-5.1-13955256
        timestamp: Thu 2012-07-19 13:52:34 +0530
        message:
          Bug #12615411 - server side help doesn't work as first statement
          
          Problem description:
          Giving "help 'contents'" in the mysql client as a first statement
          gives error
          
          Analysis:
          In com_server_help() function the "server_cmd" variable was
          initialised with buffer->ptr(). And the "server_cmd" variable is not
          updated since we are passing "'contents'"(with single quote) so the
          buffer->ptr() consists of the previous buffer values and it was sent
          to the mysql_real_query() hence we are getting error.
          
          Fix:
          We are not initialising the "server_cmd" variable and we are updating
          the variable with "server_cmd= cmd_buf" in any of the case i.e with
          single quote or without single quote for the contents.
          As part of error message improvement, added new error message in case
          of "help 'contents'".
        ------------------------------------------------------------
        revno: 2555.937.192
        committer: Chaithra Gopalareddy <chaithra.gopalareddy@oracle.com>
        branch nick: mysql-5.1
        timestamp: Wed 2012-07-18 14:36:08 +0530
        message:
          Bug#11762052: 54599: BUG IN QUERY PLANNER ON QUERIES WITH
                               "ORDER BY" AND "LIMIT BY" CLAUSE
          
          PROBLEM:
          When a 'limit' clause is specified in a query along with
          group by and order by, optimizer chooses wrong index
          there by examining more number of rows than required.
          However without the 'limit' clause, optimizer chooses
          the right index.
          
          ANALYSIS:
          With respect to the query specified, range optimizer chooses
          the first index as there is a range present ( on 'a'). Optimizer
          then checks for an index which would give records in sorted
          order for the 'group by' clause.
          
          While checking chooses the second index (on 'c,b,a') based on
          the 'limit' specified and the selectivity of
          'quick_condition_rows' (number of rows present in the range)
          in 'test_if_skip_sort_order' function. 
          But, it fails to consider that an order by clause on a
          different column will result in scanning the entire index and 
          hence the estimated number of rows calculated above are 
          wrong (which results in choosing the second index).
          
          FIX:
          Do not enforce the 'limit' clause in the call to
          'test_if_skip_sort_order' if we are creating a temporary
          table. Creation of temporary table indicates that there would be
          more post-processing and hence will need all the rows.
          
          This fix is backported from 5.6. This problem is fixed in 5.6 as   
          part of changes for work log #5558
        ------------------------------------------------------------
        revno: 2555.937.191
        committer: Annamalai Gurusami <annamalai.gurusami@oracle.com>
        branch nick: mysql-5.1
        timestamp: Thu 2012-07-12 16:42:07 +0530
        message:
          Bug #11765218 58157: INNODB LOCKS AN UNMATCHED ROW EVEN THOUGH USING
          RBR AND RC
          
          Description: When scanning and locking rows with < or <=, InnoDB locks
          the next row even though row based binary logging and read committed
          is used.
          
          Solution: In the handler, when the row is identified to fall outside
          of the range (as specified in the query predicates), then request the
          storage engine to unlock the row (if possible). This is done in
          handler::read_range_first() and handler::read_range_next().
        ------------------------------------------------------------
        revno: 2555.937.190
        author: bjorn.munch@oracle.com
        committer: Bjorn Munch <bjorn.munch@oracle.com>
        branch nick: mysql-5.1
        timestamp: Wed 2012-07-11 15:18:34 +0200
        message:
          Raise version number after cloning 5.1.65
    ------------------------------------------------------------
    revno: 2585.188.14 [merge]
    committer: Martin Skold <Martin.Skold@oracle.com>
    branch nick: mysql-5.1-telco-6.3
    timestamp: Mon 2012-10-22 10:54:46 +0200
    message:
      Merged in 5.1.65
        ------------------------------------------------------------
        revno: 2555.937.189
        tags: clone-5.1.65-build
        committer: Sujatha Sivakumar <sujatha.sivakumar@oracle.com>
        branch nick: Bug11762670_5.1
        timestamp: Tue 2012-07-10 18:55:07 +0530
        message:
          follow up patch for test script failure for BUG#11762670
        ------------------------------------------------------------
        revno: 2555.937.188 [merge]
        committer: Andrei Elkin <andrei.elkin@oracle.com>
        branch nick: mysql-5.1
        timestamp: Tue 2012-07-10 13:51:50 +0300
        message:
          merge from  5.1 repo.
            ------------------------------------------------------------
            revno: 2555.964.6
            committer: Bjorn Munch <bjorn.munch@oracle.com>
            branch nick: break-51
            timestamp: Tue 2012-07-10 11:57:24 +0200
            message:
              mysql_client_fw.c was not included in make dist
        ------------------------------------------------------------
        revno: 2555.937.187 [merge]
        committer: Andrei Elkin <andrei.elkin@oracle.com>
        branch nick: mysql-5.1
        timestamp: Tue 2012-07-10 13:00:03 +0300
        message:
          merge from  5.1 repo.
            ------------------------------------------------------------
            revno: 2555.964.5
            committer: Sujatha Sivakumar <sujatha.sivakumar@oracle.com>
            branch nick: Bug11762670_5.1
            timestamp: Tue 2012-07-10 14:23:17 +0530
            message:
              BUG#11762670:MY_B_WRITE RETURN VALUE IGNORED
              
              Problem:
              =======
              The return value from my_b_write is ignored by: `my_b_write_quoted',
              `my_b_write_bit',`Query_log_event::print_query_header'
              
              Most callers of `my_b_printf' ignore the return value. `log_event.cc' 
              has many calls to it. 
              
              Analysis:
              ========
              `my_b_write' is used to write data into a file. If the write fails it
              sets appropriate error number and error message through my_error()
              function call and sets the IO_CACHE::error == -1.
              `my_b_printf' function is also used to write data into a file, it
              internally invokes my_b_write to do the write operation. Upon
              success it returns number of characters written to file and on error
              it returns -1 and sets the error through my_error() and also sets
              IO_CACHE::error == -1.  Most of the event specific print functions
              for example `Create_file_log_event::print', `Execute_load_log_event::print'
              etc are the ones which make several calls to the above two functions and
              they do not check for the return value after the 'print' call. All the above 
              mentioned abuse cases deal with the client side.
              
              Fix:
              ===
              As part of bug fix a check for IO_CACHE::error == -1 has been added at 
              a very high level after the call to the 'print' function.  There are 
              few more places where the return value of "my_b_write" is ignored
              those are mentioned below.
              
              +++ mysys/mf_iocache2.c    2012-06-04 07:03:15 +0000
              @@ -430,7 +430,8 @@
                         memset(buffz, '0', minimum_width - length2);
                       else
                         memset(buffz, ' ', minimum_width - length2);
              -        my_b_write(info, buffz, minimum_width - length2);
              
              +++ sql/log.cc	2012-06-08 09:04:46 +0000
              @@ -2388,7 +2388,12 @@
                   {
                     end= strxmov(buff, "# administrator command: ", NullS);
                     buff_len= (ulong) (end - buff);
              -      my_b_write(&log_file, (uchar*) buff, buff_len);
              
              At these places appropriate return value handlers have been added.
        ------------------------------------------------------------
        revno: 2555.937.186 [merge]
        committer: Andrei Elkin <andrei.elkin@oracle.com>
        branch nick: mysql-5.1
        timestamp: Tue 2012-07-10 12:48:23 +0300
        message:
          merge from  5.1 repo.
            ------------------------------------------------------------
            revno: 2555.964.4
            committer: Bjorn Munch <bjorn.munch@oracle.com>
            branch nick: break-51
            timestamp: Tue 2012-07-10 10:04:57 +0200
            message:
              mysql_client_test did not build within limbysqld/examples
            ------------------------------------------------------------
            revno: 2555.964.3
            committer: Bjorn Munch <bjorn.munch@oracle.com>
            branch nick: grr-51
            timestamp: Mon 2012-07-09 16:36:50 +0200
            message:
              Fixed compile error in mysql_client_test using gcc
            ------------------------------------------------------------
            revno: 2555.964.2
            committer: Bjorn Munch <bjorn.munch@oracle.com>
            branch nick: rfmct-51
            timestamp: Mon 2012-07-09 15:10:07 +0200
            message:
              Refactor mysql_client_test.c into a framework part and a test part
            ------------------------------------------------------------
            revno: 2555.964.1
            committer: Georgi Kodinov <Georgi.Kodinov@Oracle.com>
            branch nick: B13889741-5.1
            timestamp: Thu 2012-07-05 13:41:16 +0300
            message:
              Bug #13889741: HANDLE_FATAL_SIGNAL IN _DB_ENTER_ |
              HANDLE_FATAL_SIGNAL IN STRNLEN
              
              Fixed the following bounds checking problems :
              1. in check_if_legal_filename() make sure the null terminated
              string is long enough before accessing the bytes in it.
              Prevents pottential read-past-buffer-end
              2. in my_wc_mb_filename() of the filename charset check
              for the end of the destination buffer before sending single
              byte characters into it.
              Prevents write-past-end-of-buffer (and garbaling stack in
              the cases reported here) errors.
              
              Added test cases.
        ------------------------------------------------------------
        revno: 2555.937.185
        committer: Andrei Elkin <andrei.elkin@oracle.com>
        branch nick: mysql-5.1
        timestamp: Thu 2012-07-05 14:37:48 +0300
        message:
          Bug#14275000
          
          Fixes for BUG11761686 left a flaw that managed to slip away from testing.
          Only effective filtering branch was actually tested with a regression test
          added to rpl_filter_tables_not_exist.
          The reason of the failure is destuction of too early mem-root-allocated memory 
          at the end of the deferred User-var's do_apply_event().
          
          Fixed with bypassing free_root() in the deferred execution branch.
          Deallocation of created in do_apply_event() items is done by the base code
          through THD::cleanup_after_query() -> free_items() that the parent Query
          can't miss.
        ------------------------------------------------------------
        revno: 2555.937.184
        committer: Rohit Kalhans <rohit.kalhans@oracle.com>
        branch nick: mysql-5.1_b11762667
        timestamp: Tue 2012-07-03 18:00:21 +0530
        message:
          BUG#11762667:MYSQLBINLOG IGNORES ERRORS WHILE WRITING OUTPUT
          
          This is a followup patch for the bug enabling the test
          i_binlog.binlog_mysqlbinlog_file_write.test
          this was disabled in mysql trunk and mysql 5.5 as in the release
          build mysqlbinlog was not debug compiled whereas the mysqld was.
          Since have_debug.inc script checks only for mysqld to be debug
          compiled, the test was not being skipped on release builds.
          
          We resolve this problem by creating a new inc file 
          mysqlbinlog_have_debug.inc which checks exclusively for mysqlbinlog
          to be debug compiled. if not it skips the test.
           
        ------------------------------------------------------------
        revno: 2555.937.183
        committer: Gleb Shchepa <gleb.shchepa@oracle.com>
        branch nick: 5.1
        timestamp: Fri 2012-06-29 18:24:43 +0400
        message:
          minor update to make MSVS happy
        ------------------------------------------------------------
        revno: 2555.937.182
        committer: Georgi Kodinov <Georgi.Kodinov@Oracle.com>
        branch nick: B13708485-5.1
        timestamp: Thu 2012-06-28 18:38:55 +0300
        message:
          Bug #13708485:  malformed resultset packet crashes client
          
          Several fixes :
          
          * sql-common/client.c
          Added a validity check of the fields metadata packet sent 
          by the server.
          Now libmysql will check if the length of the data sent by
          the server matches what's expected by the protocol before
          using the data.
          
          * client/mysqltest.cc
          Fixed the error handling code in mysqltest to avoid sending
          new commands when the reading the result set failed (and 
          there are unread data in the pipe).
          
          * sql_common.h + libmysql/libmysql.c + sql-common/client.c
          unpack_fields() now generates a proper error when it fails.
          Added a new argument to this function to support the error 
          generation.
          
          * sql/protocol.cc
          Added a debug trigger to cause the server to send a NULL
          insted of the packet expected by the client for testing 
          purposes.
        ------------------------------------------------------------
        revno: 2555.937.181
        committer: Jon Olav Hauglid <jon.hauglid@oracle.com>
        branch nick: mysql-5.1-test
        timestamp: Fri 2012-06-29 13:25:57 +0200
        message:
          Bug#14238406 NEW COMPILATION WARNINGS WITH GCC 4.7 (-WERROR=NARROWING)
          
          This patch fixes various compilation warnings of the type
          "error: narrowing conversion of 'x' from 'datatype1' to
          'datatype2'
        ------------------------------------------------------------
        revno: 2555.937.180
        committer: Gleb Shchepa <gleb.shchepa@oracle.com>
        branch nick: 5.1
        timestamp: Fri 2012-06-29 12:55:45 +0400
        message:
          Backport of the deprecation warning from WL#6219: "Deprecate and remove YEAR(2) type"
          
          Print the warning(note):
          
           YEAR(x) is deprecated and will be removed in a future release. Please use YEAR(4) instead
          
          on "CREATE TABLE ... YEAR(x)" or "ALTER TABLE MODIFY ... YEAR(x)", where x != 4
        ------------------------------------------------------------
        revno: 2555.937.179 [merge]
        committer: Norvald H. Ryeng <norvald.ryeng@oracle.com>
        branch nick: mysql-5.1-merge
        timestamp: Thu 2012-06-28 14:34:49 +0200
        message:
          Merge.
            ------------------------------------------------------------
            revno: 2555.963.1
            committer: Norvald H. Ryeng <norvald.ryeng@oracle.com>
            branch nick: mysql-5.1-13003736
            timestamp: Mon 2012-06-18 09:20:12 +0200
            message:
              Bug#13003736 CRASH IN ITEM_REF::WALK WITH SUBQUERIES
              
              Problem: Some queries with subqueries and a HAVING clause that
              consists only of a column not in the select or grouping lists causes
              the server to crash.
              
              During parsing, an Item_ref is constructed for the HAVING column. The
              name of the column is resolved when JOIN::prepare calls fix_fields()
              on its having clause. Since the column is not mentioned in the select
              or grouping lists, a ref pointer is not found and a new Item_field is
              created instead. The Item_ref is replaced by the Item_field in the
              tree of HAVING clauses. Since the tree consists only of this item, the
              pointer that is updated is JOIN::having. However,
              st_select_lex::having still points to the Item_ref as the root of the
              tree of HAVING clauses.
              
              The bug is triggered when doing filesort for create_sort_index(). When
              find_all_keys() calls select->cond->walk() it eventually reaches
              Item_subselect::walk() where it continues to walk the having clauses
              from lex->having. This means that it finds the Item_ref instead of the
              new Item_field, and Item_ref::walk() tries to dereference the ref
              pointer, which is still null.
              
              The crash is reproducible only in 5.5, but the problem lies latent in
              5.1 and trunk as well.
              
              Fix: After calling fix_fields on the having clause in JOIN::prepare(),
              set select_lex::having to point to the same item as JOIN::having.
              
              This patch also fixes a bug in 5.1 and 5.5 that is triggered if the
              query is executed as a prepared statement. The Item_field is created
              in the runtime arena when the query is prepared, and the pointer to
              the item is saved by st_select_lex::fix_prepare_information() and
              brought back as a dangling pointer when the query is executed, after
              the runtime arena has been reclaimed.
              
              Fix: Backport fix from trunk that switches to the permanent arena
              before calling Item_ref::fix_fields() in JOIN::prepare().
        ------------------------------------------------------------
        revno: 2555.937.178
        committer: Harin Vadodaria<harin.vadodaria@oracle.com>
        branch nick: 51_bug11753779
        timestamp: Tue 2012-06-19 12:56:40 +0530
        message:
          Bug#11753779: MAX_CONNECT_ERRORS WORKS ONLY WHEN 1ST
                        INC_HOST_ERRORS() IS CALLED.
          
          Description: Reverting patch 3755 for bug#11753779
        ------------------------------------------------------------
        revno: 2555.937.177
        author: kent.boortz@oracle.com
        committer: Kent Boortz <kent.boortz@oracle.com>
        branch nick: mysql-5.1
        timestamp: Fri 2012-06-15 13:31:27 +0200
        message:
          Raise version number after cloning 5.1.64
    ------------------------------------------------------------
    revno: 2585.188.13 [merge]
    committer: Martin Skold <Martin.Skold@oracle.com>
    branch nick: mysql-5.1-telco-6.3
    timestamp: Mon 2012-10-22 10:20:43 +0200
    message:
      Merged in 5.1.64
        ------------------------------------------------------------
        revno: 2555.937.176
        tags: clone-5.1.64-build
        committer: sayantan.dutta@oracle.com
        branch nick: mysql-5.1
        timestamp: Thu 2012-06-14 17:07:49 +0530
        message:
          BUG #13946716: FEDERATED_PLUGIN TEST CASE FAIL ON 64BIT ARCHITECTURES
        ------------------------------------------------------------
        revno: 2555.937.175
        committer: Harin Vadodaria<harin.vadodaria@oracle.com>
        branch nick: 51_bug11753779
        timestamp: Wed 2012-06-13 16:03:58 +0530
        message:
          Bug#11753779: MAX_CONNECT_ERRORS WORKS ONLY WHEN 1ST
                        INC_HOST_ERRORS() IS CALLED.
          
          Issue       : Sequence of calling inc_host_errors()
                        and reset_host_errors() required some
                        changes in order to maintain correct
                        connection error count.
          
          Solution    : Call to reset_host_errors() is shifted
                        to a location after which no calls to
                        inc_host_errors() are made.
        ------------------------------------------------------------
        revno: 2555.937.174 [merge]
        committer: Manish Kumar<manish.4.kumar@oracle.com>
        branch nick: mysql-5.1
        timestamp: Tue 2012-06-12 12:59:13 +0530
        message:
          BUG#12400221 - 60926: BINARY LOG EVENTS LARGER THAN MAX_ALLOWED_PACKET
          
          Problem
          ========
                      
          Replication breaks in the cases if the event length exceeds 
          the size of master Dump thread's max_allowed_packet.
                        
          The reason why this failure is occuring is because the event length is
          more than the total size of the max_allowed_packet, on addition of the  
          max_event_header length exceeds the max_allowed_packet of the DUMP thread.
          This causes the Dump thread to break replication and throw an error.
                                
          That can happen e.g with row-based replication in Update_rows event.
                      
          Fix
          ====
                    
          The problem is fixed in 2 steps:
          
          1.) The Dump thread limit to read event is increased to the upper limit
              i.e. Dump thread reads whatever gets logged in the binary log.
          
          2.) On the slave side we increase the the max_allowed_packet for the
              slave's threads (IO/SQL) by increasing it to 1GB.
          
              This is done using the new server option (slave_max_allowed_packet)
              included, is used to regulate the max_allowed_packet of the  
              slave thread (IO/SQL) by the DBA, and facilitates the sending of
              large packets from the master to the slave.
          
              This causes the large packets to be received by the slave and apply
              it successfully.
            ------------------------------------------------------------
            revno: 2555.962.1
            committer: Manish Kumar<manish.4.kumar@oracle.com>
            branch nick: mysql-5.1
            timestamp: Mon 2012-05-21 12:57:39 +0530
            message:
              BUG#12400221 - 60926: BINARY LOG EVENTS LARGER THAN MAX_ALLOWED_PACKET
              
              Problem
              ========
                          
              SQL statements close to the size of max_allowed_packet produce binary
              log events larger than max_allowed_packet.
                            
              The reason why this failure is occuring is because the event length is
              more than the total size of the max_allowed_packet + max_event_header
              length. Now since the event length exceeds this size master Dump
              thread is unable to send the packet on to the slave.
                                    
              That can happen e.g with row-based replication in Update_rows event.
                          
              Fix
              ====
                        
              The problem was fixed by increasing the max_allowed_packet for the
              slave's threads (IO/SQL) by increasing it to 1GB.
              This is done using the new server option included which is used to
              regulate the max_allowed_packet of the slave thread (IO/SQL).
              This causes the large packets to be received by the slave and apply
              it successfully.
        ------------------------------------------------------------
        revno: 2555.937.173
        committer: Tor Didriksen <tor.didriksen@oracle.com>
        branch nick: 5.1
        timestamp: Tue 2012-06-05 15:53:39 +0200
        message:
          Bug#14051002 VALGRIND: CONDITIONAL JUMP OR MOVE IN RR_CMP / MY_QSORT
          
          Patch for 5.1 and 5.5: fix typo in byte comparison in rr_cmp()
        ------------------------------------------------------------
        revno: 2555.937.172
        committer: Annamalai Gurusami <annamalai.gurusami@oracle.com>
        branch nick: mysql-5.1
        timestamp: Fri 2012-06-01 14:12:57 +0530
        message:
          Bug #13933132: [ERROR] GOT ERROR -1 WHEN READING TABLE APPEARED
          WHEN KILLING
          
          Suppose there is a query waiting for a lock.  If the user kills
          this query, then "Got error -1 when reading table" error message
          must not be logged in the server log file.  Since this is a user
          requested interruption, no spurious error message must be logged
          in the server log.  This patch will remove the error message from
          the log.
          
          approved by joh and tatjana
        ------------------------------------------------------------
        revno: 2555.937.171
        committer: Rohit Kalhans <rohit.kalhans@oracle.com>
        branch nick: mysql-5.1
        timestamp: Thu 2012-05-31 22:28:18 +0530
        message:
          Fixing the accidental incusion of i_binlog.binlog_suppress_info test. 
          Fix for i_binlog.binlog_mysqlbinlog_file_write failure on pb2 
        ------------------------------------------------------------
        revno: 2555.937.170
        committer: Rohit Kalhans <rohit.kalhans@oracle.com>
        branch nick: mysql-5.1
        timestamp: Thu 2012-05-31 14:32:29 +0530
        message:
          Fixed the problem in bzr file-id between 5.1 and 5.5 in i_binlog folder.
        ------------------------------------------------------------
        revno: 2555.937.169
        committer: Rohit Kalhans <rohit.kalhans@oracle.com>
        branch nick: mysql-5.1_b11762667
        timestamp: Wed 2012-05-30 14:00:29 +0530
        message:
          Fixing i_binlog.binlog_mysqlbinlog_file_write failure. 
        ------------------------------------------------------------
        revno: 2555.937.168
        committer: Rohit Kalhans <rohit.kalhans@oracle.com>
        branch nick: mysql-5.1_b11762667
        timestamp: Wed 2012-05-30 13:54:15 +0530
        message:
          Fixing the build failure on Windows debug build.
        ------------------------------------------------------------
        revno: 2555.937.167
        committer: Rohit Kalhans <rohit.kalhans@oracle.com>
        branch nick: mysql-5.1_b11762667
        timestamp: Tue 2012-05-29 12:11:30 +0530
        message:
          Bug#11762667: MYSQLBINLOG IGNORES ERRORS WHILE WRITING OUTPUT
          
          Problem: mysqlbinlog exits without any error code in case of
          file write error. It is because of the fact that the calls
          to Log_event::print() method does not return a value and the
          thus any error were being ignored.
          
          Resolution: We resolve this problem by checking for the 
          IO_CACHE::error == -1 after every call to Log_event:: print()
          and terminating the further execution.
        ------------------------------------------------------------
        revno: 2555.937.166
        committer: Inaam Rana <inaam.rana@oracle.com>
        branch nick: mysql-5.1
        timestamp: Thu 2012-05-24 12:37:03 -0400
        message:
          Bug #14100254 65389: MVCC IS BROKEN WITH IMPLICIT LOCK
          
          rb://1088
          approved by: Marko Makela
          
          This bug was introduced in early stages of plugin. We were not
          checking for an implicit lock on sec index rec for trx_id that is
          stamped on current version of the clust_index in case where the
          clust_index has a previous delete marked version.
        ------------------------------------------------------------
        revno: 2555.937.165
        committer: Annamalai Gurusami <annamalai.gurusami@oracle.com>
        branch nick: mysql-5.1
        timestamp: Mon 2012-05-21 17:25:40 +0530
        message:
          Bug #12752572 61579: REPLICATION FAILURE WHILE
          INNODB_AUTOINC_LOCK_MODE=1 AND USING TRIGGER
          
          When an insert stmt like "insert into t values (1),(2),(3)" is
          executed, the autoincrement values assigned to these three rows are
          expected to be contiguous.  In the given lock mode
          (innodb_autoinc_lock_mode=1), the auto inc lock will be released
          before the end of the statement.  So to make the autoincrement
          contiguous for a given statement, we need to reserve the auto inc
          values at the beginning of the statement.  
          
          Modified the fix based on review comment by Svoj.  
        ------------------------------------------------------------
        revno: 2555.937.164
        committer: Rohit Kalhans <rohit.kalhans@oracle.com>
        branch nick: mysql-5.1
        timestamp: Fri 2012-05-18 14:44:40 +0530
        message:
          BUG#14005409 - 64624
                
          Problem: After the fix for Bug#12589870, a new field that
          stores the length of db name was added in the buffer that
          stores the query to be executed. Unlike for the plain user
          session, the replication execution did not allocate the
          necessary chunk in Query-event constructor. This caused an
          invalid read while accessing this field.
                
          Solution: We fix this problem by allocating a necessary chunk
          in the buffer created in the Query_log_event::Query_log_event()
          and store the length of database name.
        ------------------------------------------------------------
        revno: 2555.937.163
        committer: Gopal Shankar <gopal.shankar@oracle.com>
        branch nick: thdctxdeadlock-51
        timestamp: Thu 2012-05-17 18:07:59 +0530
        message:
          Bug#12636001 : deadlock from thd_security_context
          
          PROBLEM:
          Threads end-up in deadlock due to locks acquired as described
          below,
          
          con1: Run Query on a table. 
            It is important that this SELECT must back-off while
            trying to open the t1 and enter into wait_for_condition().
            The SELECT then is blocked trying to lock mysys_var->mutex
            which is held by con3. The very significant fact here is
            that mysys_var->current_mutex will still point to LOCK_open,
            even if LOCK_open is no longer held by con1 at this point.
          
          con2: Try dropping table used in con1 or query some table.
            It will hold LOCK_open and be blocked trying to lock
            kernel_mutex held by con4.
          
          con3: Try killing the query run by con1.
            It will hold THD::LOCK_thd_data belonging to con1 while
            trying to lock mysys_var->current_mutex belonging to con1.
            But current_mutex will point to LOCK_open which is held
            by con2.
          
          con4: Get innodb engine status
            It will hold kernel_mutex, trying to lock THD::LOCK_thd_data
            belonging to con1 which is held by con3.
          
          So while technically only con2, con3 and con4 participate in the
          deadlock, con1's mysys_var->current_mutex pointing to LOCK_open
          is a vital component of the deadlock.
          
          CYCLE = (THD::LOCK_thd_data -> LOCK_open ->
                   kernel_mutex -> THD::LOCK_thd_data)
          
          FIX:
          LOCK_thd_data has responsibility of protecting,
          1) thd->query, thd->query_length
          2) VIO
          3) thd->mysys_var (used by KILL statement and shutdown)
          4) THD during thread delete.
          
          Among above responsibilities, 1), 2)and (3,4) seems to be three
          independent group of responsibility. If there is different LOCK
          owning responsibility of (3,4), the above mentioned deadlock cycle
          can be avoid. This fix introduces LOCK_thd_kill to handle
          responsibility (3,4), which eliminates the deadlock issue.
          
          Note: The problem is not found in 5.5. Introduction MDL subsystem 
          caused metadata locking responsibility to be moved from TDC/TC to
          MDL subsystem. Due to this, responsibility of LOCK_open is reduced. 
          As the use of LOCK_open is removed in open_table() and 
          mysql_rm_table() the above mentioned CYCLE does not form.
          Revision ID for changes,
          open_table() = dlenev@mysql.com-20100727133458-m3ua9oslnx8fbbvz
          mysql_rm_table() = jon.hauglid@oracle.com-20101116100012-kxep9txz2fxy3nmw
        ------------------------------------------------------------
        revno: 2555.937.162
        committer: Nuno Carvalho <nuno.carvalho@oracle.com>
        branch nick: mysql-5.1
        timestamp: Thu 2012-05-17 11:41:46 +0100
        message:
          Added combinations file to i_rpl suite.
        ------------------------------------------------------------
        revno: 2555.937.161
        committer: Annamalai Gurusami <annamalai.gurusami@oracle.com>
        branch nick: mysql-5.1
        timestamp: Thu 2012-05-17 10:15:54 +0530
        message:
          Fixing a pb2 test case.  All debug_sync test cases
          must include have_debug_sync.inc.  
        ------------------------------------------------------------
        revno: 2555.937.160
        committer: Annamalai Gurusami <annamalai.gurusami@oracle.com>
        branch nick: mysql-5.1
        timestamp: Wed 2012-05-16 16:36:49 +0530
        message:
          Bug #13943231: ALTER TABLE AFTER DISCARD MAY CRASH THE SERVER
          
          The following scenario crashes our mysql server:
          
          1.  set global innodb_file_per_table=1;
          2.  create table t1(c1 int) engine=innodb;
          3.  alter table t1 discard tablespace;
          4.  alter table t1 add unique index(c1);
          
          Step 4 crashes the server.  This patch introduces a check on discarded
          tablespace to avoid the crash.
          
          rb://1041 approved by Marko Makela
        ------------------------------------------------------------
        revno: 2555.937.159
        committer: Venkata Sidagam <venkata.sidagam@oracle.com>
        branch nick: mysql-5.1-13955256
        timestamp: Wed 2012-05-16 16:14:27 +0530
        message:
          Bug #13955256: KEYCACHE CRASHES, CORRUPTIONS/HANGS WITH, 
                         FULLTEXT INDEX AND CONCURRENT DML.
          
          Problem Statement:
          ------------------
          1) Create a table with FT index.
          2) Enable concurrent inserts.
          3) In multiple threads do below operations repeatedly
             a) truncate table
             b) insert into table ....
             c) select ... match .. against .. non-boolean/boolean mode
          
          After some time we could observe two different assert core dumps
          
          Analysis:
          --------
          1)assert core dump at key_read_cache():
          Two select threads operating in-parallel on same key 
          root block.
          1st select thread block->status is set to BLOCK_ERROR 
          because the my_pread() in read_block() is returning '0'. 
          Truncate table made the index file size as 1024 and pread 
          was asked to get the block of count bytes(1024 bytes) 
          from offset of 1024 which it cannot read since its 
          "end of file" and retuning '0' setting 
          "my_errno= HA_ERR_FILE_TOO_SHORT" and the key_file_length, 
          key_root[0] is same i.e. 1024. Since block status has BLOCK_ERROR 
          the 1st select thread enter into the free_block() and will 
          be under wait on conditional mutex by making status as 
          BLOCK_REASSIGNED and goes for wait_on_readers(). Other select 
          thread will also work on the same block and sees the status as 
          BLOCK_ERROR and enters into free_block(), checks for BLOCK_REASSIGNED 
          and asserting the server.
          
          2)assert core dump at key_write_cache():
          One select thread and One insert thread.
          Select thread gets the unlocks the 'keycache->cache_lock', 
          which allows other threads to continue and gets the pread() 
          return value as'0'(please see the explanation above) and 
          tries to get the lock on 'keycache->cache_lock' and waits 
          there for the lock.
          Insert thread requests for the block, block will be assigned 
          from the hash list and makes the page_status as 
          'PAGE_WAIT_TO_BE_READ' and goes for the read_block(), waits 
          in the queue since there are some other threads performing 
          reads on the same block.
          Select thread which was waiting for the 'keycache->cache_lock' 
          mutex in the read_block() will continue after getting the my_pread() 
          value as '0' and sets the block status as BLOCK_ERROR and goes to 
          the free_block() and go to the wait_for_readers().
          Now the insert thread will awake and continues. and checks 
          block->status as not BLOCK_READ and it asserts.  
          
          Fix:
          ---
          In the full text code, multiple readers of index file is not guarded. 
          Hence added below below code in _ft2_search() and walk_and_match().
          
          to lock the key_root I have used below code in _ft2_search()
           if (info->s->concurrent_insert)
              mysql_rwlock_rdlock(&share->key_root_lock[0]);
          
          and to unlock 
           if (info->s->concurrent_insert)
             mysql_rwlock_unlock(&share->key_root_lock[0]);
        ------------------------------------------------------------
        revno: 2555.937.158
        committer: Annamalai Gurusami <annamalai.gurusami@oracle.com>
        branch nick: mysql-5.1
        timestamp: Wed 2012-05-16 11:17:48 +0530
        message:
          Bug #12752572 61579: REPLICATION FAILURE WHILE
          INNODB_AUTOINC_LOCK_MODE=1 AND USING TRIGGER
          
          When an insert stmt like "insert into t values (1),(2),(3)" is
          executed, the autoincrement values assigned to these three rows are
          expected to be contiguous.  In the given lock mode
          (innodb_autoinc_lock_mode=1), the auto inc lock will be released
          before the end of the statement.  So to make the autoincrement
          contiguous for a given statement, we need to reserve the auto inc
          values at the beginning of the statement.  
          
          rb://1074 approved by Alexander Nozdrin
        ------------------------------------------------------------
        revno: 2555.937.157
        committer: Nuno Carvalho <nuno.carvalho@oracle.com>
        branch nick: mysql-5.1
        timestamp: Tue 2012-05-15 22:06:48 +0100
        message:
          BUG#11754117 - 45670: INTVAR_EVENTS FOR FILTERED-OUT QUERY_LOG_EVENTS ARE EXECUTED
          
          Improved random number filtering verification on
          rpl_filter_tables_not_exist test.
        ------------------------------------------------------------
        revno: 2555.937.156
        committer: Marko M?kel? <marko.makela@oracle.com>
        branch nick: mysql-5.1
        timestamp: Tue 2012-05-15 15:04:39 +0300
        message:
          Bug#14025221 FOREIGN KEY REFERENCES FREED MEMORY AFTER DROP INDEX
          
          dict_table_replace_index_in_foreign_list(): Replace the dropped index
          also in the foreign key constraints of child tables that are
          referencing this table.
          
          row_ins_check_foreign_constraint(): If the underlying index is
          missing, refuse the operation.
          
          rb:1051 approved by Jimmy Yang
        ------------------------------------------------------------
        revno: 2555.937.155
        committer: Georgi Kodinov <Georgi.Kodinov@Oracle.com>
        branch nick: B11761822-5.1
        timestamp: Tue 2012-05-15 13:12:22 +0300
        message:
          Bug #11761822: yassl rejects valid certificate which openssl accepts
              
          Applied the fix that updates yaSSL to 2.2.1 and fixes parsing this 
          particular certificate.
          Added a test case with the certificate itself.
        ------------------------------------------------------------
        revno: 2555.937.154
        committer: Bjorn Munch <bjorn.munch@oracle.com>
        branch nick: int-51
        timestamp: Tue 2012-05-15 09:14:44 +0200
        message:
          Added some extra optional path to test suites
        ------------------------------------------------------------
        revno: 2555.937.153
        committer: Annamalai Gurusami <annamalai.gurusami@oracle.com>
        branch nick: mysql-5.1
        timestamp: Thu 2012-05-10 10:18:31 +0530
        message:
          Bug #14007649 65111: INNODB SOMETIMES FAILS TO UPDATE ROWS INSERTED 
          BY A CONCURRENT TRANSACTIO
          
          The member function QUICK_RANGE_SELECT::init_ror_merged_scan() performs
          a table handler clone. Innodb does not provide a clone operation.  
          The ha_innobase::clone() is not there. The handler::clone() does not 
          take care of the ha_innobase->prebuilt->select_lock_type.  Because of 
          this what happens is that for one index we do a locking read, and 
          for the other index we were doing a non-locking (consistent) read. 
          The patch introduces ha_innobase::clone() member function.  
          It is implemented similar to ha_myisam::clone().  It calls the 
          base class handler::clone() and then does any additional operation 
          required.  I am setting the ha_innobase->prebuilt->select_lock_type 
          correctly. 
          
          rb://1060 approved by Marko
        ------------------------------------------------------------
        revno: 2555.937.152 [merge]
        committer: Sunanda Menon <sunanda.menon@oracle.com>
        branch nick: mysql-5.1
        timestamp: Tue 2012-05-08 07:19:14 +0200
        message:
          Merge from mysql-5.1.63-release
        ------------------------------------------------------------
        revno: 2555.937.151
        committer: Venkata Sidagam <venkata.sidagam@oracle.com>
        branch nick: mysql-5.1-bug-45740
        timestamp: Mon 2012-05-07 16:46:44 +0530
        message:
          Bug #11754178 45740: MYSQLDUMP DOESN'T DUMP GENERAL_LOG AND SLOW_QUERY 
                               CAUSES RESTORE PROBLEM
          Problem Statement:
          ------------------
          mysqldump is not having the dump stmts for general_log and slow_log
          tables. That is because of the fix for Bug#26121. Hence, after 
          dropping the mysql database, and applying the dump by enabling the 
          logging, "'general_log' table not found" errors are logged into the 
          server log file.
          
          Analysis:
          ---------
          As part of the fix for Bug#26121, we skipped the dumping of tables 
          for general_log and slow_log, because the data dump of those tables 
          are taking LOCKS, which is not allowed for log tables.
          
          Fix:
          ----
          We came up with an approach that instead of taking both meta data 
          and data dump information for those tables, take only the meta data 
          dump which doesn't need LOCKS.
          As part of fixing the issue we came up with below algorithm.
          Design before fix:
          1) mysql database is having tables like db, event,... general_log,
             ... slow_log...
          2) Skip general_log and slow_log while preparing the tables list
          3) Take the TL_READ lock on tables which are present in the table 
             list and do 'show create table'.
          4) Release the lock.
          
          Design with the fix:
          1) mysql database is having tables like db, event,... general_log,
             ... slow_log...
          2) Skip general_log and slow_log while preparing the tables list
          3) Explicitly call the 'show create table' for general_log and 
             slow_log
          3) Take the TL_READ lock on tables which are present in the table 
             list and do 'show create table'.
          4) Release the lock.
          
          While taking the meta data dump for general_log and slow_log the 
          "CREATE TABLE" is replaced with "CREATE TABLE IF NOT EXISTS". 
          This is because we skipped "DROP TABLE" for those tables, 
          "DROP TABLE" fails for these tables if logging is enabled. 
          Customer is applying the dump by enabling logging so, if the dump 
          has "DROP TABLE" it will fail. Hence, removed the "DROP TABLE" 
          stmts for those tables.
            
          After the fix we could observe "Table 'mysql.general_log' 
          doesn't exist" errors initially that is because in the customer 
          scenario they are dropping the mysql database by enabling the 
          logging, Hence, those errors are expected. Once we apply the 
          dump which is taken before the "drop database mysql", the errors 
          will not be there.
        ------------------------------------------------------------
        revno: 2555.937.150
        committer: Yasufumi Kinoshita <yasufumi.kinoshita@oracle.com>
        branch nick: mysql-5.1
        timestamp: Fri 2012-04-27 19:38:13 +0900
        message:
          Bug#11758510 (#50723): INNODB CHECK TABLE FATAL SEMAPHORE WAIT TIMEOUT POSSIBLY TOO SHORT FOR BI
          Fixed not to check timeout during the check table.
        ------------------------------------------------------------
        revno: 2555.937.149
        committer: irana <irana@dscczz01.us.oracle.com>
        branch nick: mysql-5.1
        timestamp: Thu 2012-04-26 08:17:14 -0700
        message:
          InnoDB: Adjust error message when a dropped tablespace is accessed.
        ------------------------------------------------------------
        revno: 2555.937.148 [merge]
        committer: Andrei Elkin <andrei.elkin@oracle.com>
        branch nick: 5.1-bug11754117-45670-insert_id_gets_through
        timestamp: Mon 2012-04-23 12:05:05 +0300
        message:
          merge from 5.1 repo
            ------------------------------------------------------------
            revno: 2555.961.1
            committer: Nuno Carvalho <nuno.carvalho@oracle.com>
            branch nick: mysql-5.1
            timestamp: Fri 2012-04-20 22:25:59 +0100
            message:
              BUG#13979418: SHOW BINLOG EVENTS MAY CRASH THE SERVER
              
              The function mysql_show_binlog_events has a local stack variable
              'LOG_INFO linfo;', which is assigned to thd->current_linfo, however
              this variable goes out of scope and is destroyed before clean
              thd->current_linfo.
              
              The problem is solved by moving 'LOG_INFO linfo;' to function scope.
        ------------------------------------------------------------
        revno: 2555.937.147
        committer: Andrei Elkin <andrei.elkin@oracle.com>
        branch nick: 5.1-bug11754117-45670-insert_id_gets_through
        timestamp: Mon 2012-04-23 11:51:19 +0300
        message:
          BUG#11754117 
          
          rpl_auto_increment_bug45679.test is refined due to not fixed in 5.1 Bug11749859-39934.
        ------------------------------------------------------------
        revno: 2555.937.146
        committer: Andrei Elkin <andrei.elkin@oracle.com>
        branch nick: 5.1-bug11754117-45670-insert_id_gets_through
        timestamp: Fri 2012-04-20 19:41:20 +0300
        message:
          BUG#11754117 incorrect logging of INSERT into auto-increment 
          BUG#11761686 insert_id event is not filtered.
            
          Two issues are covered.
            
          INSERT into autoincrement field which is not the first part in the composed primary key 
          is unsafe by autoincrement logging design. The case is specific to MyISAM engine
          because Innodb does not allow such table definition.
            
          However no warnings and row-format logging in the MIXED mode was done, and
          that is fixed.
            
          Int-, Rand-, User-var log-events were not filtered along with their parent
          query that made possible them to screw up execution context of the following
          query.
            
          Fixed with deferring their execution until the parent query.
          
          ******
          Bug#11754117 
          
          Post review fixes.
        ------------------------------------------------------------
        revno: 2555.937.145
        committer: Mayank Prasad <mayank.prasad@oracle.com>
        branch nick: show_table
        timestamp: Thu 2012-04-19 14:57:34 +0530
        message:
          BUG#12427262 : 60961: SHOW TABLES VERY SLOW WHEN NOT IN SYSTEM DISK CACHE
          
          Reason:
           This is a regression happened because of changes done in code refactoring 
           in 5.1 from 5.0.
          
          Issue: 
           While doing "Show tables" lex->verbose was being checked to avoid opening
           FRM files to get table type. In case of "Show full table", lex->verbose
           is true to indicate table type is required. In 5.0, this check was
           present which got missing in >=5.5.
          
          Fix:
           Added the required check to avoid opening FRM files unnecessarily in case
           of "Show tables".
        ------------------------------------------------------------
        revno: 2555.937.144
        committer: Tor Didriksen <tor.didriksen@oracle.com>
        branch nick: 5.1
        timestamp: Wed 2012-04-18 14:13:13 +0200
        message:
          new header file must be listed in Makefile.am
        ------------------------------------------------------------
        revno: 2555.937.143
        committer: Tor Didriksen <tor.didriksen@oracle.com>
        branch nick: 5.1
        timestamp: Wed 2012-04-18 13:14:05 +0200
        message:
          Backport 5.5=>5.1 Patch for Bug#13805127: 
          Stored program cache produces wrong result in same THD.
        ------------------------------------------------------------
        revno: 2555.937.142
        committer: Nuno Carvalho <nuno.carvalho@oracle.com>
        branch nick: mysql-5.1
        timestamp: Wed 2012-04-18 10:08:01 +0100
        message:
          WL#6236: Allow SHOW MASTER LOGS and SHOW BINARY LOGS with REPLICATION CLIENT
          
          Currently SHOW MASTER LOGS and SHOW BINARY LOGS require the SUPER
          privilege. Monitoring tools (such as MEM) often want to check this 
          output - for instance MEM generates the SUM of the sizes of the logs 
          reported here, and puts that in the Replication overview within the MEM
          Dashboard.
          However, because of the SUPER requirement, these tools often have an 
          account that holds open the connection whilst monitoring, and can lock
          out administrators when the server gets overloaded and reaches
          max_connections - there is already another SUPER privileged account
          connected, the "monitor". 
          
          As SHOW MASTER STATUS, and all other replication related statements,
          return with either REPLICATION CLIENT or SUPER privileges, this worklog 
          is to make SHOW MASTER LOGS and SHOW BINARY LOGS be consistent with this
          as well, and allow both of these commands with either SUPER or 
          REPLICATION CLIENT. 
          This allows monitoring tools to not require a SUPER privilege any more,
          so is safer in overloaded situations, as well as being more secure, as 
          lighter privileges can be given to users of such tools or scripts.
        ------------------------------------------------------------
        revno: 2555.937.141
        committer: Chaithra Gopalareddy <chaithra.gopalareddy@oracle.com>
        branch nick: mysql-5.1
        timestamp: Wed 2012-04-18 11:25:01 +0530
        message:
          Bug#12713907:STRANGE OPTIMIZE & WRONG RESULT UNDER
                             ORDER BY COUNT(*) LIMIT.
          
          PROBLEM:
          With respect to problem in the bug description, we
          exhibit different behaviors for the two tables
          presented, because innodb statistics (rec_per_key
          in this case) are updated for the first table
          and not so for the second one. As a result the
          query plan gets changed in test_if_skip_sort_order
          to use 'index' scan. Hence the difference in the
          explain output. (NOTE: We can reproduce the problem
          with first table by reducing the number of tuples
          and changing the table structure)
          
          The varied output w.r.t the query on the second table
          is because of the result in the query plan change.
          When a query plan is changed to use 'index' scan,
          after the call to test_if_skip_sort_order, we set
          keyread to TRUE immedietly. If for some reason
          we drop this index scan for a filesort later on,
          we fetch only the keys not the entire tuple.
          As a result we would see junk values in the result set.
          
          Following is the code flow:
          
          Call test_if_skip_sort_order
          -Choose an index to give sorted output
          -If this is a covering index, set_keyread to TRUE
          -Set the scan to INDEX scan
          
          Call test_if_skip_sort_order second time
          -Index is not chosen (note that we do not pass the
          actual limit value second time. Hence we do not choose
          index scan second time which in itself is a bug fixed
          in 5.6 with WL#5558)
          -goto filesort
          
          Call filesort
          -Create quick range on a different index
          -Since keyread is set to TRUE, we fetch only the columns of
          the index
          -results in the required columns are not fetched
          
          FIX:
          Remove the call to set_keyread(TRUE) from
          test_if_skip_sort_order. The access function which is
          'join_read_first' or 'join_read_last' calls set_keyread anyways.
        ------------------------------------------------------------
        revno: 2555.937.140
        committer: Georgi Kodinov <Georgi.Kodinov@Oracle.com>
        branch nick: mysql-5.1
        timestamp: Tue 2012-04-17 13:25:41 +0300
        message:
          Raise version number after cloning 5.1.63
    ------------------------------------------------------------
    revno: 2585.188.12 [merge]
    committer: Martin Skold <Martin.Skold@oracle.com>
    branch nick: mysql-5.1-telco-6.3
    timestamp: Mon 2012-10-22 09:00:49 +0200
    message:
      Merged in 5.1.63
    ------------------------------------------------------------
    revno: 2585.188.11 [merge]
    committer: Martin Skold <Martin.Skold@oracle.com>
    branch nick: mysql-5.1-telco-6.3
    timestamp: Mon 2012-10-22 08:37:47 +0200
    message:
      Merged in 5.1.62
------------------------------------------------------------
revno: 5007 [merge]
committer: Pekka Nousiainen <pekka.nousiainen@oracle.com>
branch nick: ms-wl5929-70
timestamp: Thu 2012-10-18 09:56:34 +0300
message:
  merge
    ------------------------------------------------------------
    revno: 4999.1.3
    committer: magnus.blaudd@oracle.com
    branch nick: 7.0
    timestamp: Wed 2012-10-17 17:58:59 +0200
    message:
      Fix traditional windows compile failure caused by "variable after code"
    ------------------------------------------------------------
    revno: 4999.1.2
    committer: magnus.blaudd@oracle.com
    branch nick: 7.0
    timestamp: Wed 2012-10-17 15:18:33 +0200
    message:
      Bug#14730537    NDB_MGMD --CONFIG-CACHE=FALSE OFTEN HANGS IN SHUTDOWN
         - Problem in mysys on certain MySQL versions where the thread id is
            used as the thread identifier. Since the thread id may be reused
            by another thread(when the current thread exits) one may end up
            waiting for the wrong thread.
        - Workaround by using a HANDLE which is opened by thread itself in
           'ndb_thread_wrapper' and subsequently used to wait for the
           thread in 'NdbThread_WaitFor'.
    ------------------------------------------------------------
    revno: 4999.1.1
    committer: magnus.blaudd@oracle.com
    branch nick: 7.0
    timestamp: Wed 2012-10-17 14:59:41 +0200
    message:
      Backport fix for ndb_backup_rate.test to 7.0
------------------------------------------------------------
revno: 5006
committer: Pekka Nousiainen <pekka.nousiainen@oracle.com>
branch nick: ms-wl5929-70
timestamp: Wed 2012-10-17 16:08:34 +0300
message:
  wl#5929 sp_cleanup.diff
  remove obsolete packed methods
------------------------------------------------------------
revno: 5005
committer: Pekka Nousiainen <pekka.nousiainen@oracle.com>
branch nick: ms-wl5929-70
timestamp: Wed 2012-10-17 16:07:53 +0300
message:
  wl#5929 sp_marker.diff
  DBTC CommitAckMarker
------------------------------------------------------------
revno: 5004
committer: Pekka Nousiainen <pekka.nousiainen@oracle.com>
branch nick: ms-wl5929-70
timestamp: Wed 2012-10-17 16:07:11 +0300
message:
  wl#5929 sp_firetrig.diff
  sendFireTrigReqLqh, sendFireTrigConfTc
------------------------------------------------------------
revno: 5003
committer: Pekka Nousiainen <pekka.nousiainen@oracle.com>
branch nick: ms-wl5929-70
timestamp: Wed 2012-10-17 16:06:25 +0300
message:
  wl#5929 sp_keyconf.diff
  keyconf from DBLQH, DBTC
------------------------------------------------------------
revno: 5002
committer: Pekka Nousiainen <pekka.nousiainen@oracle.com>
branch nick: ms-wl5929-70
timestamp: Wed 2012-10-17 16:05:34 +0300
message:
  wl#5929 sp_commit.diff
  commit/committed, complete/completed
------------------------------------------------------------
revno: 5001
committer: Pekka Nousiainen <pekka.nousiainen@oracle.com>
branch nick: ms-wl5929-70
timestamp: Wed 2012-10-17 16:04:33 +0300
message:
  wl#5929 sp_framework.diff
  PackedWordsContainer, sendPackedSignal
------------------------------------------------------------
revno: 5000
committer: Pekka Nousiainen <pekka.nousiainen@oracle.com>
branch nick: ms-wl5929-70
timestamp: Wed 2012-10-17 13:44:50 +0300
message:
  wl#5929 sp_fix01.diff
  bug#14772503 ndb_apply_status regression
------------------------------------------------------------
revno: 4999
committer: Maitrayi Sabaratnam <maitrayi.sabaratnam@oracle.com>
branch nick: mysql-5.1-telco-7.0-bugfix
timestamp: Fri 2012-10-12 10:12:01 +0200
message:
  Bug #4671934 - NDB_CONFIG: DEFAULT VALUES MISSING FOR PARAMETERS
------------------------------------------------------------
revno: 4998
committer: magnus.blaudd@oracle.com
branch nick: 7.0
timestamp: Thu 2012-10-11 15:08:20 +0200
message:
  ndb_mgm
   - Pressing Ctrl-C on certain platforms causes NULL to be returned from
    'readline' and that should trigger a graceful exit of ndb_mgm
------------------------------------------------------------
revno: 4997 [merge]
committer: magnus.blaudd@oracle.com
branch nick: mysql-5.1-telco-7.0
timestamp: Thu 2012-10-11 12:32:22 +0200
message:
  Merge
    ------------------------------------------------------------
    revno: 4955.1.1
    committer: magnus.blaudd@oracle.com
    branch nick: 7.0
    timestamp: Tue 2012-07-10 10:48:58 +0200
    message:
      WL#6224 Adapt MySQL Cluster to 5.6
       - more load_default -> ndb_load_default changes
       - one place still using raw my_getopt.h and my_default.h ->
          use HAVE_MY_DEFAULT_H
------------------------------------------------------------
revno: 4996
committer: Ole John Aske <ole.john.aske@oracle.com>
branch nick: mysql-5.1-telco-7.0
timestamp: Thu 2012-10-04 13:27:10 +0200
message:
  Refactor ERROR_INSERT code in SPJ block such that actions taken
  in composite ERROR_INSERT conditions can be recognized as unused
  code and removed by compiler when -DERROR_INSERT was not defined.
  
  Generally the ERROR_INSERT(nn) should be checked *first* in a
  '||' term in the if-conditions, else anything preceding the 
  ERROR_INSERT(n) has to be evaluated:
  
  in the condition:
  
        if (ERROR_INSERTED_CLEAR(17060) ||
            ((rand() % 7) == 0 && ERROR_INSERTED_CLEAR(17061)) ||
            ((treeNodePtr.p->isLeaf() && ERROR_INSERTED_CLEAR(17062))) ||
            ((treeNodePtr.p->m_parentPtrI != RNIL && ERROR_INSERTED_CLEAR(17063))))
  
  Both the function 'rand()' and 'isLeaf()' was called even when
  compiled wo/ -DERROR_INSERT!
  
  So the pattern in this fix is to rewrite such construct to:
  
        if (ERROR_INSERTED(17060) ||
           (ERROR_INSERTED(17061) && (treeNodePtr.p->isLeaf())) ||
           (ERROR_INSERTED(17062) && (treeNodePtr.p->m_parentPtrI != RNIL)) ||
           (ERROR_INSERTED(17063) && (rand() % 7) == 0))
        {
          jam();
          CLEAR_ERROR_INSERT_VALUE;
  
  Which could then be entirely removed when compiled wo / DERROR_INSERT.
------------------------------------------------------------
revno: 4995
committer: Ole John Aske <ole.john.aske@oracle.com>
branch nick: mysql-5.1-telco-7.0
timestamp: Thu 2012-10-04 09:27:22 +0200
message:
  Fix for Bug#14648712 CALLING PROGERROR WITHOUT THE THIRD ARGUMENT RESULTS IN SIGSEGV
  
  The function ndb_basename() is called from ErrorReporter::handleError() with
  'problemData' as argument. As problemData is allowed to be NULL, that used to
  crash ndb_basename() when 'strlen()' was called.
  
  This fix will check for NULL argument to ndb_basename() and then
  return NULL. The returned NULL value is then later correctly 
  handled in ::WriteMessage() and EvenLogger::info() which ::handleErrro()
  may then later call with a NULL argument.
------------------------------------------------------------
revno: 4994
committer: Ole John Aske <ole.john.aske@oracle.com>
branch nick: mysql-5.1-telco-7.0
timestamp: Thu 2012-10-04 09:13:01 +0200
message:
  Fix for memory leak in testSpj.cpp:
  
  HugoQueryBuilder::createQuery() takes the argument 'bool takeOwnership = false',
  where 'takeOwnerShip' decides whether the caller will be responsible (== true)
  for destruction the query object. Default behavior (==false) is that all created
  query objects will be destructed together with the HugoQueryBuilder.
  
  testSpj incorrectly supplied the 'Ndb*' as first argument to ::creatQuery().
  This was auto converted to a boolean 'true' value when 'Ndb* != NULL' - thus
  incorrectly specifying that testSpj will destruct the query objects.
  
  This fix removes the incorrect argument, such that the default 'false' value
  is used instead. 
------------------------------------------------------------
revno: 4993
committer: Frazer Clement <frazer.clement@oracle.com>
branch nick: mysql-5.1-telco-7.0
timestamp: Sat 2012-09-29 00:36:17 +0100
message:
  Bug #14685458 NDB : REPORT BYTES SENT/RECEIVED ACROSS TRANSPORTERS
  
  This patch extends the ndbinfo.transporters table with a number of 
  new columns : 
    remote_address
    bytes_sent
    bytes_received
  
  For each transporter (point to point link between
  nodes), these columns indicate the address of the 'remote' 
  end of the link, the number of bytes sent to that node, and the
  number of bytes received from that node.
  
  The byte counts are reset on disconnect.
------------------------------------------------------------
revno: 4992 [merge]
committer: Mauritz Sundell <mauritz.sundell@oracle.com>
branch nick: mysql-5.1-telco-7.0
timestamp: Fri 2012-09-28 15:37:33 +0200
message:
  ndb - removes the ~45Mrow limit per partition
  
  Bug #13844405 - FRAGMENTS ARE LIMITED TO ~45M ROWS. (GOT ERROR 633)
  Bug #14000373	SILENT DATA INCONSISTENCY WITH BIG FRAGMENTS (MORE THAN ~45MROWS)
  
  There have been a limit on the number of rows in a partition due to the
  limitations in the implementation of hash index used for primary key/unique key
  hash index, and the primary key hash index is mandatory for a table.
    ------------------------------------------------------------
    revno: 4991.1.11
    committer: Mauritz Sundell <mauritz.sundell@oracle.com>
    branch nick: mysql-5.1-telco-7.0-bug14000373
    timestamp: Fri 2012-09-28 15:15:29 +0200
    message:
      ndb - (refactor) change logic for expand/shrink control in dbacc
      
      change expandFlag to expandShrinkQueued
      
      The only values currently used for expandFlag was 0 and 2,
      reenable_expand_after_redo_log_exection_complete() was never
      called.
      
      So i replaced the logic with on boolean flag expandShrinkQueued
      that is set true on send of expand or shrink signal, and set
      false when signal is processed. The flag is checked before send
      of such signal and if set true, the send is suppressed, meaning
      that at most one expand or shrink signal can be queued at the
      time.
    ------------------------------------------------------------
    revno: 4991.1.10
    committer: Mauritz Sundell <mauritz.sundell@oracle.com>
    branch nick: mysql-5.1-telco-7.0-bug14000373
    timestamp: Fri 2012-09-28 15:11:17 +0200
    message:
      ndb - store cached hash value with element in hash index (dbacc working again)
      
      This patch repairs dbacc and reintroduce the cached hash value disappered in
      earlier patch 'removing cached part of hash value for element in hash index'
      with revision-id: mauritz.sundell@oracle.com-20120928130216-6lv1u6yxkh5vtkei
      
      this patch reintroduce partial storage of the elements
      hash value with the element.
      This to not always need to calculate the hash value at
      expand or lookup.
      
      With this patch atmost 15 bits of the hash value is stored
      for the element.
      
      In linear hashing the lowest bits of the hashvalue is used
      as address bits.
      
      As the expand make one more bit of hash value used as address,
      one do not need to store that bit, so we shift out one bits
      from the stored hash value for each element in both left in 
      the splitted bucket and those moved to the new top bucket.
      
      On shrink one need to restore one bit of the stored hashvalue,
      and for elements moved from the old top bucket to the new
      split bucket a one is shifted in and for the elements already
      in the split bucket a zero is shifted in.
      
      Since expand removes one stored hash value bit, fewer bits are
      available for quick compare on lookup.  To guarantee for enough
      bits after expand one recalculates the full hashvalue if less
      than MIN_HASH_COMPARE_BITS bits, and fill up with 15 valid bits
      in the stored hash bits.
    ------------------------------------------------------------
    revno: 4991.1.9
    committer: Mauritz Sundell <mauritz.sundell@oracle.com>
    branch nick: mysql-5.1-telco-7.0-bug14000373
    timestamp: Fri 2012-09-28 15:10:15 +0200
    message:
      ndb - introduce new integer type storing variable number of bits
      
      it use an underlying integer type.
      the most significant set bit marks the limit of valid bits,
      only the lower significant bits are well defined.
      The more significant bits will be unset, and they as well as
      the top set bit will be treated as having unknown value.
      
      the method match() returns true if the lower bits that is valid
      for both arguments are equal, indicating that the full values
      may be equal. Otherwise it returns false, meaning that the two
      values definitly is unequal.
      
      There are also methods for shifting out or in lower bits, and
      to inspect one bit or take out some range of lower bits.
    ------------------------------------------------------------
    revno: 4991.1.8
    committer: Mauritz Sundell <mauritz.sundell@oracle.com>
    branch nick: mysql-5.1-telco-7.0-bug14000373
    timestamp: Fri 2012-09-28 15:08:59 +0200
    message:
      ndb - (refctor) introduce bucket page methods in dbacc (dbacc still broken)
      
      refactor code using methods for finding bucket page and
      index on page instead of explicit shifting and masking
    ------------------------------------------------------------
    revno: 4991.1.7
    committer: Mauritz Sundell <mauritz.sundell@oracle.com>
    branch nick: mysql-5.1-telco-7.0-bug14000373
    timestamp: Fri 2012-09-28 15:07:05 +0200
    message:
      ndb - use lhlevel class in dbacc (dbacc still broken)
      
      replaces the linear hashing related members of fragment record
      with member of new lhlevel class.
    ------------------------------------------------------------
    revno: 4991.1.6
    committer: Mauritz Sundell <mauritz.sundell@oracle.com>
    branch nick: mysql-5.1-telco-7.0-bug14000373
    timestamp: Fri 2012-09-28 15:02:16 +0200
    message:
      ndb - (refactor) removing cached part of hash value for element in hash index - BROKEN!  
      
      remove use of storing some hash bits with element
      
      Note, this patch degrades performance significantly
      and should not be pushed alone but together with
      later patch in patch set, reintroducing storing bits
      from hash.
      
      The stored hash bits was used on expand and on lookup
      to quickly discard non matching elements in same bucket.
      
      Now the hash is recalculated if needed or element matching
      always compares element and key by values.
    ------------------------------------------------------------
    revno: 4991.1.5
    committer: Mauritz Sundell <mauritz.sundell@oracle.com>
    branch nick: mysql-5.1-telco-7.0-bug14000373
    timestamp: Fri 2012-09-28 14:58:59 +0200
    message:
      ndb - (test) testcase for bug 14000373, reaching the ~45Mrow limit for one partition
      
      using error_insert 3003 to lower the limit to under 5 million rows for one partition.
    ------------------------------------------------------------
    revno: 4991.1.4
    committer: Mauritz Sundell <mauritz.sundell@oracle.com>
    branch nick: mysql-5.1-telco-7.0-bug14000373
    timestamp: Fri 2012-09-28 14:55:26 +0200
    message:
      ndb - introduce class LHLevel for level handling in linear hashing
      
      only handles the numbering of buckets, no handling of the bucket them selfes.
      
      bucket_number() translates hash to bucket_number
      
      expand_buckets() gives the bucket numbers for buckets involved in next expand
      expand() adjusts its state supporting and increment of the size by one
      
      shrink_buckets() gives the bucket numbers for buckets involved in next shrink
      shrink adjusts() its state supporting and decrement of the size by one
    ------------------------------------------------------------
    revno: 4991.1.3
    committer: Mauritz Sundell <mauritz.sundell@oracle.com>
    branch nick: mysql-5.1-telco-7.0-bug14000373
    timestamp: Fri 2012-09-28 14:52:21 +0200
    message:
      ndb - (refactor) remove unused member lhfragbits in dbacc
      
      remove senseless use of lhfragbits
      
      the use of lhfragbits to skip bits in hash value in dbacc
      are removed. this was probably some legacy code. now
      fragment are hashed from second word in the elements md5 sum.
      
      more cleanup is possible, also in signals but i do not
      do it in this patch.
    ------------------------------------------------------------
    revno: 4991.1.2
    committer: Mauritz Sundell <mauritz.sundell@oracle.com>
    branch nick: mysql-5.1-telco-7.0-bug14000373
    timestamp: Fri 2012-09-28 14:38:38 +0200
    message:
      ndb - cleanup internal signals for shrink/expand primary key hashtable
      
      Cleanup use of signals for shrink/expand in dbacc
      
      The shrink/expand signals sent from lqh are not used - removed.
      
      The parameters p and maxp are not used from signal - removed.
    ------------------------------------------------------------
    revno: 4991.1.1
    committer: Mauritz Sundell <mauritz.sundell@oracle.com>
    branch nick: mysql-5.1-telco-7.0-bug14000373
    timestamp: Fri 2012-09-28 14:28:53 +0200
    message:
      ndb - use 64bit for slack variables in dbacc
      
      in the hash table for primary key there is a slack
      variable to keep track on over- and underuse of hash
      table memory in words.
      
      the slack variables are 32bit and will not be enough
      in future since in theory one can have 2^37 buckets,
      each with atleast 28 words. so i makes it 64bit now.
------------------------------------------------------------
revno: 4991
committer: Ole John Aske <ole.john.aske@oracle.com>
branch nick: mysql-5.1-telco-7.0
timestamp: Tue 2012-09-25 13:27:32 +0200
message:
  Fix for bug#14550056 BUSY WAIT IN DBTC::REMOVEMARKERFORFAILEDAPI IF CLIENT DISCONNECT WITH OPEN TXNS
  
  In case Dbtc::removeMarkerForFailedAPI() waits for open transactions to be terminated,
  it should do that with a 'sendSignalWithDelay(..., GSN_CONTINUEB, 1ms, ...)', instead of
  a plain 'sendSignal()' which will cause a busy loop.
------------------------------------------------------------
revno: 4990
committer: Maitrayi Sabaratnam <maitrayi.sabaratnam@oracle.com>
branch nick: mysql-5.1-telco-7.0-constring
timestamp: Tue 2012-09-25 12:35:15 +0200
message:
  Bug#14329309 - ADD NUMBER OF RETRIES AND DELAY BETWEEN RETRIES AS START OPTIONS TO NDBD
------------------------------------------------------------
revno: 4989
committer: Mauritz Sundell <mauritz.sundell@oracle.com>
branch nick: mysql-5.1-telco-7.0
timestamp: Fri 2012-09-21 15:57:28 +0200
message:
  ndb - regenerate result for test ndb.ndb_native_default_support
------------------------------------------------------------
revno: 4988
committer: Mauritz Sundell <mauritz.sundell@oracle.com>
branch nick: mysql-5.1-telco-7.0
timestamp: Fri 2012-09-21 15:51:42 +0200
message:
  ndb - backport vector changes from 7.2
  
  revision-id: ole.john.aske@oracle.com-20120411084506-pckxeb3s8fvo67sw
------------------------------------------------------------
revno: 4987
committer: Mauritz Sundell <mauritz.sundell@oracle.com>
branch nick: mysql-5.1-telco-7.0-bug14645319
timestamp: Fri 2012-09-21 14:34:28 +0200
message:
  ndb - make online reorg change hashmap size if appropriate
  
  Bug #14645319ONLINE REORGANIZE CAN NOT USE BIGGER HASHMAP ON OLD TABLES
  
  before online reorg never changed hashmap size.
  
  now it either keeps the old size or use the hardcoded
  default hashmap size.
  
  changing hashmap size only occur if the number of
  fragments have increased and the old hashmap size
  is not a multiple of the new fragment count.
  also the bigger hashmap size must be a multiple of
  the old hashmap size to guarantee that data are moved
  from old fragments to new fragments only, therefore
  the old hashmap size is used if that is not true.
  
  this means that after an upgrade from a ndb version
  supporting a smaller hashmap size to a ndb version
  supporting a bigger hashmap size, will be downgradable
  as long as no new tables are created or an online
  reorg have been run after changing the number of
  fragments (implictly by adding nodes, or changing maxrows,
  or explicitly added partitions).
  
  NOTE: neither unique index nor blob tables are reorganized.
------------------------------------------------------------
revno: 4986
committer: Mauritz Sundell <mauritz.sundell@oracle.com>
branch nick: mysql-5.1-telco-7.0-bug14645319
timestamp: Fri 2012-09-21 14:26:26 +0200
message:
  ndb - testcase for check that online reorg can extend hashmap
------------------------------------------------------------
revno: 4985
committer: Mauritz Sundell <mauritz.sundell@oracle.com>
branch nick: mysql-5.1-telco-7.0-bug14645319
timestamp: Fri 2012-09-21 14:25:04 +0200
message:
  ndb - let ndb_desc show hashmap for table and index
  
  added option --table/-t <tablename> to make ndb_desc try to find index.
  
  Example,
  ndb_desc -d test -t mytable 'myindex$unique'
  
  The printing of Tables are moved into NdbDictionary.cpp
  instead of NDBT_Table.cpp.  And methods for printing of
  other NdbDictionary objects are added (using operator<<).
  
  ndb_desc can print Index information.
  
  HashMap are printed for Tables.
  
  Value of FragmentType is now printed as text not number.
------------------------------------------------------------
revno: 4984 [merge]
committer: magnus.blaudd@oracle.com
branch nick: 7.0
timestamp: Fri 2012-09-21 13:16:42 +0200
message:
  Merge
    ------------------------------------------------------------
    revno: 4966.1.4
    committer: magnus.blaudd@oracle.com
    branch nick: 7.0
    timestamp: Wed 2012-09-12 15:51:08 +0200
    message:
      ndb
       - remove unusued defines from ha_ndb_index_stat.h
    ------------------------------------------------------------
    revno: 4966.1.3
    committer: magnus.blaudd@oracle.com
    branch nick: 7.0
    timestamp: Wed 2012-09-12 15:49:08 +0200
    message:
      WL#6224 Adapt MySQL Cluster to 5.6
       - remove use of THD and Thd_ndb class in ha_ndb_index_stat.cc
       - create and free Ndb object directly instead
------------------------------------------------------------
revno: 4983 [merge]
committer: magnus.blaudd@oracle.com
branch nick: 7.0
timestamp: Fri 2012-09-21 11:17:48 +0200
message:
  Merge
    ------------------------------------------------------------
    revno: 4981.1.1
    committer: magnus.blaudd@oracle.com
    branch nick: 7.0
    timestamp: Fri 2012-09-21 10:48:27 +0200
    message:
      ndb
       - Readd(remove and add back) the new internals directories with same
         fileids as they have in the future in order to avoid that bzr go bezerk
         when there is a conflict in one of these files
------------------------------------------------------------
revno: 4982
committer: Maitrayi Sabaratnam <maitrayi.sabaratnam@oracle.com>
branch nick: mysql-5.1-telco-7.0-fixtest-14578595
timestamp: Thu 2012-09-20 17:17:26 +0200
message:
  Bug#14578595: add errorcodes to mtr
------------------------------------------------------------
revno: 4981
committer: Ole John Aske <ole.john.aske@oracle.com>
branch nick: mysql-5.1-telco-7.0
timestamp: Wed 2012-09-19 10:15:15 +0200
message:
  Fixing red MTR tests:
  
  Tests requiring multiple server connections should be run
  against an embedded server.
------------------------------------------------------------
revno: 4980
committer: Ole John Aske <ole.john.aske@oracle.com>
branch nick: mysql-5.1-telco-7.0
timestamp: Wed 2012-09-19 08:37:24 +0200
message:
  Fix for Bug#14103195 CLUSTER WIDE SHUTDOWN ON MULTI-NODE POINTER TOO LARGE IN DBDIH (LINE: 9290)
  
  The SPJ block had no information about which table / indexes which actually
  exists, or had been modified or dropped since query execution was started.
  Thus, SPJ might submit (DIH-)request for non existing tables or table 
  versions, which could crash the DIH block.
        
  This fix introduce a simplified dictionary into the SPJ block such that 
  SPJ will be able to check the existence / version of a table it is about to 
  request an operation on.
        
  As this SPJ dictionary has lots in common with the similar dictionary
  in TC, the 'Global Dictionary Manager' - GDM module has been created.
  TC has then been refactored such that DbtcProxy and DbspjProxy
  inherit its 'dictionary proxy parts' from from DbgdmProxy.
              
  In order to create & maintain this dictionary, the SPJ block has been 
  included in the DICT-loop of which blocks that get dictionary change
  notifications.
------------------------------------------------------------
revno: 4979
committer: Maitrayi Sabaratnam <maitrayi.sabaratnam@oracle.com>
branch nick: mysql-5.1-telco-7.0-fix14578595
timestamp: Tue 2012-09-18 16:56:03 +0200
message:
  Bug#14578595: fix the test
------------------------------------------------------------
revno: 4978
committer: Maitrayi Sabaratnam <maitrayi.sabaratnam@oracle.com>
branch nick: mysql-5.1-telco-7.0-14578595
timestamp: Tue 2012-09-18 14:00:01 +0200
message:
  Bug#14578595 - CONCURRENT ALTER TABLE WITH DML GIVES: GOT ERROR -1 'UNKNOWN ERROR CODE' FROM NDB
------------------------------------------------------------
revno: 4977 [merge]
committer: Frazer Clement <frazer.clement@oracle.com>
branch nick: mysql-5.1-telco-7.0
timestamp: Thu 2012-09-13 21:31:34 +0100
message:
  Merge 6.3->7.0
    ------------------------------------------------------------
    revno: 2585.188.10
    committer: Frazer Clement <frazer.clement@oracle.com>
    branch nick: mysql-5.1-telco-6.3
    timestamp: Thu 2012-09-13 21:18:47 +0100
    message:
      Bug #14472648 	CONFIGURED DISKCHECKPOINTSPEED EXCEEDED WHEN BACKUPMAXWRITESIZE SET TO HIGH VALU
      
      The DiskCheckpointSpeed mechanism is implemented using 100 millisecond
      periods, which each have 1/10th of the configured quota available.
      
      A period is allowed to overflow its quota, with the excess being taken 
      from the next period's quota.
      
      However, this overflow was limited to the next period, after that, any
      further overflow was ignored.
      
      In cases where large overflows were possible, relative to the 1/10 
      DiskCheckPointSpeed quota, this could result in excessive disk writing,
      and CPU overhead as a result.
      
      Setting a larger-than standard MaxBackupWriteSize is the primary means
      of causing larger-than 2* quota overflows and triggering this bug.
      
      This bug is fixed by using as many subsequent periods as necessary to
      'pay off' the quota overflow.
      
      This will result in the data node staying within its quota.
      
      This fix may result in slower LCP in some systems, and reduced CPU usage
      during LCP.
      
      A testcase, and an internal DiskCheckPointSpeed verification mechanism
      are added to avoid future regressions.
------------------------------------------------------------
revno: 4976
committer: Ole John Aske <ole.john.aske@oracle.com>
branch nick: mysql-5.1-telco-7.0
timestamp: Thu 2012-09-13 13:32:36 +0200
message:
  Fix for bug#Bug #14577463 CONCURRENT DDL & DML LOAD MAY CRASH MYSQLD (MySQL Cluster)
        
  This fix removes an incorrect free_share() (NDB_SHARE unref'ed) in 
  ndb_binlog_thread_handle_schema_event_post_epoch -> case: SOT_ALTER_TABLE_COMMIT.
        
  This free_share() did not match any get_share(), which caused the NDB_SHARE 
  object referred by 'share' to be prematurely destructed by SOT_ALTER_TABLE_COMMIT.
        
  Any concurrent DML operations using the same 'share', will then have the share 
  object destructed underneath them, leading to later crash.
        
  In addition to this, a few DBUG_ASSERTs has been added to guard against other
  premature releases of 'share'
        
  Furthermore, two consecutive 'if (share)' conditions has been merged
  into a single codeblock.
  
  NOTE: As this area of code has been significantly refactored in
        mysql-5.5-cluster-7.2, this patch is required in another version
        from that branch and onwards.
------------------------------------------------------------
revno: 4975
committer: Ole John Aske <ole.john.aske@oracle.com>
branch nick: mysql-5.1-telco-7.0
timestamp: Thu 2012-09-13 12:27:55 +0200
message:
  Fix for Bug#14525521 RECEIVER-THREAD MAY BUSY-WAIT FOR 
                       DATA TO BE RECEIVED
  
  This patch fixes two related problems wrt. how
  'NodeBitmask m_has_data_transporters' is maintained:
  
  1) If the remaining part of the already received data inside the
     transporter was insufficient to reconstruct the last signal,
     we should not count this node as 'm_has_data_transporters'.
     This will force us to wait for more data to be recv'ed before
     we can continue processing data from this transporter, and
     thus break the busy-loop.
  
  2) As described in the bug report, 'm_has_data_transporters' mix
     together nodes having data to be recv'ed from the socket into
     the local transporter buffers, and those nodes having 'leftover'
     data in the tranporter buffers which has to be unpacked.
  
     'NodeBitmask m_recv_transporters' has been introduced to keep
     track of those nodes which pollReceive() detected to have data
     to be recv'ed. After being recv'ed, 'm_has_data_transporters'
     will still represent those nodes with data to be unpacked.
  
  This patch also cleans up and simplify the handling of
  'blocked' transporters as a side effect of the above changes:
  
  As the 'has_recv' and 'has_data' state of the transporters now
  are represented on seperate bitmask, we no longer need to save
  blocked transporters with 'has_data' into 'm_blocked_with_data.
  
  Instead we allow any blocked transporters which already 'has_data'
  (Received data buffered in tranporters) to drain these buffers.
  However, the blocked transporters are excluded from receiving any
  more data on these transportere.
  
  When unblocked, the next pollReceive() will detect the available
  data and allow them to be recv'ed.
------------------------------------------------------------
revno: 4974
committer: Ole John Aske <ole.john.aske@oracle.com>
branch nick: mysql-5.1-telco-7.0
timestamp: Thu 2012-09-13 11:35:22 +0200
message:
  Fix for Bug#14525176 'DUMP 9992' TO SIMULATE BLOCKED TRANSPORTER, AFFECT INCORRECT NODE
  
  node_id, instead if index into transporter array should be used to check/set
  the blocked transporters. 
------------------------------------------------------------
revno: 4973
committer: Ole John Aske <ole.john.aske@oracle.com>
branch nick: mysql-5.1-telco-7.0
timestamp: Thu 2012-09-13 11:04:41 +0200
message:
  Fix for bug#14524939 NDBMTD CRASH AT STARTUP IF CONFIGURED WITH MULTIPLE RECEIVER THREADS
        
  Fix ensures that idx[i] is initialized even if we break the init-loop
  'if (!recvdata.m_transporters.get(node_id))'
        
  Tescase is running the ndb_basic.test with a config specifying multiple
  receiver threads.
------------------------------------------------------------
revno: 4972
committer: Jan Wedvik <jan.wedvik@oracle.com>
branch nick: mysql-5.1-telco-7.0
timestamp: Thu 2012-09-13 10:05:29 +0200
message:
  This is a followup to revno: 4961: 
  'Update of SPJ component in pre 7.2 branches'.
  
  That commit back ported parts but not all of the online upgrade logic for
  SPJ to 7.0. This commit backports the remainder of the online upgarde logic,
  that is, the parts concerning the API and the TC blocks.
  
  This commit also removes testcases that will no longer work (because of version
  checks) from the daily-basic script. This change should be manually reverted
  when merging this commit to 7.2, as the tests are still supposed to work there.
  Instead, a new test that check that it is not possible to SPJ API extensions
  in pre 7.2 releases have been written (and also added to daily-basic).
  
  Finally, a check of the API version has been added in the API, such that it
  is not possible to use SPJ from API client linked with a pre 7.2 API library.
  (Before this commit, it would have been possible to run a 7.0 client against
  7.2 data nodes.)
------------------------------------------------------------
revno: 4971 [merge]
committer: Frazer Clement <frazer.clement@oracle.com>
branch nick: mysql-5.1-telco-7.0
timestamp: Wed 2012-09-12 15:18:33 +0100
message:
  Merge 6.3->7.0
    ------------------------------------------------------------
    revno: 2585.188.9
    committer: Frazer Clement <frazer.clement@oracle.com>
    branch nick: mysql-5.1-telco-6.3
    timestamp: Wed 2012-09-12 14:24:49 +0100
    message:
      Bug #14386849 	SCAN RESOURCE LEAK WHEN TC KEYINFO DATA BUFFERS EXHAUSTED
      
      This patch fixes the scan resources leak when TC KeyInfo data buffers are exhausted.
      
      A testcase is added - one which causes a real leak, another which simulates it.
      
      Two new dump commands are added, to give visibility of the pool levels in TC and LQH.
------------------------------------------------------------
revno: 4970 [merge]
committer: Frazer Clement <frazer.clement@oracle.com>
branch nick: mysql-5.1-telco-7.0
timestamp: Wed 2012-09-12 14:42:51 +0100
message:
  Merge 6.3->7.0
    ------------------------------------------------------------
    revno: 2585.188.8
    committer: magnus.blaudd@oracle.com
    branch nick: 6.3
    timestamp: Wed 2012-09-12 13:12:19 +0200
    message:
      Fix test case failure which depends on the binlogs content by adjusting
      the used binlog position
------------------------------------------------------------
revno: 4969 [merge]
committer: Frazer Clement <frazer.clement@oracle.com>
branch nick: mysql-5.1-telco-7.0
timestamp: Wed 2012-09-12 14:36:10 +0100
message:
  Merge 6.3->7.0
    ------------------------------------------------------------
    revno: 2585.188.7
    committer: Martin Skold <Martin.Skold@oracle.com>
    branch nick: mysql-5.1-telco-6.3
    timestamp: Tue 2012-08-21 09:20:27 +0200
    message:
      ndb - bump version to 6.3.50
------------------------------------------------------------
revno: 4968 [merge]
committer: magnus.blaudd@oracle.com
branch nick: 7.0
timestamp: Wed 2012-09-12 13:53:36 +0200
message:
  Merge
    ------------------------------------------------------------
    revno: 4966.1.2
    committer: magnus.blaudd@oracle.com
    branch nick: 7.0
    timestamp: Wed 2012-09-12 11:29:29 +0100
    message:
      ndb
       - remove extern keyword from function declarations
       - remove duplicate extern declarations, keep the ones in ha_ndb_index_stat.cc
         since they declare something that ha_ndb_index_stat.cc uses
    ------------------------------------------------------------
    revno: 4966.1.1
    committer: magnus.blaudd@oracle.com
    branch nick: 7.0
    timestamp: Wed 2012-09-12 11:15:02 +0100
    message:
      ndb
       - mark all functions and variables which are only used within ha_ndb_index_stat.cc as
        being static
------------------------------------------------------------
revno: 4967
committer: magnus.blaudd@oracle.com
branch nick: 7.0
timestamp: Wed 2012-09-12 13:28:45 +0200
message:
  Merge 6.3 -> 7.0
------------------------------------------------------------
revno: 4966 [merge]
committer: magnus.blaudd@oracle.com
branch nick: mysql-5.1-telco-7.0
timestamp: Wed 2012-09-12 11:14:40 +0200
message:
  Merge
    ------------------------------------------------------------
    revno: 4964.1.4
    committer: magnus.blaudd@oracle.com
    branch nick: 7.0
    timestamp: Wed 2012-09-12 11:12:03 +0200
    message:
      ndb
       - remove unused include file directive from testRedo(the relative path
         didn't properly resolve on all compilers)
    ------------------------------------------------------------
    revno: 4964.1.3
    committer: magnus.blaudd@oracle.com
    branch nick: 7.0
    timestamp: Wed 2012-09-12 10:41:46 +0200
    message:
      ndb
       - remove unused functions in LocalConfig
    ------------------------------------------------------------
    revno: 4964.1.2
    committer: magnus.blaudd@oracle.com
    branch nick: 7.0
    timestamp: Thu 2012-09-06 13:32:26 +0200
    message:
      ndb
       - remove unused typedef and extern declaration for ndbout_svc
    ------------------------------------------------------------
    revno: 4964.1.1
    committer: magnus.blaudd@oracle.com
    branch nick: 7.0
    timestamp: Thu 2012-09-06 09:24:56 +0200
    message:
      ndb
       - remove unused function NDB_SQRT
------------------------------------------------------------
revno: 4965
committer: Maitrayi Sabaratnam <maitrayi.sabaratnam@oracle.com>
branch nick: mysql-5.1-telco-7.0-bug14582294
timestamp: Wed 2012-09-05 15:31:33 +0200
message:
  Bug 14582294 - SPJ: GETNODES DOES NOT RETURN CORRCT ERROR CODE
------------------------------------------------------------
revno: 4964
committer: Jan Wedvik <jan.wedvik@oracle.com>
branch nick: mysql-5.1-telco-7.0
timestamp: Thu 2012-08-30 10:40:49 +0200
message:
  This commit is a followup to 
  revno 4960 / revid:jan.wedvik@oracle.com-20120822104942-q95h5zi9m729lkkk
  which was a fix for 'Bug#14190114: CLUSTER CRASH DUE TO NDBREQUIRE IN 
  ./LOCALPROXY.HPP DBLQH (LINE: 234)'.
  
  When running ndbautottest, the regression test for bug 14190114 
  (runDropTakeoverTest()) would always fail. The reason for this was that
  another regression test (runBug13416603()) would leave some data nodes
  in a state where they would not start again after stopping because
  of an error insert. This commit fixes this problem by adding cleanup
  code to runBug13416603().
------------------------------------------------------------
revno: 4963
committer: Maitrayi Sabaratnam <maitrayi.sabaratnam@oracle.com>
branch nick: mysql-5.1-telco-7.0-fix-alter-abort2
timestamp: Tue 2012-08-28 13:22:02 +0200
message:
  BUG#14220269 CLUSTER WIDE SHUTDOWN POINTER TOO LARGE IN DBDIH
------------------------------------------------------------
revno: 4962
committer: Ole John Aske <ole.john.aske@oracle.com>
branch nick: mysql-5.1-telco-7.0
timestamp: Fri 2012-08-24 14:00:05 +0200
message:
  Fix for bug#14143553:JOB BUFFER FULL - DATA NODE CRASH (Blizzard) 
  and duplicate Bug#13799800 NDBMTD CRASHES DURING SONY-QUERY WITH 128 PARTITIONS ON 4 NODES WITH 4 LDM EACH 
  
  The patch extends and redifines the signals GSN_DIH_SCAN_GET_NODES_REQ, _CONF and _REF.
  On order to avoid generating too many signals, which breaks the 1::4 fanout rule for
  signals consumed::produced these signals may now be 'long'.
  
  Both a short and long version of the modified signals are defined.
  The short signal is only used for a single fragment.
  This (the short) is mainly for SUMA and BACKUP which never request info for
  more than a single fragment at at a time. All modified blocks will
  handle both long and short version of the signal. If a long signal
  was received, the reply will also be 'long'.
------------------------------------------------------------
revno: 4961
committer: Ole John Aske <ole.john.aske@oracle.com>
branch nick: mysql-5.1-telco-7.0
timestamp: Fri 2012-08-24 13:46:00 +0200
message:
  Update of SPJ component in pre 7.2 branches:
  
  The code for the SPJ block has been added to cluster-7.0 -> even though
  it is not intergrated & used from SQL before version 7.2.
  
  We have a policy of having minimal changes in codebase between the
  different 7.x branches. However, most (all) SPJ fixes has been 
  applied to 7.2 only. This has made it increasingly difficult to
  push a patch to 7.0 and merge it up: If it contained SPJ 
  changes it most certainly would encounter conflicts.
  
  This patch intends to rectify that by aligning the 7.0 SPJ
  code with the 7.2 codebase.
------------------------------------------------------------
revno: 4960
committer: Jan Wedvik <jan.wedvik@oracle.com>
branch nick: mysql-5.1-telco-7.0
timestamp: Wed 2012-08-22 12:49:42 +0200
message:
  Bug#14190114: CLUSTER CRASH DUE TO NDBREQUIRE IN ./LOCALPROXY.HPP DBLQH (LINE: 234)
  
  This patch fixes a set of errors that causes node failure (or block new 
  dictionary operations) if the master node crashes in certain states of a drop 
  table operation. This covers bug 14190114 and some related errors that showed 
  up when running the regression test:
  
  1)
  This patch fixes the direct cause of bug 14190114. The patch ensures that
  the master will reject SCHEMA_TRANS_BEGIN_REQ and SCHEMA_TRANS_END_REQ messages
  while there are outstanding DICT_TAKEOVER_REQs. If SCHEMA_TRANS_BEGIN_REQ was
  allowed, the system could end up in a situation where two transactions had
  outstanding DROP_TAB_REQs at the same time. This caused bug 14190114.
  Likewise, SCHEMA_TRANS_END_REQ cannot be processed before the new master knows
  the state of the transaction (i.e. after it has receibed the 
  DICT_TAKEOVER_CONFs).
  
  2)
  This patch fixes an error in the construction of the CONTINUB message that
  DICT sends to itself if it receives a DICT_TAKEOVER_REQ while it still
  has active operations.
  
  This patch also substitutes sendSignal with sendSignalWithDelay. This is done
  for two reasons:
  I) To avoid waisting CPU cycles by doing busy wait.
  II) To prevent CONTINUB messages from filling the jam trace buffer (this made
  the error report fro the customer harder to analyze.)
  
  3)
  This patch disables counting of SCHEMA_TRANS_IMPL_CONF and 
  SCHEMA_TRANS_IMPL_REF messages during takeover. Normally the master counts 
  these to know when all participants have completed an operation. But during
  a takeover, the new master will not know the number of outstanding messages
  until it has received DICT_TAKEOVER_CONF.
  
  4)
  This patch ensures that drop table operations are set to start OS_COMPLETED
  after finishing RT_COMPLETE requests. As it was, these would remain in state
  OS_COMPLETING. This meant that DICT could never send DICT_TAKEOVER_CONF, since
  this can only be done when all operations are in 'passive' states.
------------------------------------------------------------
revno: 4959
committer: Martin Skold <Martin.Skold@oracle.com>
branch nick: mysql-5.1-telco-7.0
timestamp: Tue 2012-08-21 09:29:17 +0200
message:
  ndb - bump version to 7.0.36
------------------------------------------------------------
revno: 4958 [merge]
tags: clone-mysql-5.1.63-ndb-7.0.35-src-build
committer: Frazer Clement <frazer.clement@oracle.com>
branch nick: mysql-5.1-telco-7.0
timestamp: Mon 2012-08-13 16:04:51 +0100
message:
  Merge 6.3->7.0
    ------------------------------------------------------------
    revno: 2585.188.6
    tags: clone-mysql-5.1.61-ndb-6.3.49-src-build
    committer: Frazer Clement <frazer.clement@oracle.com>
    branch nick: mysql-5.1-telco-6.3
    timestamp: Mon 2012-08-13 16:00:28 +0100
    message:
      Fix compile failure
