Thursday, October 26, 2017

Upgrading Continuent Tungsten software

Join us Tuesday, October 31st for free training, as we take a look at the process for upgrading your Tungsten installation. We will cover:
  • Reviewing release notes
  • Obtaining the software 
  • TAR vs RPM installation
  • Rolling upgrade vs all-in-one
  • Rolling back
This training is for all engineers using either Tungsten Clustering or Tungsten Replicator, and is valuable for those who are looking to upgrade to the latest release in the near future.

Thursday, October 19, 2017

Continuent Road Map: One year after restart... Where next?

You may know Continuent Tungsten for our highly advanced MySQL replication tool, Tungsten Replicator, and for our state-of-the-art MySQL clustering solution, Tungsten Clustering. Our solutions are used by leading SaaS vendors, e-commerce, financial services and telco customers.

But there are more, many more, Tungsten deployments out there. Tungsten Replicator can be used for real-time data loading into analytics, from MySQL and Oracle into Cassandra, Elasticsearch, Kafka, Redshift and Vertica. What you may not know is that Tungsten Replicator is also an Oracle replication solution, the "Oracle GoldenGate without the price tag". And there could be more…

How about Tungsten Backup? Using the power of the Tungsten Transaction History Log (THL), we could create the ultimate continuous backup solution with flexible point-in-time recovery. Would you be interested, especially if it were free? What about the ultimate proxy, a stand-alone Tungsten Connector? To support our Clustering solution, Continuent has developed one of the most advanced proxies available. Could it be time to unleash Tungsten Connector for general use?

Join our live webcast, Wednesday 10/25, to learn more on what Continuent is planning during the next twelve months!

Wednesday, October 11, 2017

MySQL backup, recovery and provisioning within a Continuent Tungsten Cluster

Join us for this training session where we discuss tools for backing up a MySQL database within Tungsten, and how Tungsten makes it easy to re-provision databases and recover a cluster. Tuesday, 10/17 at 9:00 am PT/12:00 pm ET. Sign up today!
In this session we will cover: 
  • Methods and tools for taking a backup
  • Verifying the backup contains the last binary log position, and the importance of having this
  • Restoring backups into a cluster
  • Provisioning slaves from an existing datasource
This training is for all engineers using Tungsten Clustering and will explain the tools needed to create a sound backup and recovery strategy. It is also valuable for those who wish to review their current backup and recovery procedures.
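The flow the session covers can be sketched with the Tungsten command-line tools mentioned elsewhere in these posts. This is only an illustration: the hostname and the exact options are assumptions, so check the documentation for your release before running anything.

```shell
# Sketch of a typical backup/re-provisioning flow in a Tungsten cluster.
# Hostname (db1) and options are illustrative assumptions.

# 1. Take a backup via the replicator on a slave node:
trepctl backup

# 2. The backup metadata records the binary log position; this is what
#    lets a restored node rejoin replication at the correct point.

# 3. Re-provision a slave from an existing datasource:
tungsten_provision_slave --source=db1
```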

Friday, October 6, 2017

MySQL native replication vs. Continuent Tungsten

What MySQL replication is, what it's not, and when is Tungsten a better choice? Find out Wednesday 10/11, 9:00-9:30 am PDT.

We will cover topics such as:
  • What native replication is (and isn’t)
  • Fan-in/fan-out topologies
  • Heterogeneous replication
  • Powerful filtering functionality
  • Clustering for HA/DR and zero-downtime maintenance

Friday, July 14, 2017

Upcoming Webinar - Wed July 19th: What is New in Tungsten Replicator 5.2 and Tungsten Clustering 5.2?

Continuent Tungsten Webinar: 
What is New in Tungsten Replicator 5.2 and Tungsten Clustering 5.2?

Continuent Tungsten 5.2 is just around the corner. This is one of our most exciting Tungsten product releases for some time!

In this webinar we’re going to have a look at a host of new features in the new release, including 
  • Three new Replication Applier Targets (Kafka, Cassandra and Elasticsearch)
  • New improvements to our core command-line tools trepctl and thl
  • New foundations for our filtering services, and 
  • Improvements to the compatibility between replication and clustering

This webinar is going to be a packed session and we’ll show all the exciting stuff, with more in-depth follow-up sessions in the coming weeks. 

You’ll also learn about some more exciting changes coming in the upcoming Tungsten releases (5.2.1 and 5.3), and our major Tungsten 6.0 release due out by the end of the year.

So come and join us to get the lowdown on everything related to Tungsten Replicator 5.2 and Tungsten Clustering 5.2 on Wednesday, July 19, 2017, 9:00 AM - 9:30 AM PDT.

Monday, June 19, 2017

High Noon at AWS — Amazon RDS versus Continuent Tungsten with MySQL on AWS EC2.

Please join our Continuent webinar on Wednesday, June 21 at 9:00 AM PDT/noon EDT.
Continuent customers have a large number of deployments in AWS running MySQL on EC2 instances, and they choose to rely upon Tungsten Clustering to provide High Availability (HA) and Disaster Recovery (DR). We also support Multi-Site/Multi-Master operations and offer true zero-downtime operations.

During the webinar we will look at the following questions and more: 
* How does RDS handle failover? (Hint: Not very quickly) 
* How does RDS handle read scaling? (Hint: Not very well) 
* Can you do zero-downtime maintenance with RDS? (Hint: No) 
* Is RDS cheaper? (Hint: No, not really)

Friday, June 2, 2017

New Continuent Webinar Wednesdays and Training Tuesdays

We are just starting to get into the swing of setting up our new training and webinar schedule. 
Initially, there will be one webinar session (typically on a Wednesday) and one training session (on a Tuesday) every week from now on. We'll be covering a variety of different topics at each. 
Typically our webinars will be about products and features, comparisons to other products, mixed in with product news (new releases, new features) and individual sessions based on what is going on at Continuent and the market in general. 
Our training, by comparison, is going to be a hands-on, step-by-step sequence covering all of the different areas of our product. So we'll cover everything from the basics of how the products work, how to deploy them, typical functionality (switching, start/stop, etc), and troubleshooting. 
All of the sessions are going to be recorded and we'll produce a suitable archive page so that you can go and view the past sessions. Need a refresher on re-provisioning a node in your cluster? There's going to be a video for it and documentation to back it up. 
Our first webinar is actually next Thursday (the Wednesday rule wouldn't be a good one without an exception) and is all about MySQL Multi-Site/Multi-Master Done Right: 
In this webinar, we discuss what makes Tungsten Clustering better than other alternatives (AWS RDS, Galera, MySQL InnoDB Cluster, and XtraDBCluster), especially for geographically distributed multi-site deployments, both for disaster recovery and multi-site/multi-master needs.

If you want to attend, please go ahead and register.
Keep your eyes peeled for the other upcoming sessions. More details soon. 

Thursday, May 25, 2017

Continuent Releases Tungsten Clustering and Tungsten Replicator 5.1.1

Continuent is pleased to announce that Tungsten Replicator 5.1.1 and Tungsten Clustering 5.1.1 are now available for customers to download.

For both products, 5.1.1 is an important bug-fix release that addresses issues discovered in the 5.1.0 release.

Tungsten Clustering 5.1.1 includes an important fix for a memory leak in which long-running monitoring tools could exhaust available memory.

Tungsten Replicator 5.1.1 includes some important fixes:

  • Addressing a problem with tungsten_provision_slave and tungsten_provision_thl where the process could fail to complete properly. 
  • A small improvement to the dsctl command to make it easier to use, including an option to output the current position from dsctl formatted as a dsctl set command. 
  • A fix to trepctl backup to address an issue when the index file is corrupt, or the disk runs out of space. 

Full release notes are available here: 

If you have any questions or issues, please use the usual support links and tools. 

Tuesday, April 18, 2017

Continuent -- Product Management Newsletter -- April 2017

Tungsten Release 5.1.0
We are nearly ready with our new 5.1.0 release. It is a Y-release, and therefore contains some new functionality that we want to share. In particular:
  • A simplified method for sharing tpm diag information
  • Some minor usability improvements to the thl command
  • The first phase in improving our filters, including a standard JS library for the JS filters, and upstream filter requirements (for example, pkey in heterogeneous appliers)
  • Numerous improvements to our core Hadoop support and associated tools
  • Some further fixes to tpm, net-ssh, and improvements to tpm and Ruby compatibility

The code is going through QA right now, so we expect to release in April. For more information, join us on the webinar on the 19th of April; details are at the bottom of this newsletter!

Percona Live 2017 - Continuent Talks

Continuent is a Diamond Sponsor for Percona Live in Santa Clara, CA from the 24th-27th April. We have five different sessions if you would like to come along and meet us, or learn more about our products:

  • Continuent is back! But what does Continuent do Anyway? Tuesday, April 25 - 9:20 AM - 9:45 AM, Eero Teerikorpi, CEO, and Continuent Customers
  • Real-time Data Loading from MySQL and Oracle into Analytics/Big Data Tuesday, April 25 - 3:50 PM - 4:40 PM at Room 210, MC Brown, VP Products
  • Keynote Panel Wednesday, April 26 - 9:25 AM - 9:45 AM at Main Scenario, MC Brown, VP Products
  • Multi-Site, Multi-Master Done Right Wednesday, April 26 - 1:00 PM - 1:50 PM at Room 210, Matt Lang, Director of Professional Services - Americas
  • Spread the Database Love with Heterogeneous Replication Thursday, April 27 - 3:00 PM - 3:50 PM at Ballroom B, MC Brown, VP Products

We will of course be doing our best to share the info and presentations after the conference, but if you are near Santa Clara, or are visiting the conference anyway, please drop by the booth and meet the team.

Continuent Product Release Webinar on May 3, 2017 at 9am PST/1pm EST/5pm BST

We are holding another product release webinar on the 19th April where we will look back at some of the issues and challenges in 4.0.7 and how we are approaching 5.1.0, and our new release schedule.

We’ll also give you a sneak peek at 5.1.0 features and a look forward a few months to the upcoming releases in this quarter, and maybe even some experimental items.

Also, of course, you’ll get a chance to ask questions.

Improving Replicator Status and Usability

One of my own personal frustrations with the replicator (and at different times the clustering) is that when we are executing large transactions and trying to identify the transaction error for a specific issue, it can sometimes be very difficult to see what is going on and why the replicator seems ‘stuck’. Furthermore, the status output from the trepctl command is not really very useful, since it shows a lot of information that is not helpful in comparison to the few statistics you really want.

So, we’ve done two things that will appear in an upcoming release (probably 5.2.0). Getting information about the THL is the first step, and the best way to do that is to extend the usability of the thl command. There are a few different elements to this. First, we’ve added more ways to select THL entries: in 5.1.0, we already added the more user-friendly ‘-from’ and ‘-to’ options as aliases for the ‘-low’ and ‘-high’ options:

$ thl list -from 45 -to 65
In 5.2.0 we’ve added -first and -last, and these both default to showing one single transaction (either the first or last stored), or you can supply a number and get the first or last N transactions:

$ thl list -last 5
This helps pin down a transaction more easily, and makes it quicker to view a failing transaction instead of doing ‘thl list -low 575’, then ‘thl list -low 588’, and so on, to get the one you really want.

These positioning requirements also work with other options, so if you extracted or applied a bad or corrupted single transaction (for example, because of a disk full error) and need to remove the last one, you can do:

$ thl purge -last
...without first having to work out which one is the last one!
From a large-transaction standpoint, we’ve also added a summary view of the size of transactions. So when you have a massive single transaction (thousands, hundreds of thousands or millions of rows) over one or more sequences, you can now check the transaction size without trying to judge from the output of ‘thl list’. For example, with a single massive transaction you can get a summary to understand the overall size:

$ thl list -sizes
1384 121 2017-04-07 10:09:31.0 125 chunks SQL 0 bytes (0 avg bytes per chunk) Rows 19072 (152 avg rows per chunk)
1384 122 2017-04-07 10:09:32.0 123 chunks SQL 0 bytes (0 avg bytes per chunk) Rows 19216 (156 avg rows per chunk)
1384 123 2017-04-07 10:09:32.0 123 chunks SQL 0 bytes (0 avg bytes per chunk) Rows 19266 (156 avg rows per chunk)
1384 124 2017-04-07 10:09:32.0 90 chunks SQL 0 bytes (0 avg bytes per chunk) Rows 14209 (157 avg rows per chunk)
Event total: 15421 chunks 0 bytes in SQL statements 2401906 rows

So if you do a trepctl status and see transaction 1384 is being processed, but the replicator is not doing anything, at least we now know that with 2.4 million rows it’s going to take a while to apply.

And, of course, the new position options work too:

$ thl list -sizes -last
...gets you the size of your last transaction.

In terms of the overall replicator status, because of the way the replicator works, it’s difficult to get information mid-transaction. The replicator is a state machine, and status is only reported at the end of the transaction. However, we’ve added a simple indication that the replicator is processing and that we know it’s still working by showing how long we’ve currently been applying an event:

$ trepctl status -name tasks
timeInCurrentEvent    : 379.218

It’s not hugely descriptive, but you can see how long we’ve been processing the current event, and know we haven’t hit an error (or the status would be offline). Match that with a quick thl sizes command on the current event and you can make a quick ‘oh, it’s just a big transaction’ determination.
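That triage could even be scripted. Here is a hypothetical sketch (the helper name and threshold values are invented for illustration, and the inputs would come from trepctl status -name tasks and thl list -sizes) that combines the two signals: a long time in the current event plus a large row count suggests a big transaction rather than a stall:

```shell
#!/bin/sh
# Hypothetical triage helper: given the seconds spent in the current event
# (timeInCurrentEvent) and the row count reported by 'thl list -sizes',
# guess whether the replicator is simply chewing through a big transaction.
# The 60s/100000-row thresholds are illustrative, not recommendations.
looks_like_big_transaction() {
  secs="${1%.*}"   # strip fractional seconds, e.g. 379.218 -> 379
  rows="$2"
  [ "$secs" -ge 60 ] && [ "$rows" -ge 100000 ]
}

# Using the figures from the example output above:
if looks_like_big_transaction 379.218 2401906; then
  echo "probably just a big transaction -- let it run"
else
  echo "worth investigating"
fi
```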

Two other areas of the trepctl command will also be improved in 5.2.0. First, we’ve built a simplified status command that provides the key information, along with sizes and units, and simplified overall output depending on the current replicator state. This is a bit more human-readable and contains the key information you need to identify the current state:

$ trepctl qs
State: Online for 4398.38s, running for 5879.502s
Latency: 0.531s from DB commit time on ubuntu into THL
        2149.685s since last database commit
Sequence: 19193 last applied, (1362-19193 stored)

Of course if something goes wrong, you get more information:

$ trepctl qs
State: Faulty (Offline) for 3.06s because SEQNO 18012 did not apply, reason:
   Event application failed: seqno=18012 fragno=0 message=java.sql.SQLException: Statement failed on slave but succeeded on master

This is a quicker way to see the important information without a lot of excessive detail and static settings, like the current version or timezone setting, which don’t change between installs.

In terms of understanding what the replicator is doing, we’ve improved the statistical output that you could previously get from trepctl status -name tasks and -name stages. The output from these was not very clear - in particular, it didn’t really explain what each stage is actually doing, or indeed what the counters mean. The ‘applyTime’, for example, is actually the total time spent applying since the replicator last went online, not the last apply, and not since the replicator was started.

In place of that we have unified the output and made the content clearer:

$ trepctl perf
Statistics since last put online 1222.798s ago
Stage                |      Seqno |     Latency |      Events |  Extraction |   Filtering |    Applying |       Other |       Total
binlog-to-q          |      18009 |      0.315s |        1987 |    829.958s |      0.599s |     30.914s |      0.094s |    861.565s
                                          Avg time per Event |      0.418s |      0.000s |      0.000s |      0.016s
                                           Filters in stage | colnames,pkey,pkey
q-to-thl             |      18009 |      0.315s |        1987 |    734.173s |      2.011s |    124.850s |      0.523s |    861.557s
                                          Avg time per Event |      0.369s |      0.001s |      0.000s |      0.063s
                                           Filters in stage | enumtostring,settostring

Now it’s much easier to see what is configured, the correct sequence, and whether, say, filter execution time is what is ultimately affecting your latency.

These are still at a rough-cut early beta stage, but we would welcome any feedback you have on the format and what you’d like to see in the output.

Kafka Applier for Tungsten Replicator

We know that many of our customers are interested in a Kafka applier, and some in a Kafka extractor, and the requests and interest are increasing substantially.

We have already done some work on this in terms of understanding the development load, and the basics of how the Kafka applier (in particular) would work. There are some important considerations when looking at the applier and the replicator.

Fundamentally, the replicator expects to replicate entire transactions between systems. That’s fine if each transaction only updates one or two smaller rows in a table, but what happens with larger updates? Depending on the environment and use case, individual transactions can be huge.

Current thoughts go along these lines:
  • Have a configurable full-transaction/individual-row selection in the applier
  • Allow for automatically splitting large single transactions into multiple messages where possible

Other questions:
  • What formats do we want to write the data in; JSON, CSV?
  • Should we batch items (as we do with our BigData items), or just push a continuous stream?
  • Is any kind of rate-throttling required?
  • What kinds of tags/topics should we use?

There are many other questions still to answer, but if you have thoughts and want to share, please let MC know.

Wednesday, February 15, 2017

Continuent Product Management Newsletter, February 2017

Welcome to the first newsletter in 2017 for Continuent. 

Hope everybody has had a good start to the new year. We did at Continuent, starting off with another developer meeting while we pin down our plans for the coming year.

In this Continuent Product Newsletter we cover:
General Information
  • London Developer Meeting
  • Continuent Release Schedule
  • New Replicator Appliers
  • Ask Continuent Anything -- on Thursday 23rd, Feb at noon PST/3pm EST
New Releases
  • Release 5.0.1 Coming Soon
  • Release 4.0.7 and Net::SSH

London Developer Meeting

We had a customer meeting in London in January, where we got the opportunity to spend a couple of hours with one of our customers and understand their needs and concerns. That was useful, partly because they love our product (which we obviously like to hear!) but also because their ideas for improvements match our future vision of the changes we are planning. It also gave the dev management team the opportunity to discuss our medium- to long-term goals.

Continuent Release Schedule

One of the many decisions we made at the London meeting was to change our release schedule. Our new schedule is designed to get fixes, updates, and new features into customer hands as quickly as possible. To achieve that, we’ve started by specifying our release levels. They follow the traditional x.y.z model, where:

  • Z-releases contain bug fixes and minor improvements
  • Y-releases contain all previous bug fixes plus mid-level functionality improvements and new features
  • X-releases contain major new functionality, for example a new replicator source, cluster solution, or major improvements to core components

In line with that basic overview, we’re aiming for the following release cadence:

  • Z-releases every 6 weeks
  • Y-releases every 3 months
  • X-releases at least once a year

We’ll also be stricter about our version support process. All releases will be supported for a maximum of two years from each Z-release. All existing customers and versions will be supported up until the end of this year (2017). After that, we’ll start enforcing the time limits. Since we will have a new major version out every year, a customer should never be more than two years, and therefore two major versions, behind the current release.

More importantly overall, of course, it should mean that our customers can plan their updates knowing when the next release will come out. It should also mean that if you are waiting for a fix after reporting a bug, and we haven’t given you an explicit patch, you should have to wait a maximum of six weeks before the next release.

New Replicator Appliers

Continuent currently offers the following heterogeneous replication solutions:
  • Extraction of the real-time transaction feed from MySQL, AWS RDS and Oracle
  • Applying in real time into the following databases: Oracle, MySQL, Hadoop, Redshift, and Vertica

One of the other topics that came up at the meeting was the need for additional appliers into a variety of other databases and targets, including, but not limited to, Kafka, Flume, Couchbase, MongoDB (we support it already, but improvements have been requested) and some improvements to our Hadoop applier environment. We’re open to further requests and suggestions, and I’m collecting information right now. If you want to discuss it, please contact me directly.

Online Q&A -- Ask Continuent Anything

Eric Stone, our COO, will be hosting an ‘Ask Continuent Anything’ session on Thursday 23rd, Feb at noon PST/3pm EST.

You can ask us anything: technical, business, product-specific questions, product requests, history lessons. We’ll try to answer any questions you have, and if we can’t provide an answer, we’ll follow up with you after the session. We’ll include a summary of those questions in the next newsletter.

Please join the meeting from your computer, tablet or smartphone:

You can also dial in using your phone.
United States: +1 (571) 317-3116; France: +33 (0) 170 950 590; Germany: +49 (0) 692 5736 7206; United Kingdom: +44 (0) 20 3713 5011
Access Code: 548-812-605

Continuent Release 4.0.7 and Net::SSH

Release 4.0.7 should be out within a week of this newsletter and is going through final testing right now. We’ll announce it accordingly when it’s ready.

Before the end of last year, we released 4.0.6 which contained many fixes, but most importantly tried to address the issues with compatibility of the Ruby Net::SSH modules used both during staging installs and within some of the command-line tools.

Unfortunately, this proved to be more complex than we expected. To summarize (from the Net::SSH docs):

  • Ruby 1.8.x is supported up until the net-ssh 2.5.1 release.
  • Ruby 1.9.x is supported up until the net-ssh 2.9.x release.
  • Current net-ssh releases require Ruby 2.0 or later.

This presents a problem when we are trying to work with a wide range of Ruby versions, made more complex by the fact that even relatively new releases of Linux OS distributions still ship with old (and outdated) versions of Ruby. We had originally bundled Net::SSH to make the distribution easier to work with. Unfortunately, customers using a distribution that includes, say, Ruby 1.9.x as standard then have compatibility issues.

To address that, in both 4.0.7 and 5.0.1 we are no longer including Net::SSH. Instead, you’ll need to install the right Net::SSH version according to your installed Ruby and Linux distribution for the Tungsten tools to use. Full instructions will be included in the release notes.
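As a sketch of what that step might look like, a small helper can map the local Ruby version to a compatible net-ssh gem line. The version bounds come straight from the Net::SSH compatibility notes quoted above; the helper function and the final gem install invocation are illustrative, not part of the Tungsten tooling, so verify against the release notes for your installation.

```shell
#!/bin/sh
# Map a Ruby version to the newest net-ssh line it supports
# (bounds per the Net::SSH compatibility notes quoted above).
pick_net_ssh_version() {
  case "$1" in
    1.8.*) echo '2.5.1' ;;   # last net-ssh release supporting Ruby 1.8.x
    1.9.*) echo '2.9.4' ;;   # last net-ssh 2.9.x line, for Ruby 1.9.x
    *)     echo 'latest' ;;  # Ruby 2.0+ can use the current net-ssh release
  esac
}

ruby_ver="$(ruby -e 'print RUBY_VERSION' 2>/dev/null || echo unknown)"
gem_ver="$(pick_net_ssh_version "$ruby_ver")"
echo "Ruby $ruby_ver -> net-ssh $gem_ver"
# Then, for example:  gem install net-ssh -v "$gem_ver"   (omit -v for 'latest')
```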

Continuent Release 5.0.1 Coming Soon

As per the above release schedule, and our promise in the last product newsletter, release 5.0.1 is going through final testing right now and we hope to release it within a week of this newsletter. It contains a variety of fixes and improvements from the 5.0.0 release, while making it simpler and easier to install and upgrade from previous versions.  

NOTE: Release 5.0.1 will also be our standard expected release for all customers going forward.