Local Peaks: thoughts, ideas and lessons learned from a web-native CEO/CTO

This is a blog about technology, with a title borrowed from an evolutionary biology concept.  I am a believer that building technology is best done by recognizing, embracing and optimizing the evolutionary forces and processes at work.  We can learn a great deal about how to build world class products and technology by thinking about why zebras got their stripes.  

Free Webinar: Is your website keeping up with the Sales 2.0 revolution?

May 02, 2011

Click here to join us for an informative conversation with selling pioneer and leading business strategist Anneke Seley on how to evolve your sales practices to dramatically increase profitability. 

It's clear that buyers increasingly prefer online communications in the selling process. In response, today's leading-edge companies are using Sales 2.0 practices, blending people, process, and technology, to identify and communicate with potential customers and lower overall sales costs. As your primary online presence, your website should be at the heart of your Sales 2.0 strategy.

Title:          Is your website keeping up with the Sales 2.0 revolution?
Date:           Tuesday, May 17, 2011
Time:           11:00 a.m. PDT / 2:00 p.m. EDT
Presenters:     Anneke Seley, Author, Sales 2.0, and Founder & CEO, Phone Works
                Noah Logan, Moderator, Clickability

Join us on May 17th and learn how to:

  • Create a better, more efficient sales cycle
  • Leverage interactive technologies to create sales-ready leads
  • Use your website to align your sales resources with customer opportunities

May 02, 2011 | » Comments (0)

Monitoring Butterflies and Haystacks

March 20, 2011

Tom recently commented that one of my standard technology slides was out of date.  Apparently we don't track 5,000 service checks and metrics anymore.  We track 10,000!  That is a lot of data and a lot of visibility into our infrastructure and platform operations.

This is a great indicator of how much work goes into the continuous improvement of our monitoring and notification systems. This is a critical part of the work our technology teams do, and while the implementation work has been primarily a TechOps task, the overall development and refinement of our platform monitoring capabilities has truly been a cross-team effort. I can't remember a Tuesday Architecture Meeting that has not identified some new metric or service check to improve our platform management capabilities.

Monitoring and notification is much, much more than just paging someone when a service is down. It is about maintaining a high quality of service, reducing the risk while enhancing the effectiveness of change, and understanding the underlying dynamics of the platform for both short and long term planning. Some key goals of our monitoring systems:

  • The Butterfly Effect - The Clickability Platform is complex on several dimensions. It is a complex application, a complex infrastructure, and customers do complex things on the platform. Finding root causes to problems is only possible by having detailed metrics across the entire platform and an ability to correlate events. Sometimes it truly feels like finding the butterfly that flapped its wings in China and made it rain in New York (a nod to the origins of chaos theory).

  • Needle in the haystack - 10,000 metrics is a lot to sort through to investigate an issue!  Add in all the log files we collect and the task can be daunting.  Being able to find the proverbial needle in the haystack is only possible by having broad and deep monitoring coverage, well organized tools that provide high level dashboard views and drill down capabilities to each individual metric, and a knowledge and familiarity with the platform to sniff out whatever is going wrong.

  • Early warning system - Proper notification is about detecting issues before they become service problems. Obviously, detecting service problems is critical, but the ability to identify conditions that may lead to service issues before they happen is even more mature and powerful. That way, proactive measures can be taken before customers are impacted, and our high quality of service is maintained.

If you have a few minutes, drop by the TechOps area sometime and ask them to show you Cacti (metric tracking and graphing) and Nagios (notifications). I think you will be amazed at the depth of information available, the immediate availability of key metrics from top to bottom in the platform, and the amount of work that has gone into developing these fine-tuned, critical technology management tools.
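
For a flavor of how a single service check works under the hood: Nagios plugins follow a simple convention of printing one line of status and signaling OK, WARNING, or CRITICAL through their exit code. Below is a minimal, hypothetical disk usage check in Java - not one of our actual checks - that illustrates the early warning idea of warning well before a condition becomes critical:

    import java.io.File;

    // Hypothetical disk usage check following the Nagios plugin convention:
    // print one status line, then exit 0 (OK), 1 (WARNING), 2 (CRITICAL), or 3 (UNKNOWN).
    public class CheckDiskUsage {
        public static void main(String[] args) {
            File volume = new File(args.length > 0 ? args[0] : "/");
            long total = volume.getTotalSpace();
            if (total == 0) {
                System.out.println("DISK UNKNOWN - cannot read " + volume.getPath());
                System.exit(3);
            }
            double pctUsed = 100.0 * (total - volume.getUsableSpace()) / total;
            System.out.println(String.format("DISK %s - %.1f%% used", volume.getPath(), pctUsed));
            if (pctUsed >= 95.0) System.exit(2);      // CRITICAL - service problem imminent
            else if (pctUsed >= 80.0) System.exit(1); // WARNING - act before customers are impacted
            else System.exit(0);                      // OK
        }
    }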


March 20, 2011 | » Comments (0)

The "P" Word: Personalization

February 28, 2011

Personalization is a loaded term.  It has been used for decades in online marketing, but with no single consistent, comprehensive definition of what this actually means.  Consulting the leading online reference sites, you will find abstract definitions along the lines of the following:

  1. encyclopedia.com - "Enables more intimate relationships between companies and their customers and is an effective tool for building brand loyalty"

  2. wikipedia - "Using technology to accommodate the differences between individuals"

  3. dictionary.com - "To have marked with one's initials, name, or monogram"


So, if you put #1 and #2 together, you start to build a definition for what personalization means for websites: using technology to accommodate the differences between visitors in order to develop deeper customer relationships and to build brand loyalty. Sorry dictionary.com, you're coming from an entirely different place on this one...

From this broad, conceptual definition, things get even messier when trying to figure out what capabilities solutions actually provide in order to personalize. It is easy to point to examples such as Amazon and My Yahoo!, but personalization comes in many forms and flavors that require much more explicit identification. Too frequently, software vendors talk about personalization without providing specific details as to capabilities, leaving it up to the customer's imagination to contemplate what is actually meant by the "P" word.

At Clickability, we ourselves haven't been entirely clear, either. As we have evolved the platform and WMA solutions, we have arrived at the point of generically claiming personalization, without explicitly stating what the platform actually does to personalize the visitor experience. The following list summarizes the various capabilities we provide that encompass our ability to personalize:

  • Demographic - The website can be tailored based on the visitor's demographic information, e.g. "zip code", and rules are configured to apply this tailoring to segments of visitors

  • Firmographic - The website can be tailored based on the visitor's company information, e.g. "size of company", and rules are configured to apply this tailoring to segments of visitors

  • Explicit Data - Our Progressive Profiling engine allows visitors to answer specific, customizable questions to drive the website experience, meaning the content is different for different visitors based on how each visitor answers the questions you ask of them. The differences may be in the content shown on the page, or in what is suggested to visit next, through navigation or through modules such as "Might also like" or "Suggested content"

  • Implicit Data - Information gleaned from observing the visitor's browsing pattern is used to make personalization decisions based on interests and behaviors, meaning the content is different for different visitors based on what they do on the site. The differences may appear in the same ways as with explicit data
  • Context - Anything about the circumstances of the visit is used to adapt the experience, e.g. "time of day" or "referring website". For example, a visitor arriving from a partner's site might receive content highlighting the solutions or case studies supported by that partnership, while a visitor arriving from an analyst or industry watering hole might receive content that discusses comparative and competitive reviews.

  • Self-selection – Allowing visitors to select their own interests and means for consuming content is always welcome and gives the visitors control over their own experience.

  • Selected content - Through our Resource Portfolio feature, individual visitors can have content specifically selected and targeted to them. For example, Marketing could deliver an implementation checklist for a product to a known visitor from a prospect that has demonstrated a high level of interest in that product area.

  • 3rd Party Data - WMA collects a great deal of data, but there will always be cases where visitor data is in another system, be it CRM, Marketing Automation, ERP, or other.  We integrate with such systems to provide additional depth to the personalization, perhaps including information about Products Purchased or Account Status.


Taken together, these capabilities allow you to do almost anything to personalize websites (or other channels) on our platform. It is left to individual customers to determine which of these features to apply, and how, to address their business objectives and website goals.
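
To make the idea of "rules configured for segments of visitors" concrete, here is a minimal, hypothetical sketch in Java of how a rule-based personalization decision might be structured. The class, field, and variant names are purely illustrative and are not our actual APIs:

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical sketch of rule-based personalization; names are illustrative only.
    class Visitor {
        String zipCode;      // demographic
        int companySize;     // firmographic
        String referrer;     // context
    }

    interface SegmentRule {
        boolean matches(Visitor v);
        String contentVariant(); // which experience to serve
    }

    class PersonalizationEngine {
        private final List<SegmentRule> rules = new ArrayList<SegmentRule>();

        void addRule(SegmentRule rule) { rules.add(rule); }

        // Evaluate rules in order; the first match wins, else serve the default experience.
        String chooseVariant(Visitor v) {
            for (SegmentRule rule : rules) {
                if (rule.matches(v)) return rule.contentVariant();
            }
            return "default";
        }
    }

In this shape, the partner-referral example above is just a rule that matches on the referrer and returns a partner-focused content variant.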


February 28, 2011 | » Comments (0)

"Open for Business" - Oracle Magazine features Clickability

December 22, 2010

A nice write-up on Clickability from Oracle Magazine in a feature story about MySQL and our usage of it in an enterprise environment.

http://www.oracle.com/technetwork/issue-archive/2011/11-jan/o11mysql-194109.html

 


December 22, 2010 | » Comments (0)

eMediaVitals: Moving content to the cloud

October 14, 2010

I had the pleasure of speaking with Ellie Behling from eMediaVitals recently.  She was working on an article about publishers moving to the Cloud for their CMS and website delivery and came across Clickability in her exploration of the options that publishers currently have:

http://emediavitals.com/content/moving-content-cloud

 


October 14, 2010 | » Comments (0)

MySQL Sunday at Oracle OpenWorld 2010

September 20, 2010

Yesterday, I had the pleasure of speaking at Oracle OpenWorld as part of their MySQL Sunday.  I co-presented with Mark Matthews, a Principal Engineer on the MySQL engineering team. 

This joint presentation was the culmination of a path that was started almost three years ago.  Back then, MySQL's Rob Young and an army of MySQL folks piled into our conference room to ask some questions about our use of MySQL: what were our pain points, what tools were we lacking, what did we think of their ideas on new products like the MySQL Proxy and others, etc.

They did a great job of listening to us and other MySQL customers to create tools like the MySQL Enterprise Monitor. Yesterday, Mark talked about the roadmap for these tools, and showed the latest and greatest of the MySQL Enterprise Monitor and Query Analyzer.  Clickability served as the "real world" case study, and I described how we use the Query Analyzer for end-to-end performance management and optimization of our SaaS WCM Platform.

My slides from the presentation are here, and thanks to Mark for posting the full version of our presentation here.


September 20, 2010 | » Comments (0)

Surfing the Web with Grandpa

September 12, 2010

This post is less about Clickability and more about the Internet and, I dare say, humanity in general. For the last 5 days I was away from the office, visiting family back home in Rhode Island, and this afternoon I surfed the Internet with my grandfather for the first time.

He is 92 years old, and in contrast to most of the last 65 years, he talks a lot about his time in the Navy during World War II these days.  His stories are amazing - they are not tales of bravery, or combat, or triumph.  They are mostly stories about small things during a conflict of epic proportions.  They are stories of friendship, camaraderie, and discovery that describe the elements of human experience that he found during an unimaginable time.

I heard many of the best ones again this weekend, such as the making of donuts for the 59 men on his ship based on a recipe in the Navy Cookbook that he had to scale back from serving 1000 men!  And the bottle of black label bourbon he snuck past the shore patrol and hid in his locker, judiciously sharing with his shipmates (he still remembers who) over the course of months rather than in one "blowout" night. 

To go along with the stories were the pictures - amazing, one-of-a-kind photos. Cameras were strictly forbidden to sailors and soldiers during the war. However, with special permission from his Captain, my Grandpa kept his 35mm camera safely stowed beside the 40mm gun which was his post during general quarters, capturing shots of his life in the Navy and the events unfolding around him.

This afternoon, my wife and I found ourselves searching the Internet with him for his ship and shipmates.  We found a couple of pictures (like this one LSN 326), and we found a scanned program from his ship's commissioning, including the roster of sailors aboard.  This was all "user generated content" collected by websites that encouraged veterans to share their photos and stories online.  Yeah, UGC!

At Clickability, when we talk about websites and design our products and services, we are focusing on commercial aspects of the Internet. But the human implications of the Internet are so much greater. Facebook, Twitter and others have created a new social fabric, Wikipedia answers (almost) all of our questions, and the ability for anybody to create content on the Internet means it can truly become an archive of humankind that is open and accessible to all.


September 12, 2010 | » Comments (0)

To Cloud or not to Cloud - that is the Question

August 28, 2010

I am frequently asked questions about whether or not Clickability has considered moving to Amazon or other Cloud infrastructure providers.  This is indeed something that has been discussed and contemplated for years.  In fact, every six months or so since he started at Clickability, Tom C has gone through a cost analysis of moving the Clickability Platform to Amazon.

In general, things are rapidly moving in the right direction to make changes in how our services are delivered, and at some point relatively soon we will likely begin to experiment with a hybrid approach to some services.  However, there are still some challenges to making a headlong plunge into a migration to Amazon:

  • First, a current analysis shows that a wholesale move to Amazon would be ~10-20% more expensive than our current infrastructure. The main reason is that we are pretty good at running our servers at high utilization. The situations where you can make great gains by moving to the cloud are when you are not fully utilizing servers and the "pay-as-you-go" pricing models provide very good leverage (see the toy illustration after this list).
  • Second, current Cloud provider capabilities and SLAs are getting better, but are still not to the point where we could use them without reworking some architectural aspects of our platform. We rely on sophisticated load balancing capabilities to route customer domains to the right servers, and this level of sophistication simply is not available in Amazon (yet).
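
To make the utilization point concrete, here is a toy back-of-the-envelope comparison; every number in it is made up purely for illustration:

    // Toy comparison (all numbers hypothetical). An owned server is a fixed monthly
    // cost whether busy or idle; on-demand cloud capacity is billed per instance-hour.
    public class CloudCostSketch {
        public static void main(String[] args) {
            double ownedCostPerMonth = 300.0; // amortized hardware + colo + support
            double cloudCostPerHour = 0.50;   // on-demand instance price
            double hoursPerMonth = 730.0;

            // A highly utilized server needs a cloud instance running 24x7,
            // so the cloud premium shows up directly (~$365 vs. $300 here).
            double cloudAlwaysOn = cloudCostPerHour * hoursPerMonth;
            System.out.printf("Owned: $%.0f/mo, cloud 24x7: $%.0f/mo%n",
                    ownedCostPerMonth, cloudAlwaysOn);

            // A server that is busy only 30% of the time can be elastic in the
            // cloud and pay for just the busy hours - that is where the win is.
            double cloudElastic = cloudCostPerHour * hoursPerMonth * 0.30;
            System.out.printf("Cloud, elastic at 30%% utilization: $%.2f/mo%n", cloudElastic);
        }
    }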

Despite these limitations on moving the full production infrastructure to the Cloud, there are many ways that we currently use Amazon, including the dozens of Content Connectors we host, long term log archival, and hosting for our status portal and Community Portal. When we begin to look at production services in the Cloud, the first will certainly be the ones that gain the most from the model: either applications that would benefit from a lot of elasticity (e.g. search servers and indexing) or those that are a true commodity (like storage).


August 28, 2010 | » Comments (0)

e.Republic Reaps Benefits Working in the Cloud

July 30, 2010

e.Republic, a Clickability customer, was featured yesterday in this nice article on Folio:
http://www.foliomag.com/2010/publishing-cloud-0

The topic is "cloud" platforms and how publishers like e.Republic are leveraging them to deliver their content and online services.  Some of the benefits that e.Republic has reaped from working in the cloud and with Clickability include reduced IT costs, faster time to market with new web projects, and the ability to apply resources more strategically.

Two of my favorite statements:

"e.Republic has seen savings in administrative man-hours of upwards of 200 man-hours per month across the enterprise. "

"e.Republic says its staff is empowered to test more concepts on the Web to find out what works and what doesn’t."

Visit e.Republic's website here:


July 30, 2010 | » Comments (0)

10x'ers: Evolutionary Leaps and Bounds

May 22, 2010

Sometimes evolution comes in small incremental changes and other times it will be in leaps and bounds. 

Back in the fall, we were working on a huge content import as part of a customer implementation.  We were able to sustain content loading at over 10 times previous peak rates!  There was a significant amount of work required to achieve this throughput - from parallelization of the data loading process on Amazon EC2, to reworking the procedure to prevent our search indexing servers from getting overloaded, to seamlessly adding new storage capacity. 

This past week we did it again - we hit another "10x" moment. Our engineering team reworked some internals on our Lucene-based search servers in pursuit of better operational stability. In the process, we ended up increasing the throughput of the search servers by a factor of 10! Each search server can now handle 10 times as much traffic as before, and do it without any of the stability issues we have faced over time. Yippee!

Any time you create an order of magnitude change like these examples, it is a big deal.  This is true in technology and any other area of the business.  Just think about these what-ifs:

  • What if you cut implementation times to 1/10th the time?
  • What if your application was 10x faster for users to do tasks?
  • What if you generated 10x the number of leads?
  • What if your traffic increased 10x?
  • What if your service was 1/10th the cost to deliver?
  • What if your company was 10x as large?

Some of these may be far fetched, and others will simply occur as a business grows and evolves.  But each one is certainly a game changer.

Evolutionary leaps are core to the concept of this blog - http://www.localpeaks.com/Local_Peaks.html - and a 10x'er like these may very well be that leap to the next higher peak.


May 22, 2010 | » Comments (0)

If We Build It, Will They Come?

May 04, 2010

Last weekend our new Clickability User Community portal went live. We published a large library of content as a starting point - over 1,000 documents. This is now open to our users to search, read, comment on, rate, and even add to. The better we are at capturing, organizing, and publishing information and documentation for our platform, the more successful and self-reliant our customers will become.

There was a lot of effort put into building out the portal, and it will certainly take its fair share of care and feeding over time to keep it well organized, accurate and up to date.   Aside from just "knowing" that this new portal was something that we should do, there was some clear evidence from our user base that a strong community experience will be of great value:

  • Three customers recently began to collaborate on their approach to Social Media and Social Networks like Facebook. Not only is this the type of conversation that we want to be happening between customers, but we want to be in the middle of it. The new portal provides such a forum.
     
  • Early this year, one of our users built a Firefox extension to facilitate using our platform.  He officially submitted this to Mozilla to share with other users:  https://addons.mozilla.org/en-US/firefox/addon/50550 .  Comments like this one: "Makes testing and debugging easier. Looking forward to incorporating it into my daily work flow" clearly indicate he provided something valuable.
     
  • Another customer recently kicked off a project to clean up their entire deployment and is hungry for best practices to help with this process. This is a perfect thing to ask the community, leveraging the combined learnings of all of our customers. By being active participants in these conversations, we can distill and structure the breadth of practices into true best practices for the user base.

So, now that we have built it, will users come?  Will they actively engage with others in the community and with our company through the portal?  Early metrics and feedback are very positive, but communities take time to develop and mature -  this is something that I am very much looking forward to being part of.

 


May 04, 2010 | » Comments (0)

Technology for Today's Toddlers

April 15, 2010

This morning my almost-4-year-old son asked: "What's a website?" I talk about websites inside and out all day long during the week -- but how do you answer this simple question from a 3-year-old? It is not about DNS, or HTML, or servers, or lead generation, or brand management, or any of the other things from my daily conversations. For little Ben, it is a more fundamental answer: "it is where we look at toys on the computer".

This got me thinking about the evolution of technology over time and where today's kids have entered into this evolutionary trajectory.  My conclusion is that it must be a confusing time for the little ones!  Devices are emerging (Hello, iPads and other tablets!), evolving, and converging around them.  Likewise, communication mechanisms and practices are also rapidly changing. 

When I was little, there were three ways to communicate with my grandparents:  phone calls, snail mail, and visits.  Now, this number has exploded to at least 8 with the addition of emails, text messages, voice mails, Skype, and IM.  Each one of these things has nuances and protocols to understand, such as when is it appropriate, what sort of response time is expected, etc.

Below are just a few observations and paradoxes I've seen my 2010-era toddlers process as they explore the technology around them:

  • Why can't you hold something up to a cell phone to show the grandparents like you can when on the computer with them (via Skype)?
  • Why can you email a picture from Mom's phone, but not from Dad's pocket digital camera?
     
  • Why can't you move web pages up and down on the computer like you can on Mom's phone by scrolling on the screen with your fingers? (yes, we have very smudgy computer screens)
  • How do you explain what a television station and schedule are to someone who has never seen a video that was NOT on demand via Tivo, DVD, or the Internet?
     
  • "Checking the computer" is a complete replacement for the plethora of phrases my parents uttered, like: "Checking the newspaper", "Check the phone book", "Checking a map", "Checking the calendar", "Checking the encyclopedia", etc.
  • A phone that is actually attached to anything is a source of extreme curiosity and skepticism.  Same thing with any non flat-panel TV or display.

I will finish this with one of my favorite quotes, from technologist Alan Kay: "Technology is anything that was invented after you were born." Something to think about the next time you are watching on-demand video on your iPad in the park...


April 15, 2010 | » Comments (0)

Internet Evolution Generates New Life Forms

March 20, 2010

The Internet continues to evolve: new technologies, new practices, new devices, new standards, etc.  The needs of companies to be effective in this evolving landscape have evolved as well.  A platform can be a sound foundation from the technology side, but it is only part of the overall picture.  Another major part is the people part. 

By "life forms" I, of course, mean the collection of professionals required for a company to live and thrive on the Internet in this evolving environment.  You must have the right people to work on the technology, and you must have the right people to work on the content and the experience.  We have customers who thrive when they have the right people working on things, and other customers struggling at times as they are not "staffed for success".

To be fully equipped for the websites of 2010, there are whole jobs and responsibilities that have emerged to drive marketing and social interaction on the web.  The following are five job titles that simply did not exist 10 years ago, but are now in demand and commonplace on job boards:

  • Blogger
  • Community Manager
  • Content Manager
  • Social Media Strategist
  • User Experience Analyst

These are essentially "non-technical" people and we talk a lot about "non-technical" people in our messaging.  Our claim is that we free them from the delays and pain of working through IT for website updates and changes.  This is in fact true - and we do help them keep their content fresh.

As a platform provider, we also need to keep in mind that there are lots of "non-technical" people playing vital website roles for our customers beyond simply publishing content. We need to continue to keep them in mind and not only ensure that we are serving their needs today, but also keep our eyes open on how to serve the next wave of emerging Internet professionals.


March 20, 2010 | » Comments (0)

"We work in technology. Technology breaks."

February 26, 2010

One of my favorite quotes ever from a customer came during the negotiation of SLAs a few years back. Their CIO's simple statement, "We work in technology. Technology breaks.", was very powerful - it acknowledges that things will always go wrong with technology, and rather than holding us to a measure of perfection, they were more interested in having rigorous expectations about what happens when things do go wrong.

A standard practice that we follow at Clickability is the creation of Post-Mortem reports any time something does indeed go wrong. Whenever something negatively impacts our services, we create a report that details the following:

  • Incident summary - what went wrong and what was affected

  • Incident timeline - when and for how long the problem existed

  • What happened - details of what led to the problem, be it hardware failure, software issues, a process breakdown, human error, or a combination of such

  • What was done to fix it - how we restored the services to normal operations

  • How we will prevent this in the future - details of the immediate and future infrastructure changes, application updates, process improvements, monitoring additions, or any other means by which we believe we can prevent the issue from happening again.

These reports are not only distributed internally, but also sent to customers and posted in summary on our public status portal. 

It is this sort of transparency that establishes trust, which is clearly evident from the following email that the support team received this week:

As I've written before, I greatly appreciate these post-mortems.  So many other companies try to sweep problems under the rug, whereas the honesty of taking responsibility for a problem and owning up to it let me know that I can rely on you in the future.

This illustrates why this transparency is so critical in a business like ours - it is what builds and maintains long term relationships.  We work in a business that involves technology and people - there will always be problems of one sort or another.  It is how we respond to such problems that ultimately defines who we truly are to our customers.


February 26, 2010 | » Comments (0)

Unraveling the Tangled Web of Billing

February 03, 2010

Today's post covers the rather unglamorous topic of Billing. It may not be that flashy, but as a business it is one of the most important things we do - it's how we collect money from our customers!

I remember a time earlier in Clickability's life when the actual work of stuffing the bills into envelopes was a task that several of us shared each month (my particular expertise was in folding the bills in thirds). As menial as the job was, it was one of my favorite things to do each month: it was a way to tangibly see the fruits of our labor, and over time the stacks of envelopes got higher and the dollar amounts on the bills larger. We have come a long way since then in terms of improving and streamlining our billing process, but it still remains something of significant complexity.

This process touches almost every team in the company, including the following roles and responsibilities:

  • Account Management - tracking overage issues and upsell opportunities

  • Technical Operations - managing and maintaining the systems that collect and provide usage data

  • Engineering - building and maintaining the Clickability Platform, which includes data collection, aggregation, and reporting

  • Professional Services - tracking hours and projects to ensure that implementation work and PS projects are billed appropriately

  • Technical Support - tracking hours used working on cases, and deciding what is billable vs. non-billable

  • Finance - running all of the data through a process to generate bills, rigorously reviewing them for anomalies, and ultimately sending them out to customers.

Not only are multiple departments and people involved, but multiple systems as well: our CRM to track Support Cases, our Accounting Software to do the billing and accounting, and several of our own technology components. Changes or breakdowns in any of these systems can cause big headaches in billing.

This topic is top of mind this week for two reasons. The first is that we had a great "Continuous Improvement Moment" in delivering our January bills. After some headaches with the December bills, the teams rallied to provide thorough and timely tabulations of billable hours, improvements to our usage billing scripts, and prescreening of data. The Finance team only had to spend half as much time processing the monthly bills as a result - our CFO personally saved over 10 hours of his time! Fantastic!

The second reason this is top of mind is we recently concluded that we needed a better approach to managing the end-to-end billing process, particularly around the internal technology and systems behind it.  Initial goals will be to define the end-to-end process, determine ownership of each step, and ensure that change controls are in place so that changes to any one part do not have undesirable consequences.  Ultimately we will look to improve and streamline the processes even more, but for now completely defining, understanding and rigorously controlling the current process will be a great step forward.


February 03, 2010 | » Comments (0)

My Brush with a CMS Lynch Mob at the ASBPE Digital Symposium

November 07, 2009

I spoke at the ASBPE (American Society of Business Publication Editors) Digital Symposium Friday afternoon.  I was in a session with two other speakers discussing CMS platforms and what Editors need to think about in selecting a platform.  We each had about 20 minutes and I was the third speaker.  This is how things unfolded:

Speaker One:  “All Content Management Systems suck.  The ‘S’ in CMS stands for ‘sucky’” -- and this was just the beginning of about 20 minutes of general CMS bashing!  This was not just some disgruntled IT professional either – this gentleman has been around the block at several prominent publishers, leading CMS efforts with platforms that included Vignette, Interwoven, Movable Type, Drupal, etc.   [Speaker One = Fredric Paul, publisher and editor-in-chief, bMighty.com ]

Speaker Two:  Another mildly uncomfortable 20 minutes as the Editorial Director of an online property strolled through your standard list of CMS woes that he had experienced in the last few years: the homegrown system that fell apart after the developer left, the CMS project that was 6 months late and 100% over budget, the ridiculous confines of an inflexible platform, the pain to publish new content, etc. [Speaker Two = Tyler Davidson, Editorial Director, Meetings Media ]

Speaker Three:  When I took the podium I was a bit fearful that the audience had been turned into an angry lynch mob against the token CMS vendor (aka me!) and were about to exact revenge for years and years of painful web publishing experiences!

I resisted the urge to dive into how the Clickability Platform solved all these problems, how we empowered our non-technical users, how we provided BOTH flexibility and control, and all the other wonderful things I wanted to say in defense of our CMS Platform. Rather, I acknowledged that these pains were exactly why we decided to get into the Content Management business. We knew CMS was broken and we knew that we could do something new, different and powerful through our SaaS model. I then moved on to outline our vision for the next generation of websites and WCM.

At the end, it was Fred (aka Speaker One) who asked me the best question. He stated that he loved our future vision of websites and WCM, but questioned how customers can move onto this sort of thing if they are still struggling with the basics. This is indeed a great question. With all of the exciting, innovative, and compelling things that you can do on the web right now, we must remember that the vast majority of website publishers are still struggling with the basics. And these are problems that we solve on a day-to-day basis for our customers.

Overall, I am thankful that everyone left their rotten tomatoes, pitchforks, and torches at home yesterday, and hope that my fellow speakers and the rest of the audience will indeed take up my challenge: allowing me to prove that there is at least one compelling WCM offering on the market.
 


November 07, 2009 | » Comments (0)

Omniture and Adobe: Missing Steps?

September 17, 2009

Six years ago, the vision for Clickability's Web Content Management platform was formulated. A central element of this vision was our idea of the Content Value Chain. This sequence of steps defines the path that content goes through during its lifecycle:

[Diagram: the Content Value Chain - Create, Manage, Publish, Deliver, Interact, Measure, Adapt]

The first three steps are the fundamental activities of all WCMS systems:

  •  “Create” is to author the content or digital asset    
  •  “Manage” is to apply metadata, establish workflow, and organize with other assets
  •  “Publish” is to combine content elements into a fully rendered format

Traditional WCMS systems stop at this point, requiring the customer to figure out the rest of the steps on their own.  However, we saw the value of an integrated solution across the entire value chain and worked to deliver a platform that also did the rest of the steps, too:

  • "Deliver" is to transfer the content to the consumer, be it as a Web page, an XML feed, an email, or any other mechanism
  • "Interact" is when the content consumer does something with the content: saving it, sharing it, commenting on it, etc.
  • "Measure" is observing what happens to the content as it passes through the value chain
  • "Adapt" is taking what is learned and using this information to optimize the entire value chain or individual steps of it


I recently saw a very similar value chain represented in the context of Adobe's move to acquire Omniture. Adobe's point of view on the value of the combination is published on their website and includes the following:

[Diagram: Adobe's published value chain, which parallels the steps above but omits "Manage" and "Publish"]

Aside from some semantic differences, there is only one key difference between the two value chains – the missing steps of "Manage" and "Publish" from the Adobe and Omniture combination. [Aside: should this be referred to as Adobiture or Omnobe?]

What does this mean? Well, first off, it is great to see the idea of the Content Value Chain being recognized as an important way of viewing content and content processes. Secondly, it also means that to really have a complete solution, Adobe needs to be thinking about how to include management (and publishing) capabilities in their overall offerings.


September 17, 2009 | » Comments (0)

When will IE6 be extinct?

August 06, 2009

IE6 is the current nemesis of web developers everywhere. As of the last release of our CMS platform at Clickability, we have finally discontinued support for IE6. When released in 2001, IE6 was state of the art; now it is considered, well, something that a dung beetle would be quite fond of.

Once software is released, it will typically evolve for a period of time with patches and updates, but at some point a new faster, smarter, bigger product or version will come along that will put the original software on the slow march towards extinction. SaaS is a bit different in that it can evolve to keep pace with the evolutionary forces at work - this is indeed one of the key benefits of the SaaS delivery model.

I found this the other day and love the approach that Weebly has taken to speeding things along for IE6: www.ie6nomore.com/

 


August 06, 2009 | » Comments (0)

Most Popular - a quick retrospective

May 19, 2009

I had an interesting conversation with The Numbers Guy (aka Carl Bialik) from the Wall Street Journal yesterday. He was researching the idea of popularity and how it affects people's choices. One of the areas he was exploring was the Most Popular lists that are now standard on all premium media websites. He published a blog post today about this, entitled The Growing Popularity of Popularity Lists.

At Clickability, we have been providing a Most Popular service since 2001, when we started aggregating the data behind our EMAIL THIS, SAVE THIS, and PRINT THIS products into Most Popular lists.

Several interesting aspects of these lists came out during our conversation:

1)    Most Popular lists started as standalone pages/features (i.e. you clicked to a full page that contained the most popular articles), and over time evolved into page components and widgets. We actually had a widget for this in 2002, but had very little uptake on it, as embedding 3rd party page components was not standard practice at the time. The evolution of web publishing practices to accommodate (and even rely on) 3rd party components, and the emergence of standards like RSS, changed everything in the use of embeddable lists.

2)    These lists were some of the earliest forms of "social media" – they provided a voice back to the publishers about the content. This was either in a passive way (by tabulating page views) or in a more active way, by using stats from tools like "email this to a friend" or ratings to generate the most popular lists. In fact, Digg and others like it are actually an extension of the "most popular" from one site to the Internet at large.

3)    With some of our customers, the Most Popular feature leapt from being just an end user feature to an editorial tool. The managing online editor at a premiere news brand was a shining example of this. He reviewed the up-to-date most popular list throughout the day as an ongoing decision making tool. The Most Popular was also reviewed in the daily editorial meetings. In the end, we built some special analytics tools (the Most Popular Tracker and Calendar) specifically for people like him to rapidly assess the lists as part of their editorial role.

4)    The most popular lists have also jumped from just being on the website to other publishing channels. Places where I have seen the Most Popular lists that we power propagate:

a.    RSS feeds into My Yahoo! and other personal portals
b.    Within periodic email newsletters
c.    Shown and reviewed on television (by CNN)
d.    Published in newsprint the following day
e.    Published in periodical magazines


5)    The metrics behind the most popular lists have also changed. The first metrics were views and shares (via "email this" type tools). This has evolved into ratings, most commented on, most blogged about, most searched for, etc. The NY Times has some really nice things in this area. Also, segmentation of most popular lists based on geography is now showing up – it makes a lot of sense for those with global audiences.

While the basic functionality of the Most Popular lists has remained the same, they certainly have evolved over time (almost a decade!) since they first appeared.
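
The core tabulation behind such a list has stayed simple even as the signals evolved. Here is a minimal, hypothetical Java sketch of a weighted tally across the kinds of signals mentioned above; the weights and event names are illustrative, not how our service is actually implemented:

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.Comparator;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Hypothetical sketch: tally weighted interactions per article, return the top N.
    public class MostPopular {
        private final Map<String, Double> scores = new HashMap<String, Double>();

        // Active signals (email this, ratings) can be weighted above passive page views.
        public void record(String articleId, String eventType) {
            double weight = eventType.equals("view") ? 1.0
                          : eventType.equals("email") ? 5.0
                          : eventType.equals("rating") ? 3.0 : 1.0;
            Double current = scores.get(articleId);
            scores.put(articleId, (current == null ? 0.0 : current) + weight);
        }

        public List<String> topN(int n) {
            List<Map.Entry<String, Double>> entries =
                new ArrayList<Map.Entry<String, Double>>(scores.entrySet());
            Collections.sort(entries, new Comparator<Map.Entry<String, Double>>() {
                public int compare(Map.Entry<String, Double> a, Map.Entry<String, Double> b) {
                    return b.getValue().compareTo(a.getValue()); // descending by score
                }
            });
            List<String> top = new ArrayList<String>();
            for (Map.Entry<String, Double> e : entries.subList(0, Math.min(n, entries.size()))) {
                top.add(e.getKey());
            }
            return top;
        }
    }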


 


May 19, 2009 | » Comments (0)

LAMMP gets an extra M

May 04, 2009

I recently had the pleasure of connecting with Patrick Galbraith who works in the realms of MySQL, memcached, open source, web development and other such things.  He is working on finishing up a book entitled “Developing Web Applications with Apache, MySQL, memcached, and Perl” and was looking for some real world applications using memcached (as we do at Clickability).

His book adds another "M" (for memcached) to the standard LAMP stack, as memcached has become a staple building block for scalable web platforms.
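
For readers who haven't worked with memcached, the heart of its use is the cache-aside pattern: check the cache first, and on a miss, do the expensive work and store the result with an expiry. Here is a minimal sketch using the open source spymemcached Java client; the key and rendering method are hypothetical:

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import net.spy.memcached.MemcachedClient;

    // Minimal cache-aside sketch with spymemcached; names are illustrative.
    public class PageCache {
        private final MemcachedClient cache;

        public PageCache(String host) throws IOException {
            cache = new MemcachedClient(new InetSocketAddress(host, 11211));
        }

        public String getRenderedPage(String pageKey) {
            String page = (String) cache.get(pageKey); // 1. try the cache
            if (page == null) {
                page = renderFromDatabase(pageKey);    // 2. miss: do the expensive work
                cache.set(pageKey, 300, page);         // 3. cache the result for 5 minutes
            }
            return page;
        }

        private String renderFromDatabase(String pageKey) {
            return "<html>...</html>"; // placeholder for the real MySQL-backed rendering
        }
    }

The expiry keeps the cache self-healing: a stale or evicted entry just costs one extra trip to the database.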

This got me thinking – we are close to the LAMP stack, using Linux, Apache and MySQL.  However, we use Java instead of P(erl|HP|ython).  Is there an acronym for us?  JAML, LAMJ, JLAM?  A quick foray on Wikipedia indicates that there is no clear winner to date:

  • JLAM: No results.
  • JAML: “Junctional Adhesion Molecule-Like… composed of two extracellular immunoglobulin-like domains, a membrane-spanning region, and a cytoplasmic tail involved in activation signaling”.  Interesting, but not really what I was looking for.
  • LAMJ: This at least redirects to the LAMP page, so this would appear to be the front runner.

Please share your opinion on the poll in the right column and definitely let me know if you have any other great ideas for it.

 

BTW – Patrick’s book is available for pre-order on Amazon here.
 


May 04, 2009 | » Comments (0)

MySQL User Conference 2009 Presentation

April 22, 2009

On Clickability's 10th Birthday I was pleased to speak at the MySQL User Conference.  It was an interesting conference coming on the heels of the news of Oracle buying Sun, who just over a year ago bought MySQL.

Click here for the presentation. The accompanying narration, sidebars, and jokes are available by request.


April 22, 2009 | » Comments (0)

Phases of SaaS Hardware Purchasing

March 06, 2009

SaaS is a business model requiring an investment in infrastructure -- and hardware costs money. There are many different approaches to deploying and managing hardware, and many different preferences on the choice of hardware for the job. As a SaaS business starts up and matures, I believe there are three phases the company will go through in purchasing hardware, as the overall needs of the business itself change.

 

  • Bootstrap - Unless you are backed by big VC bucks from day one, you are likely ramping up on a shoestring budget. Frequent cost-saving measures include buying generic hardware, skimping on support, and finding deals on the used equipment market (including eBay!). This is a period when you are figuring out how the platform will operate and where the scaling points and operational needs truly are.
     
  • Growth - During the growth phase, the company is growing, the platform traffic is hopefully growing rapidly, and technical resources are best devoted to quickly deploying proven infrastructure building blocks and building a stable, scalable platform. The last thing you want during this period is to lose momentum by spending time troubleshooting unreliable hardware, or tuning and tweaking the hardware before it is ready for prime time. This is a time to buy brand name servers (with support!), top quality network gear, etc. The added expense will be worth it in terms of the ability to execute quickly.
     
  • Mature - When a SaaS platform hits a certain point of maturity, scaling issues should have been recognized and addressed. There should be enough fault tolerance built into the platform that anticipated hardware failures are handled seamlessly. At this point, you have the opportunity to increase margins by optimizing hardware costs. It's time to revisit generic servers, time to benchmark different brands, and figure out how to get the best bang for the buck. Every dollar saved goes right to the bottom line.

Not all companies will go through each of these phases and of course there are those who will choose to outsource all of the infrastructure and operations from the outset.  For those who choose to forge ahead themselves, think about how to maximize the value of your hardware investments, and don't be afraid to either go higher end or lower end depending on the evolving needs of the business.


March 06, 2009 | » Comments (0)

To Infinity and Beyond!

February 22, 2009

"Web Scale" is a phrase I use quite frequently.  I encourage engineers to think in "Web Scale", and consider such things as "how will this work when things get really, really big?", or "what happens when you have 10x or 100x that many users?"

I believe that there are three categories of scale:

  • Prototype Scale - This is the first phase of things. You never quite know what you will get when you start out building a product or application, but by following some good technology practices (e.g. using database indexes correctly) you can be reasonably confident of getting something that works that you can provide to some real customers and users. Only then can you truly start figuring out what needs optimization and scalability work.
     
  • Enterprise Scale - You have hit Enterprise Scale when you are confident that you can service the largest customer that you are likely to get.  For installed software, you may actually be able to stop here.  But for those providing SaaS solutions or Internet services, there is a critical next step.
     
  • Web Scale - When you hit Web Scale, you have identified and solved all scaling challenges. Everything in the platform will scale linearly such that you can service not only your largest single potential customer, but any number of customers of that size.

There are several big name internet companies that pop to my mind when I think about Web Scale: Google, Yahoo!, Amazon. There are even some SaaS platforms that can make that claim, like Salesforce.com and Omniture. One thing that these companies all share is a well-architected and crafted platform that uses distributed computing and sharding to take HUGE challenges and make them small and repeatable.
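
The sharding idea deserves a sketch of its own: deterministically map each customer (or other partition key) to one of N independent slices, so that adding capacity means adding slices rather than growing any single one. A naive, hypothetical version in Java follows; real systems tend to use consistent hashing or a directory service so that adding shards does not remap every key:

    // Naive sharding sketch: deterministically route each customer to one of N shards.
    // Each shard holds roughly 1/N of the customers, so capacity grows with shard count.
    public class ShardRouter {
        private final String[] shardUrls;

        public ShardRouter(String[] shardUrls) {
            this.shardUrls = shardUrls;
        }

        public String shardFor(String customerId) {
            // abs() of the remainder keeps the index in range for negative hash codes
            int index = Math.abs(customerId.hashCode() % shardUrls.length);
            return shardUrls[index];
        }
    }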
 


February 22, 2009 | » Comments (0)

Sharing the Load? Or Sharing the Poison?

February 10, 2009

Our network engineer and I have been going back and forth for a while now about load balancing strategies, both at the web/application layer and also at the database layer.

It is clear that there are two competing interests at work:

1)    To distribute load and provide redundancy and high availability
2)    To limit the propagation of problems and confine issues

The former is clearly driven by one of the key principles of SaaS – that by running the application for many customers in a single instance, economies of scale in the infrastructure can be applied to such challenges as creating high availability.

However, blindly allowing automated pooling and failover of all resources has dangers too. There is the potential that problems will spread from one server or pool of servers to other servers, and potentially to the entire platform ("Sharing the Poison").

Such things do happen. We have seen it ourselves repeatedly - there is always that one customer who does things a little bit differently and hits that one query that crushes the database, or the traffic spike of a magnitude that nobody ever expected, or the crippling bug that is exposed out of the blue by a user.

Is there a load balancing scheme that maximizes both high availability and protection? We have no silver bullet yet, but it seems that by making some tradeoffs, configuring a combination of segmentation and pooling, and having "safe" failsafe mechanisms, we can potentially strike a nice balance between these forces.
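
One hypothetical way to sketch that compromise in code (far simpler than a real load balancer configuration): give each customer a home pool, allow failover only to a designated backup pool, and never fail over into a pool that is itself unhealthy, so that a poisoned pool cannot cascade across the platform.

    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    // Hypothetical sketch: segmented server pools with limited, guarded failover.
    public class PoolRouter {
        private final Map<String, String> homePool = new HashMap<String, String>();   // customer -> pool
        private final Map<String, String> backupPool = new HashMap<String, String>(); // pool -> backup
        private final Set<String> unhealthy = new HashSet<String>();                  // pools failing checks

        public String routeCustomer(String customerId) {
            String pool = homePool.get(customerId);
            if (pool != null && !unhealthy.contains(pool)) {
                return pool; // normal case: stay segmented
            }
            // Limited failover: only to the designated backup, and only if it is healthy.
            String backup = (pool == null) ? null : backupPool.get(pool);
            if (backup != null && !unhealthy.contains(backup)) {
                return backup;
            }
            // "Safe" failsafe: refuse rather than spread the poison to another pool.
            throw new IllegalStateException("No healthy pool for customer " + customerId);
        }
    }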
 


February 10, 2009 | » Comments (0)

MySQL User Conference 2009

February 03, 2009

I will be speaking at the MySQL User Conference again this year, presenting a session titled:  Clickability: Scaling SaaS with MySQL and Memcached.  My basic plan is to walk through the evolutionary history of the Clickability platform and how it evolved from a few servers to hundreds of servers across multiple data centers delivering hundreds of millions of pageviews. 
 

An evolutionary story?  Certainly.  Cell division?  Absolutely.  Primordial soup?  Perhaps.
 

I hope to see you there! 
 

 


MySQL Conference & Expo 2009

Clickability: Scaling SaaS with MySQL and Memcached

http://en.oreilly.com/mysql2009/public/schedule/detail/6929

 

3:05pm Wednesday, 04/22/2009
Location: Ballroom G
 

 

Building a SaaS platform requires application and infrastructure engineering that pushes beyond "enterprise scale" to "web scale". MySQL and Memcached play a key role in scaling the Clickability platform. This presentation will tell the evolutionary story of the core architectural and technology components that have allowed the Clickability Platform to scale from 0 to 400 million pages delivered per month without changes to the core platform architecture.


February 03, 2009 | » Comments (0)

"Rubbing Technical Antennas"

January 19, 2009

One of my favorite quotes ever from talking to a customer came when someone mentioned "getting the engineers together to rub technical antennas". What better metaphor could be used to portray the idea that engineers and technical people just plain communicate differently with each other?

[Interesting side note: Antenna has two plurals. Antennae is used for the jointed, movable, sensory appendages occurring in pairs on the heads of insects and most other arthropods, while Antennas is used for a conductor by which electromagnetic waves are sent out or received. In this case, I think antennae would have been more appropriate, but I am just relating what was said.]

Being able to communicate effectively about technology with technical people is obviously a critical function for any CTO.  However, equally important is the same effectiveness in communicating about technology with non-technical people.  The true duality of this jumped to the forefront for me last week as I was immersed in days straight of Power Pointing. 

The two diagrams below are of the same thing (our technology platform). The first was created as part of a marketing/investment pitch deck, while the latter was prepared as part of our architectural planning process.  Are they each effective in their intended context? Yes, I’d say so.  Are they interchangeable?  Certainly not!  I don’t even think that I used the same parts of my brain to create them.

[Diagram 1: the platform as drawn for the marketing/investment pitch deck]

[Diagram 2: the platform as drawn for the architectural planning process]

The ability to communicate in both the technical and non-technical context is something that I continuously work at – in fact, I consider it one of the most important parts of my role, perhaps just below driving technology vision.

And finally, a closing message for all the other techies out there:  bzzz – buzzzz – bizzizz – uzzzib – bizzo.


January 19, 2009 | » Comments (0)

And the winner is Jaguar!

January 14, 2009

Thanks to all the Local Peaks visitors who voted on what animal we should name our "J" release.  Voting is officially closed.  The final results were:

Jackal       | 13%
Jackrabbit | 14%
Jaguar      | 47%
Jerboa      | 25%

Please stay tuned for future Local Peaks polls, including additional release naming opportunities.




January 14, 2009 | » Comments (0)

Iteration 1, Day 1 of multiple scrum teams....

December 17, 2008

Today was a milestone for our development team at Clickability.  We have made the quantum leap from one scrum team to two scrum teams and today was the first day of the first sprint as such. 

Please consider this post as an initial report of what we are doing, to be followed up with future posts as to what has worked, what we have changed, and what has been challenging.

As of today, each scrum team consists of:
 

  • 3-4 java engineers
  • 1-2 QA engineers
  • 1 Product Manager
  • A tech writer (too frequently overlooked!) who serves both teams


Members from these groups serve in the roles of:
 

  • Product Owner (the PMs)
  • Scrum master (one of the Engineers)
  • Tech Lead (another one of the Engineers)

[Note: We have found the role of Tech Lead to be particularly important right now, as we have a lot of team members who have only recently joined the company.  If everyone on the team had been here for a long time, it probably would not be as important to distinguish.]

There are definitely things that we are concerned about in doubling our number of scrum teams.  The top ones and how we plan on addressing them are as follows:
 

  • Knowledge sharing – all code reviews will be performed across scrum teams to prevent the silo’ing of information
  • Parallel teams / single code branch – this is both a challenge and an asset.  Conflicts will occur, but we have a build engineer to keep things running smoothly and by maintaining a single branch, our continuous integration environment can uniformly deliver on a, well, continuous basis
  • Coherent design and architecture - we are forming an architecture committee that will consist of members of each scrum team that will ensure that architecture and design decisions are made consistently and in the context of "the bigger picture"
  • Consistent practices - over the past year, the team has collaborated to produce a well documented set of best practices around code style, refactoring practices, etc. that will serve to maintain consistency in the code base regardless of how the team operates as a whole
  • Stand up meetings – who goes first at the daily standup meeting has already been decided by a coin toss…

Below is a picture of our expanded scrum board, or as I have taken to calling it, the “double barrel scrum board”.  If it looks like it takes up an entire wall, that is because it does!  If you are ever in the neighborhood, feel free to stop by and look at all the PostIt’s.

[Photo: the "double barrel" scrum board]

PS - Those who have read my bio will realize that the last time I was dealing with 2 scrum teams was in a very different context....


December 17, 2008 | » Comments (0)

No “four legged fish”, please.

December 03, 2008

Evolution is a powerful force in nature.  Through a long sequence of small, sometimes imperceptible, changes, land mammals became whales and apes became man.  But such roads are not straight and narrow - evolution makes mistakes, it goes off on tangents, it starts in one direction only to be steered back in another.  The products of such digressions are the out-of-place creatures such as Ichthyostega, the “four legged fish”.  Some oddities linger around for a while in their obscurity, but for the most part, they are forgotten.

In software development, these evolutionary offshoots are such things as "custom" or "one-off" features.  How many SaaS platforms out there have platform code specific to individual customers?  Does

      if(customerid==1234){

look familiar?  I suspect more SaaS companies have such code in production than would like to admit.

Sometimes deals require one missing feature or enhancement to be added.  Whenever possible, it is far better to either turn the “special” request into a legitimate feature for all customers to use, or to simply say “No thanks, four legged fish are not allowed here”. 

However, in some extreme cases (particularly with early stage products and companies) the right thing to do is actually to make that custom addition and acquire the new customer.  CAUTION: this must be done with acceptance that this WILL (and I repeat, WILL) come back to haunt you at some point.  These custom additions are difficult to maintain, and there are usually only one or two people who may remember how the code is supposed to work when it either breaks or needs changes.  Certainly not a scalable or sustainable practice.
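
When the custom work does get done, a healthier shape is usually a named, per-customer feature flag rather than a hard-coded customer ID - the one-off becomes a capability any customer can legitimately turn on. A hypothetical sketch of the difference:

    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    // Hypothetical sketch: replace if(customerid==1234) with a named feature flag
    // stored as per-customer configuration rather than baked into the platform code.
    public class FeatureFlags {
        private final Map<Long, Set<String>> enabled = new HashMap<Long, Set<String>>();

        public void enable(long customerId, String feature) {
            Set<String> features = enabled.get(customerId);
            if (features == null) {
                features = new HashSet<String>();
                enabled.put(customerId, features);
            }
            features.add(feature);
        }

        public boolean isEnabled(long customerId, String feature) {
            Set<String> features = enabled.get(customerId);
            return features != null && features.contains(feature);
        }
    }

    // Usage: if (flags.isEnabled(customerId, "custom-feed-format")) { ... }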

The “four-legged fish” Ichthyostega is not the "missing link" between marine and land animals, but rather one of several short-lived “experiments”.



December 03, 2008 | » Comments (0)

Details of our MySQL Query Analyzer Use Case

November 20, 2008

Last week I was interviewed by Charlie Babcock from InformationWeek about the MySQL Query Analyzer.  I love tools that provide immediate, actionable information, and the Query Analyzer is just that.

His article (http://weblog.infoworld.com/openresource/archives/2008/11/mysql_query_ana.html) presents a few facts from our results, and I chose to publish a more detailed description here as I think there are some very interesting things to learn from our experience.

Optimizing a SaaS platform is a never-ending task.  I'd like to think that after 6+ years of running our platform in production, and scaling it multiple orders of magnitude, we've eliminated the obvious bottlenecks.  There just aren't any killer queries left that are simply too slow. 

Having picked off that low-hanging fruit over time, we were left with writing Perl scripts to parse SQL logs and other rudimentary analysis mechanisms.  From the first use of the Query Analyzer, it was apparent that the data available in it opened a new door of analysis for us, focused less on manual inspection and more on collected statistical information. 

Below is a description of our first use of the tool several months ago while it was in alpha.

 

Test case

1) We pointed one of our production application servers (a website publisher) at the MySQL Query Analyzer instead of the MySQL database server.  The Query Analyzer proxied the requests to the database, capturing statistics and metadata on the fly.  (A connection-string sketch follows after these steps.)

2) We let the Query Analyzer gather statistics over a 20-30 minute period.  This was live traffic - not a controlled test environment or benchmark test.

3) We analyzed the statistics to determine the most "expensive" query.  This was determined by looking at several of the statistics that the Query Analyzer records, such as Most Frequently Run, Most Records Returned, Largest Result Set (Bytes), and Most Processing Time. 

In the end, the chosen query was NOT the most frequently run query, nor was it the slowest query.  Rather, it was the one that collectively used the most processing time, returned the largest result sets, and also ranked in the top 10 in terms of frequency. 

We had a software engineer take a look at the code and figure out how to optimize it.
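
As a concrete illustration of step 1: because the Query Analyzer sits in front of the database as a proxy, repointing an application server is essentially a connection-string change.  The hostnames, port, and schema name below are invented for the example; this is a sketch of the idea, not our actual configuration.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;

    public class DataSourceSwitch {

        // Before: the application connected directly to the database server:
        //   jdbc:mysql://db01.example.com:3306/cms
        // After: same credentials and schema, now routed through the Query
        // Analyzer proxy, which forwards each query to the database while
        // recording statistics and metadata on the fly.
        private static final String URL =
                "jdbc:mysql://qan-proxy.example.com:4040/cms";

        public static Connection connect(String user, String password)
                throws SQLException {
            return DriverManager.getConnection(URL, user, password);
        }
    }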

Optimization

The query in question was intended to load the "placement list" for a website section.  It is a single table query that filters based on date ranges and performs a sort based on a rank value.  The query is well indexed and very efficient for small result sets. 

However, we realized that the query was unbounded and that over time, some of the result sets have grown from the hundreds to the tens of thousands of records.  There were two optimizations that the engineer coded:

1) The sort was removed from the SQL query and performed in the Java code once the result set was returned, as we like to push work out of the databases and distribute it in the application layer when possible.

2) We added logic to the code such that a bounded query was run first.  If this satisfied the data need, great!  If not, the unbounded query was run.  In the end, we ran more queries, but the average result set size (and cost) was much, much lower.  A sketch of this pattern follows below.
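
Here is a minimal sketch of the two optimizations combined, using plain JDBC.  The table and column names are hypothetical stand-ins for our schema, and the LIMIT-plus-one trick for detecting an overflowing result set is one simple way to answer 'did the bounded query satisfy the data need' - treat this as an illustration of the pattern, not our production code.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.Comparator;
    import java.util.List;

    public class PlacementLoader {

        // Hypothetical bound chosen so most sections are satisfied cheaply.
        private static final int BOUND = 500;

        // No ORDER BY in either query: sorting is done in the application
        // layer to push work out of the database.
        private static final String BOUNDED =
                "SELECT id, rank FROM placement WHERE section_id = ? " +
                "AND start_date <= NOW() AND end_date >= NOW() " +
                "LIMIT " + (BOUND + 1);

        private static final String UNBOUNDED =
                "SELECT id, rank FROM placement WHERE section_id = ? " +
                "AND start_date <= NOW() AND end_date >= NOW()";

        public List<long[]> load(Connection conn, long sectionId)
                throws SQLException {
            // 1) Run the bounded query first; asking for BOUND + 1 rows lets
            //    us detect whether the real result set exceeds the bound.
            List<long[]> rows = run(conn, BOUNDED, sectionId);
            if (rows.size() > BOUND) {
                // 2) Rare case: the section outgrew the bound, so fall back
                //    to the original unbounded query.
                rows = run(conn, UNBOUNDED, sectionId);
            }
            // 3) Sort by rank in Java instead of ORDER BY in SQL.
            Collections.sort(rows, new Comparator<long[]>() {
                public int compare(long[] a, long[] b) {
                    return a[1] < b[1] ? -1 : (a[1] > b[1] ? 1 : 0);
                }
            });
            return rows;
        }

        private List<long[]> run(Connection conn, String sql, long sectionId)
                throws SQLException {
            PreparedStatement ps = conn.prepareStatement(sql);
            try {
                ps.setLong(1, sectionId);
                ResultSet rs = ps.executeQuery();
                List<long[]> rows = new ArrayList<long[]>();
                while (rs.next()) {
                    rows.add(new long[] { rs.getLong("id"), rs.getLong("rank") });
                }
                return rows;
            } finally {
                ps.close();
            }
        }
    }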

Results

We have a benchmark test that we run against the website publishing application.  It serially hits the server with several hundred URLs from real websites that we publish.  This isn't a perfect "real world" test as it doesn't take into account concurrency, but by running the test against a cold server (nothing cached in memory) it does provide an indication of raw speed of execution.

Before any optimization, the baseline benchmark averaged over five minutes: 5:27.

After the code was optimized, the benchmark times dropped to an average of 2:54 - from 327 seconds down to 174, a 1.9x performance improvement!

For the amount of time put into the analysis and ultimate optimizations (less than two days of total analysis and engineering time), this represents a HUGE win from a scalability, performance, and cost-saving perspective.  It was only possible because of the database usage and load nuances revealed by the Query Analyzer's deep inspection.


November 20, 2008 | » Comments (0)

Barack takes spotlight and steals web traffic

November 10, 2008

Election day 2008, and then again the day after, were record-setting traffic days for us at Clickability.  Many of our customers are traditional media companies, like TV stations and newspapers, and events like elections (and tornadoes!) are huge traffic drivers.  The 2008 elections were no exception.

However, the most notable aspect of the traffic pattern that day was not the overall volume (this was not a surprise) but rather what happened at 9 p.m. PST.  When Barack Obama took the stage in Chicago, there was a dramatic and instantaneous drop in traffic to the media sites that we deliver. 

People stopped clicking and just listened. 


As soon as the speech was over, the traffic once again bumped up to where it was before.  A historic moment for the country, and a fascinating moment in the evolving social dynamics around media and the web.


November 10, 2008 | » Comments (0)

So, Is SaaS Cloud Computing or Not?

November 07, 2008

Cloud Computing is rapidly becoming the buzzword du jour.  As with all emerging buzzwords, it is a term that means many different things to many different people, and a consistent industry-wide definition has yet to emerge.  Some take a narrow approach and define Cloud Computing as only pure, virtualized utility services like storage and CPU usage.  Others take a much broader approach, encompassing everything from utility services to MSPs to SaaS to seemingly anything else that connects to the internet.

I’d say that my own current definition has been altered by a conversation I had last week with Eric Knorr, Editor in Chief of InfoWorld.  I used to take the narrower view of Cloud Computing, distinguishing Cloud Computing as the realm of “infrastructure components” (e.g. Amazon S3) while SaaS was the domain of full on-demand applications (e.g. Salesforce.com).  Eric started our discussion with the premise that “the cloud“ was originally a metaphor for the internet, and therefore Cloud Computing encompasses all services, platforms, and applications accessed over the internet.  When explained this way, the broader view of Cloud Computing makes sense to me – it adds some structure to a fuzzy topic.  It also works well as a starting point for discussing the value proposition of cloud computing, as each of these sub-areas is geared towards the same goal: allowing IT organizations to add on-demand capacity or functionality to their overall IT landscape.


November 07, 2008 | » Comments (0)

Is there a Chief SaaS Officer in the house?

October 16, 2008

This summer, I moderated a roundtable sponsored by the SIIA R&D Board, where a group of SaaS technology executives discussed sustainable SaaS practices.  The discussion covered a broad range of topics, including security, application development, rollout methodologies, etc.  It was amazing how many times the conversation shifted back to some legal aspect of things, from SLAs to negotiating with partners.  The CTOs and VPEs sitting around that table (myself included!) are clearly being called on to perform analysis and duties far outside the traditional scope of product development, engineering or research and development. 

This seems to be a prevalent theme in SaaS businesses, where complex partner and business relationships meet head on with complex technology and service models.  People look to the CTO/VPEs as the ones with the answers for such complex things, as they are the ones who connect all the pieces together from a technology perspective.  At one point someone suggested that maybe there is a new role emerging at SaaS companies: the Chief SaaS Officer. 

This would be someone with great technical aptitude, but also someone with the business acumen and legal training to construct meaningful partner agreements from both a technology and business point of view, someone who could build SLAs that match both the technology and legal requirements, and someone who can handle many of the ancillary tasks that are currently falling to the technologists.

I do not know of any Chief SaaS Officers out there right now, but I suspect that this role is currently distributed throughout the executive teams of most SaaS businesses, with much of it falling onto the technologists among them.


October 16, 2008 | » Comments (0)

MySQL UC 2008: Mitigating Database Replication Latency

September 15, 2008

At this year's MySQL User Conference, I gave a presentation titled "Mitigating Replication Latency in a Distributed Application Environment".  This was a fun presentation to do, as it highlights what I think is one of the thorniest issues we've encountered building the Clickability Platform. 

We have a distributed platform environment built around loosely coupled applications sharing a replicated database infrastructure.  There is always latency in database replication, and we've gone through several iterations of how to create a reliable cache clearing mechanism that accounts for it.  The presentation is available online here:

http://en.oreilly.com/mysql2008/public/schedule/detail/1762
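
The presentation covers our actual mechanism; as a quick illustration of the general problem, below is a simplified sketch of one well-known approach: before acting on a cache-clear event, check how far the local replica is behind the master via SHOW SLAVE STATUS and defer the clear until it has caught up.  The lag threshold, retry pause, and deadline are invented for the example, and a production version would need real error handling.

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    // Simplified sketch: defer a cache clear until the local MySQL replica
    // has (mostly) caught up, so the cache is not refilled from stale data.
    public class ReplicationAwareCacheClearer {

        private static final int MAX_LAG_SECONDS = 2;     // invented threshold
        private static final long RETRY_SLEEP_MS = 500;   // invented retry pause
        private static final long DEADLINE_MS = 30000;    // invented upper bound

        public void clearWhenCaughtUp(Connection replica, Runnable clearCache)
                throws SQLException, InterruptedException {
            long deadline = System.currentTimeMillis() + DEADLINE_MS;
            while (secondsBehindMaster(replica) > MAX_LAG_SECONDS
                    && System.currentTimeMillis() < deadline) {
                Thread.sleep(RETRY_SLEEP_MS);  // replica still behind; wait
            }
            clearCache.run();  // replica is (nearly) current, or we gave up waiting
        }

        private long secondsBehindMaster(Connection replica) throws SQLException {
            Statement stmt = replica.createStatement();
            try {
                ResultSet rs = stmt.executeQuery("SHOW SLAVE STATUS");
                if (!rs.next()) {
                    return 0;  // not a replica; nothing to wait for
                }
                long lag = rs.getLong("Seconds_Behind_Master");
                // NULL means replication is stopped; treat as infinitely behind.
                return rs.wasNull() ? Long.MAX_VALUE : lag;
            } finally {
                stmt.close();
            }
        }
    }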


September 15, 2008 | » Comments (0)

About Local Peaks

September 01, 2008

This is a blog about technology, but with a title borrowed from an evolutionary biology concept.  I am a believer that building technology is best done by recognizing, embracing and optimizing the evolutionary forces and processes at work.  We can learn a great deal about how to build world class products and technology over time by thinking about why zebras got their stripes.

Evolution is driven by a process described as Natural Selection, whereby a set of environmental pressures forces species to favor specific mutations and adaptations over time - camels developed humps to store fat for survival in arid regions, cacti developed spines to keep would-be snackers away, and zebras got their stripes to confuse incoming predators.  These selective pressures relate not only to the survival of the individual organism; they ultimately define the entire species' ability to perpetuate in the face of those environmental pressures.

One way to think about natural selection is that it drives the species up an evolutionary hill.  There will be periods of rapid change and periods of slow change, there will be big leaps and there will be small steps, but as long as the environmental pressures keep pushing "uphill", the species will continue to climb towards the peak.

Species that reach the peak (or are getting close to it) may appear to be reaching the pinnacle of evolution.  However, it is possible that the peak they are approaching is just a local peak.  Perhaps it is just one of the smaller peaks in a mountain chain, perhaps it is a false summit with the hardest climbing yet to come, or perhaps over the course of the ascent, the environmental conditions have actually changed and the peak itself has taken a new shape. 

The local peak is superseded by higher peaks that represent a greater ability to survive and a greater proliferation of the species.  But alas, a species may be stuck on the local peak (at least for a while) without the ability to jump peaks.

With software too, this idea of local peaks must be recognized.  Are we too focused on climbing the hill in front of us to recognize the bigger opportunity in the distance?  Is there false hope that the summit we feel we are approaching is the true peak?  And perhaps the biggest question of all:  when we see we are on a local peak, can we jump our path to the higher summit?

In writing this blog, my goal is to capture some of the thoughts, ideas and lessons learned that I've had in the course of my tenure as a "web-native" CTO.  I am passionate about such things as Software as a Service, Agile development methodologies, and open-source technologies.  I will share successes and failures and in the end hope that some of the things shared here will help evolve your technical endeavors. 
 


September 01, 2008 | » Comments (0)

© 2014 All Rights Reserved by Jeff Freund