# Response:

Hello! This #WebTortoise post was written 2013-OCT-31 at 09:36 AM ET (about #WebTortoise).

## Main Points

#- Happy Halloween.

#- Use Cycle Plots to compare cyclical points (e.g. from a time-based graph) right next to each other. For example, compare the Response Time of Monday, the 28th right next to the Response Time of Monday, the 21st right next to the Response Time of Monday, the 14th right next to …

#- Chart/Graph Name: Cycle Plot. Shows: Performance, Availability and/or Reliability

## Story

Okay, here’s the situation (your parents went away on a week’s vacation?). Are looking at a traditional time-based, #WebPerf line graph showing last week’s Response Times and want to compare some type of cycle (e.g. this Monday versus last Monday versus the prior Monday, etc., or this noon hour versus the last noon hour versus the prior noon hour, etc.). However, due to the nature of the time series, are unable to do this very easily.

Enter the Cycle Plot.

A Cycle Plot is a type of line graph useful for displaying cyclical patterns. Cycle plots were first created in 1978 by William Cleveland and his colleagues at Bell Labs (Bryan Pierce, A Template for Creating Cycle Plots in Excel).

In this Webtortoise Story, will use a Cycle Plot to see the hours of the day side-by-side. Specifically, we’ll compare the Response Time for each of the 24 hours in a day, for each day of the week.
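For folks who would rather script it than chart it, the regrouping behind a Cycle Plot can be sketched in a few lines of Python. This is a minimal sketch with made-up timestamps and Response Times: bucket a time series by (hour of day, day of week), then average each bucket.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical sample: one measurement per hour for two weeks,
# slower (2200 ms) during the 09:00-23:00 "peak" hours.
start = datetime(2013, 10, 14)  # a Monday
samples = [(start + timedelta(hours=h), 1500 + 700 * (h % 24 > 8))
           for h in range(24 * 14)]

# Regroup the time series into cycle-plot order: hour of day is the
# outer cycle, day of week the inner series.
cycle = defaultdict(list)
for ts, rt in samples:
    cycle[(ts.hour, ts.weekday())].append(rt)

# Each (hour, weekday) cell holds the individual readings (the blue
# lines); the cell's mean is the orange average line.
averages = {k: sum(v) / len(v) for k, v in cycle.items()}
```

Plotting the cells hour-major puts each Monday right next to each Tuesday, etc., which is exactly the side-by-side comparison the time series made difficult.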

Straight away, notice the Response Time fluctuation, potentially an effect of peak load versus non-peak load. Going from Midnight (hour “0”) to Noon (hour “12”) and then back toward the end of the day (hour “23”), can see the Response Times rise and fall. Can also see the Response Times on SAT-SUN (weekends) are less than on MON-FRI (weekdays), further bolstering the peak versus non-peak theory.

Now, let’s take the above time-based graph and show:

– The Midnight hour of SUN next to the Midnight hour of MON next to the Midnight hour for the rest of the weekdays
– The 01:00 am hour of SUN next to the 01:00 am hour of MON next to the 01:00 am hour for the rest of the weekdays
– And so on, for each respective hour for each respective weekday

Resulting in the following Cycle Plot:

Reading the Chart: In the above chart, first notice how the breakdowns are switched. Where we were showing days below the hours, we are now showing the hours below the days. In each of the 24 hour “panels” along the X axis, the blue lines are the individual data for each day of the week, while the orange lines are the average for that same respective day.

Now let’s “Zoom In” to hopefully crystallize the chart reading.

The Midnight hour:

The 01:00 AM hour:

## Insights from the Chart

Now knowing how to read the chart, here are some initial observations:

– Take a look at the orange line averages and see how the overall Response Times definitely rise, starting at around 6-7 am
– Take a look at the orange line averages and see how the overall Response Times definitely fall, starting at around 7-8 pm
– Within each hourly panel, the first data is SUN and the last data is SAT. Can then say the weekends are faster than the weekdays (could see this in the regular time-based line graph, too, to be fair)
– The Variance/Deviation for some of those peak hours appears to be much higher than for some of those off-peak hours (just eyeballing, e.g., the 11:00 PM or Midnight hour versus, e.g., the 08:00 AM or Noon hour).

## Just For Fun

Remove the orange line averages and instead replace them with a second-order trendline. The resulting chart is:

In the above graph, the blue lines are still the individual data. Have just replaced the orange averages with black trendlines.

Notice the shape of most trendlines is either a candy cane ‘hook’ shape or a small ‘mountain top’ shape. For those trendlines not following that pattern, might investigate further to see why they are different (possible causes: maintenance windows, incidents, releases, etc.).
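The hook-versus-mountain-top distinction is really just the sign of the fitted quadratic coefficient. A hedged Python sketch (assuming NumPy is available; the SUN..SAT sample medians are invented for illustration):

```python
import numpy as np

def trend_shape(day_medians):
    """Fit a 2nd-order trendline and classify its rough shape.

    A negative quadratic coefficient is concave down (a small
    'mountain top'); a positive one is concave up (the candy cane
    'hook'), with SUN and SAT sitting at the ends of each series.
    """
    x = np.arange(len(day_medians))
    a, b, c = np.polyfit(x, day_medians, 2)  # y = a*x^2 + b*x + c
    return "mountain top" if a < 0 else "hook"

# Hypothetical SUN..SAT medians (ms) for two hourly panels.
print(trend_shape([1400, 1900, 2100, 2150, 2100, 1950, 1500]))  # mountain top
print(trend_shape([2100, 1700, 1500, 1450, 1500, 1750, 2200]))  # hook
```

A panel whose coefficient is near zero (a nearly flat trendline) would be one of those “different” shapes worth investigating.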

Fair warning: The thought to add a second-order trendline came from the previous observation of the weekends being faster than the weekdays. Recall the first data in each series was SUN and the last data in each series was SAT, so the hook/mountain top shapes reinforce this. Knowing which graph to use in which situation comes from experience and trial & error; don’t be afraid to ‘play with your graphs’.

_The following is optional reading material._

#CatchpointUser #ChartsAndDimensions #KeynoteUser #Performance #SiteSpeed #WebPerformance #Webtortoise #WebPerf #WPO #DataVis

#ExcelCyclePlot #ExcelPanelChart #ExcelManuallyCalculatingTrendlines


Hello! This #WebTortoise post was written 2013-AUG-22 at 10:57 AM ET (about #WebTortoise).

## Main Points

I had a chance to sit down with Jurgen Cito yesterday and we talked about various Web Performance “stuffs”. One of those stuffs was whether or not there was a spot for Pareto Charts in the Web Performance / WebTortoise Realm.

What do you think?

## Story

Modified Pareto Chart (number):

Modified Pareto Chart (percent):

Looking at the above chart, there are two vertical Y axes:

One of them is a count; the other is a percentage.
One of them is not-cumulative; the other is cumulative.
One of them is a LOG; the other is not a LOG.

At a glance, this chart does not present information effortlessly. It takes a little bit of effort to read and understand (see Daniel Kahneman, “Thinking, Fast and Slow”, for System 1 versus System 2). But once you do put in the effort, there is more value to be had. For example:

– 53% of the Response Times were below 1,300 ms

– 93% of the Response times were below 2,000 ms
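The cumulative-percent side of the chart boils down to counting. A small Python sketch with made-up numbers (the 53%/93% figures above came from the actual chart data, not from this sample):

```python
def pct_below(samples_ms, threshold_ms):
    """Percent of Response Times below a threshold (one ogive point)."""
    return 100.0 * sum(1 for s in samples_ms if s < threshold_ms) / len(samples_ms)

# Hypothetical Response Times in ms.
data = [800, 950, 1100, 1250, 1290, 1400, 1600, 1850, 1950, 2400]
print(pct_below(data, 1300))  # 50.0
print(pct_below(data, 2000))  # 90.0
```

Sweeping the threshold across the histogram bins gives the full cumulative curve.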

Now, I did have to go to the chart data (download link just below) to get those exact numbers.  Perhaps if we play with the chart format a bit? Add some labels (that LOG axis makes it a little trickier to even estimate the corresponding non-LOG % value, for example)?

Interesting….  So many charts…  So little time…

_The following is optional reading material._

#CatchpointUser #ChartsAndDimensions #KeynoteUser #Performance #SiteSpeed #WebPerformance #Webtortoise #WebPerf #WPO

#ExcelParetoChart #ExcelOgiveChart


Hello! This #WebTortoise post was written 2013-JUL-31 at 06:44 PM ET (about #WebTortoise).

## Main Points

#- Big Data doesn’t have to be the biggest; it has to be just Big Enough.

#- Sampling versus not sampling can affect your information either negatively or positively. For example, on one end of the spectrum, not sampling at all has the effect of missing transient blips or subtle pattern changes. Where, on the other end, sampling at an extremely low rate has the effect of being noisy, choppy or volatile.

#- Always remember Performance versus Availability. For example, the rate for your passive Performance data may be different than the rate for your active Synthetic data.

#- Nothing is perfect. Therefore, everything is imperfect.

## Story

In this Webtortoise story, going to look at the impact of sampling as it pertains to web performance measurements. Started with a week’s worth of data (no particular reason for a week’s worth; just have settled on that as a default time period), totaling @ 42K test samples (@ 250 per hour).

In this below chart 1, we’re looking at both a Median and Arithmetic Mean Average Performance chart calculated using all of the 250 Synthetic Test Samples per hour. Nice and smooth… Can see some fluctuation during peak vs non-peak… Arithmetic Mean versus Median is not too large of a delta… All in all, not a bad looking chart.

Now compare with this below chart 2, except we are randomly selecting 50 test samples per hour from the same data set to plot.

Now compare with this below chart 3, except we are randomly selecting 10 test samples per hour from the same data set to plot.

Last, now compare with this below chart 4, except we’ve applied a basic data smoother to the “10 samples per hour” chart.

Then put chart 1 and chart 4 side-by-side! If the chart titles were removed, would you be able to pick the one at 250 test samples per hour versus at 10 test samples per hour?
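The two operations behind charts 2-4 are easy to sketch. Here is a rough Python version (helper names and sample data invented for illustration): randomly downsample each hour's bucket, take the hourly medians, and apply a basic centered moving-average smoother.

```python
import random

def downsample(per_hour_buckets, k, seed=0):
    """Randomly keep k test samples from each hour's bucket."""
    rng = random.Random(seed)
    return [rng.sample(bucket, k) for bucket in per_hour_buckets]

def hourly_medians(per_hour_buckets):
    """Median Response Time per hourly bucket (one chart point each)."""
    meds = []
    for bucket in per_hour_buckets:
        s, n = sorted(bucket), len(bucket)
        meds.append(s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2)
    return meds

def smooth(series, window=3):
    """Basic centered moving-average data smoother."""
    half = window // 2
    out = []
    for i in range(len(series)):
        win = series[max(0, i - half):i + half + 1]
        out.append(sum(win) / len(win))
    return out
```

Running `smooth(hourly_medians(downsample(buckets, 10)))` over a week of buckets is, in spirit, how chart 4 was produced from the full data set.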

_The following is optional reading material._

#CatchpointUser #ChartsAndDimensions #KeynoteUser #Performance #SiteSpeed #WebPerformance #Webtortoise #WebPerf #WPO

#Sampling #DataSmoothing #Statistics


Hello! This #WebTortoise post was written 2013-JUN-12 at 02:45 PM ET (about #WebTortoise).

## Main Points

#- Use panel charts for certain web performance data to make reading them easier.
#- Will need to manually calculate trend lines for panel chart data as the Excel built-in trend line function(s) ‘spans panels’.
#- Excel Line Chart, Excel Panel Chart

## Story

In Webtortoise World, are constantly looking at Performance and Availability data by various dimensions. Can look at data by Host (Performance of Host1 vs Host2 vs etc), by ISP (Performance of Verizon vs L3 vs etc), by Geography (Performance of East Coast vs West Coast vs etc) and so on. The problem is it can be tough to discern and understand when there is a lot of data on a single chart.

Enter panel charts.

Panel charts take multiple chart data and split them into separate ‘panels’, while still being on a single chart (the illusion is of multiple charts, though). These panel charts are generally shown side-by-side and have the benefit of using the same axes (by default).

A non-panel chart:

This first chart is not a particularly atypical chart type. It shows respective median Response Times, for a 24-hour period (2013-JUN-06, to be exact), by ISP. It could be Synthetic data, could be RUM data or could be data from any other instrumental ‘ruler’. It is not important that this Breakdown is by ISP (the Breakdown could be anything, really). What IS important is to concede how tough it is to read the individual measurements. Who’s the worst performer? Who’s the best performer? Are they all following the same trend?

The panel chart:

Have taken the above line chart and made it into this below panel chart, with each of the respective ISPs’ Performance data in their own panel. With [intentionally] no additional formatting, can more easily read the information (no additional formatting was applied so as to compare only the layout change; the purpose is to convey the value of the panel chart on its own merit).

The panel chart with additional formatting:

One thing would not ever attempt to do with the first chart is add a trend line for each series! Shudder to think how much of a hot mess that’d have been! With the panel chart, though, adding trend lines makes the chart even more valuable (in this case, added 2nd-order Polynomial trend lines, as might expect a Performance pattern to be similar to the peak traffic pattern). Note that adding trend lines to each panel means manually calculating the trend line for each series (see article link in the below optional section), as Excel’s built-in trend line capability ‘spans panels’.
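Outside of Excel, “manually calculating the trend line for each series” might look like the following Python sketch (assuming NumPy; the ISP data is invented). The point is that each panel's series is fit on its own local X range, so no trendline spans panels.

```python
import numpy as np

def panel_trendlines(panels, order=2):
    """Fit a separate polynomial trendline per panel, so no single
    line 'spans panels' the way a naive all-series fit would."""
    fitted = {}
    for name, ys in panels.items():
        x = np.arange(len(ys))
        coeffs = np.polyfit(x, ys, order)
        fitted[name] = np.polyval(coeffs, x)  # trendline y-values to plot
    return fitted

# Hypothetical hourly medians (ms) for two ISP panels.
panels = {
    "ISP 1": [1200, 1300, 1500, 1400, 1250],
    "ISP 2": [900, 950, 1000, 980, 930],
}
lines = panel_trendlines(panels)
```

Each entry in `lines` is then plotted inside its own panel, which is the same thing the manually-calculated Excel trendlines accomplish.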

Let’s go back and answer some of those initial questions:

Q: Who’s the worst performer?
A: ISP 7 is clearly the worst performer.

Q: Who’s the best performer?
A: ISPs 4 and 6 are neck-and-neck for ‘the best performer’.

Q: Are they all following the same trend?
A: No! While each other ISP has a slight Performance degradation during peak traffic, ISP 5 is actually trending down!

_The following is optional reading material._

Manually Calculate Trendlines in Excel http://www.slideshare.net/ksatyamahesh/computing-trendline-values-in-excel

PeltierTech Article on Panel Charts http://peltiertech.com/Excel/ChartsHowTo/PanelChart1.html

#CatchpointUser #ChartsAndDimensions #KeynoteUser #Performance #SiteSpeed #WebPerformance #Webtortoise #WebPerf #WPO
#ExcelManuallyCalculatingTrendline #ExcelPanelChart #ExcelTrellisChart #SmallMultiples


Hello! This #WebTortoise post was written 2013-MAY-16 at 11:34 AM ET (about #WebTortoise).

## Main Points

#- Analyze Availability by various Dimensions, e.g. Hour of Day or Minute of Hour, to look for patterns.

#- Performance implies Availability. Performance may be measured if and only if Availability = 1 (your choices are either 1 or 0; either something’s available or something’s not available).

#- We monitor Availability; we measure Performance.

#- Don’t go it alone. When working to uncover patterns in your Availability and Performance data, will need the help of others in the Organization.

## Story

I thought I’d break away from the normal second-and-third-person writing style of Webtortoise to write this more intimate, first-person post. Lately, I’ve been feeling bad for my buddy, Availability (In this Webtortoise Story, my buddy’s name is, “Availability”). You see, Availability’s cousin, Performance, has been getting all of the limelight. I mean, don’t get me wrong, Performance IS sleek and sexy while Availability IS binary and boring, but the only reason we’re able to talk about all these advancements in Performance is because of their JOINT efforts!

So much attention has been given to Performance lately that I am seeing more and more folks forget, or casually glaze over, Availability! The problem here is: Performance _implies_ Availability. That is, if it’s not available, then you cannot measure it [for Performance].

So please, help me spread the word and remind folks to never forget about their ol’ buddy and friend, “Availability”.

And now, your obligatory Webtortoise chart:

In this chart, we counted the number of Availability strikes (a.k.a. errors) for several days. Then we plotted the COUNT by Minute of Hour.
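The counting itself is simple to sketch in Python (the error timestamps below are hypothetical): collapse the time dimension and tally strikes by Minute of Hour.

```python
from collections import Counter
from datetime import datetime

# Hypothetical error (Availability strike) timestamps over several days.
errors = [
    datetime(2013, 5, 13, 3, 15), datetime(2013, 5, 13, 9, 15),
    datetime(2013, 5, 14, 1, 15), datetime(2013, 5, 14, 6, 45),
    datetime(2013, 5, 15, 11, 45),
]

# Collapse the time dimension: tally strikes by Minute of Hour.
by_minute = Counter(ts.minute for ts in errors)
print(by_minute.most_common(2))  # [(15, 3), (45, 2)]
```

Swapping `ts.minute` for `ts.hour` or `ts.weekday()` gives the other non-time-based dimensions mentioned in the Main Points.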

In this first chart, there is no special formatting. But can still see some high error counts.

In this second chart, have highlighted and called out the discovered pattern! At first, the guessed pattern was incorrect because was trying to find a *single* pattern. However, after pulling in some more people, was able to figure out there were *multiple* patterns.

In this specific case, these patterns were caused by *two separate* log-shipping jobs, across two different subsystems, affecting page load (i.e. the page was not available)! And had it not been for a Performance Management Program, may never have discovered these Patterns!

_The following is optional reading material._

#CatchpointUser #KeynoteUser #Webtortoise #Performance #WebPerformance #SiteSpeed #ChartsAndDimensions #Availability


Hello! This #WebTortoise post was written 2013-APR-30 at 09:35 AM ET (about #WebTortoise).

## Main Points

#- Here’s to the statisticians of the world!

## Story

– Why did the statistician cross the road?
— He wasn’t sure.

– A statistician can have his head in an oven and his feet in ice, and he will say that on the average he feels fine (http://math.bnu.edu.cn/~chj/Statjokes.htm).

– A new government 10 year survey costing \$3,000,000,000 revealed 3/4 of the people in America make up 75% of the population (http://www.ahajokes.com/m027.html).

– According to recent surveys, 51% of the people are in the majority (http://www.ahajokes.com/m027.html).

– Statistics play an important role in genetics. For instance, statistics prove that numbers of offspring is an inherited trait. If your parents didn’t have any kids, odds are you won’t either (One passed by Gary Ramseyer, taken from http://stats.stackexchange.com/questions/1337/statistics-jokes).

– Final Exam: A statistics major was completely hung over the day of his final exam. It was a true/false test, so he decided to flip a coin for the answers. The statistics professor watched the student the entire two hours as he was flipping the coin… writing the answer… flipping the coin… writing the answer. At the end of the two hours, everyone else had left the final except for the one student. The professor walks up to his desk and interrupts the student, saying, “Listen, I have seen that you did not study for this statistics test, you didn’t even open the exam. If you are just flipping a coin for your answer, what is taking you so long?”
The student replies bitterly (as he is still flipping the coin), “Shhh! I am checking my answers!” (http://math.bnu.edu.cn/~chj/Statjokes.htm)

– Statistics are like a bikini. What they reveal is suggestive, but what they conceal is vital (Aaron Levenstein, taken from http://www.workjoke.com/statisticians-jokes.html).

_The following is optional reading material._

#CatchpointUser #KeynoteUser #Webtortoise #Performance #WebPerformance #SiteSpeed #ChartsAndDimensions

#StatisticsJokes


Hello! This #WebTortoise post was written 2013-MAR-31 at 09:35 PM ET (about #WebTortoise).

## Main Points

Question: How do I tell if the Response Time of my website is affected by traffic load (e.g. peak versus non-peak)?

Answer: Use an Hour of Day chart to correlate whether or not web traffic load affects Response Times. These non-time-based dimension charts allow you to aggregate data over more than one day if, for example, you wanted to look at several days/weeks/etc., but without having to plot several data in a time series.

A traditional time-based line chart may very well answer the asked question. However, at times, it may be easier, or necessary, to look at long periods of time by Hour of Day, especially if there are subtleties to discover. In these examples, are looking at three months of data.

## Story

Consider the following two statements, which convey the same idea of change each in a different way.

`ABSOLUTE:  Our sales went from \$1 last year to \$2 this year!`
`RELATIVE:  Our sales increased 100% year-over-year!`

In this Webtortoise post, will look at Response Times, as they vary through the day, in both Absolute (chart 2) and in Relative (chart 3) terms.  The effect of saying the same thing in a different way may be more profound, but must “remember to remember” the context of the overall picture.

Hour of Day charts, similar to Day of Week, Minute of Hour or other non-time-based charts are powerful ways to analyze the Performance and Availability data of your website. Was asked this question and, in researching, discovered a particular page performing worse than intended, especially compared to another like page on The Company’s site.

This first chart shows the average number of hits (for a 3-month period).

This second chart shows the Response Times for two pages on The Company’s site (for the same 3-month period).

This third chart shows the Response Times for the same two pages as in Chart 2. In this chart, however, the Response Times have been converted to percentages to make them relative on the same scale.

This fourth chart shows all three chart series in one location, with the # Visits on the Primary Axis and the Response Times on the Secondary Axis. Fair warning: this chart is misrepresentative because the Primary and Secondary Axis labeling was [intentionally] removed to avoid confusion.

Now, let’s talk about the second and third charts for a moment. Because Page 1 and Page 2 (on the second chart) are on the same Y axis, it was not so easy to see Page 1 performing substantially worse during peak traffic. However, when changed to a relative % in the third chart, was more easily able to see the Performance delta.
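The Absolute-to-Relative conversion behind the third chart can be sketched as follows. The hourly medians are hypothetical, and scaling each page to a percent of its own mean is one reasonable choice among several (percent of daily minimum or maximum would also work):

```python
def to_relative(series):
    """Express each hourly value as a percent of the series' own mean,
    so pages with very different absolute times share one scale."""
    mean = sum(series) / len(series)
    return [100.0 * v / mean for v in series]

# Hypothetical hourly median Response Times (ms) for two pages.
page1 = [1000, 1200, 2000, 1800]   # slow page, big peak swing
page2 = [300, 310, 380, 350]       # fast page, mild swing

rel1, rel2 = to_relative(page1), to_relative(page2)
```

On the relative scale, both series hover around 100%, so the size of each page's peak-hour swing can be compared directly even though the absolute times differ by a factor of five.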

_The following is optional reading material._

#CatchpointUser #KeynoteUser #GomezUser #Webtortoise #Performance #WebPerformance

#ChartDimensions #HourOfDay #MinuteOfHour #DayOfWeek #Percentile #Histogram


Hello! This #WebTortoise post was written 2013-FEB-28 at 06:15 PM ET (about #WebTortoise).

## Main Points

#- Consider the instrumentation of different Performance measurement tools before looking at their respective measurement data.

#- Measure web assets (e.g. websites, pages and/or apps) as an output of many different inputs (In Webtortoise World, we are talking about Real User Measurements (“RUM”) and Synthetic Measurements). Use these external, outside-in measurements to complement what is done internally.

#- The Response Times of the different Performance measurements are relative to a number of factors (e.g. distance, geography, browser cache, versions, infrastructure, application, ISP, CDN). These factors may also be different for each web asset.

## Story

In this Webtortoise post, will be looking at the various Response Times of the Ask.com homepage (Thank you, Ask.com). Have chosen this page because:

01. The URL http://www.ask.com/ was easy enough to measure Synthetically and RUMally (is that a word?);

02. It has a good mix of both first-party and third-party asset/object calls; and

03. It has a good mix of both cacheable and non-cacheable asset/object calls.

Screenshot of the Ask.com homepage (2012-DEC-05):

In this post, the RUM data comes from Google Analytics and the Synthetic data comes from Catchpoint (thank you Google and Catchpoint). The RUM settings have been filtered to Geography=United States and Browser=Internet Explorer. Have also taken the metric ‘names’ directly from each provider, so folks may reference respective definitions themselves.

This first chart is showing [RUM: ‘Page Load Time’] metric and [Synthetic: full ‘Webpage Response’] metric:

Should not be surprised to see the RUM Response Times are higher than Synthetic Response Times. Was curious, though, why the RUM times on occasion dipped below the Synthetic times. After looking around, found GeoDB to be the culprit.

This second chart is showing [RUM: ‘Server Response’] metric and [Synthetic: ‘Server Response’] metric:

Was a bit surprised the RUM times here were lower than the Synthetic times. After looking around, discovered the RUM ‘Server Response Time’ did not include redirect or connect times, whereas the Synthetic ‘Server Response’ did.

When looking at these charts, one could almost remove the Y axis values and look at the lines by themselves. Did the next value in the series increase, decrease or remain the same versus the previous value? If there was a change, was it sustained or was it transient?

Here’s where to consider the instrumentation of your Performance measurements, to figure out what may cause the hills and valleys. Remember, “If you do not measure Performance, then Performance will not be measured”. May or may not always be able to tell why the Response Times change, but that’s part of the fun!

_The following is optional reading material._

#CatchpointUser #KeynoteUser #GomezUser #Webtortoise #Performance #WebPerformance #SiteSpeed

#RealUserMeasurements #RUM #SyntheticTests


Hello! This #WebTortoise post was written 2013-FEB-14 at 02:30 PM ET (about #WebTortoise).

## Main Points

#- Various monitors and measurements can help assure Quality; Use them in creative ways.

#- The question, “What Time Is It?” is relative. So have a little fun with it.

## Story

When discussing Synthetic Test Runs or Real User Measurements, are often referring to either monitoring Availability or to measuring Performance (see, “Availability versus Performance“). These attributes are very powerful, valuable data on their own, but they may also feed into [things like] quality.

In this Web Tortoise Story, asked the question, “What Time Is It?” of a handful of large websites. The catch: the question was asked from Catchpoint’s US-based Synthetic Node network, and can see that geography-based web services are not perfect!

Note in each of these below examples, a different website was used.

– Asked from a Synthetic Node in Atlanta, GA.

– Asked from a Synthetic Node in New York City.

– Asked from a Synthetic Node in Washington, DC.

– Asked from a Synthetic Node in Los Angeles, CA.

Additionally, some Synthetic Nodes were redirected to other countries! And still other Synthetic Nodes didn’t get any time at all (instead, they were given links to other sites giving the time)!

Now, the example of “What Time Is It” may not be the best practical example, but the underlying principle is paramount. That is, when used in creative ways, your various monitors and measurements may give you more than just Availability or Performance data.

Next up in the #CreativeUses series: Image Search and DNS Takeovers.

_The following is optional reading material._

#CatchpointUser #KeynoteUser #GomezUser #Webtortoise #Performance #WebPerformance

#CreativeUses #WhatTimeIsIt #AintNobodyGotTimeForThat


Hello! This #WebTortoise post was written 2013-JAN-31 at 09:06 PM ET (about #WebTortoise).

## Main Points

#- An Arithmetic Mean will, for all intents and purposes in WebTortoise World, result in a higher value than its Geometric Mean counterpart. Relative to “faster is better” in web performance, might say an Arithmetic Mean is a pessimistic calculation.

#- A Geometric Mean will, for all intents and purposes in WebTortoise World, result in a lower value than its Arithmetic Mean counterpart. Relative to “faster is better” in web performance, might say a Geometric Mean is an optimistic calculation.

#- Define: What is a Percentile?

## Story

Had an opportunity to discuss which statistical calculation should be used when looking at Performance charts. The discussion summary goes something like this.

First, assume consideration for a central-tendency calculation. Then:

If, in fact, looking for spurious outliers, consider plotting the Arithmetic Mean average.

Otherwise, consider plotting either the Geometric Mean or the Median, as they are very good central-tendency calculations.
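Those three calculations are easy to compare in a few lines of Python. The response times below are invented, with one spurious outlier thrown in to make the point:

```python
import math

def arithmetic_mean(xs):
    return sum(xs) / len(xs)

def geometric_mean(xs):
    # Summing logs avoids overflow and matches Excel's GEOMEAN.
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

def median(xs):
    s, n = sorted(xs), len(xs)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

# Hypothetical right-skewed Response Times (ms) with one spurious outlier.
rts = [1500, 1600, 1700, 1800, 2000, 2200, 14000]
print(round(arithmetic_mean(rts)))  # 3543 (pulled up by the outlier)
print(round(geometric_mean(rts)))   # lands between the Median and the Arithmetic Mean
print(median(rts))                  # 1800
```

The single 14,000 ms run drags the Arithmetic Mean far above the band where most measurements live, while the Geometric Mean and Median stay close to it, which is exactly the pessimistic-versus-optimistic distinction in the Main Points.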

To start, see this XY scatter plot taken from a day’s worth of synthetic test runs. In this Story, are using data from Catchpoint’s US node network (Thank you, Catchpoint), measuring @ 3,500 times a day (about 145 per hour). Intentionally chose this webpage as it contained a third-party ad network having particular host issues (the waterfall data was invaluable for troubleshooting, but that’s a Story for another day).

Eyeballing the chart, notice the thick band of majority data is less than 5,000 ms (right around 1,500 – 3,000 ms), with thinner pockets and bands throughout. Also notice that between 10:00 AM – 02:00 PM, there were no measurements higher than around 14,000 ms.

Second, will take the above XY scatter plot and draw a bar graph representing the middle 25th-75th percentile range (See, “What is a Percentile”). The idea here is to show a middle range (which might better represent overall Performance) versus just a single line (which can sometimes ‘lie’ or misrepresent).
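As a refresher on “What is a Percentile”, here is a nearest-rank sketch in Python. Note this is one of several common percentile definitions (Excel, for example, interpolates between ranks), so results may differ slightly from spreadsheet output:

```python
import math

def percentile(xs, p):
    """Nearest-rank percentile: the smallest value with at least p%
    of the samples at or below it."""
    s = sorted(xs)
    rank = max(1, math.ceil(p / 100.0 * len(s)))
    return s[rank - 1]

# Hypothetical ms values 1..100, so the answers are easy to eyeball.
data = list(range(1, 101))
middle = (percentile(data, 25), percentile(data, 75))
print(middle)  # (25, 75)
```

The bar drawn for each hour in the chart spans exactly this 25th-to-75th range for that hour's measurements.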

Third, using the same data from the XY scatter plot, overlay line charts showing respective Arithmetic Mean, Geometric Mean and Median calculations.

The critical thing to notice is the height of the Arithmetic Mean (Y axis) versus either the Geometric Mean or the Median. Notice how the Arithmetic Mean is, at times, either very near the upper limit of the middle range or, in some cases, even above it! Now notice the Geometric Mean and Median are always comfortably within the middle range.

Other:

Notice the 12:00 AM and 07:00 AM hours’ Arithmetic Means are above the Middle Range. Now, quickly glance back at the XY scatter plot to see the measurement data.

Notice the middle range for the 02:00 PM and 03:00 PM hours is smaller than for other hours. Glancing back at the XY scatter plot, can see the thick band of measurement data is more tightly packed.

Last, want to give a fair warning when looking at these types of charts: the amount of data will generally affect the height and patterns of the lines and bars. Do not be caught off guard if, for example, the Arithmetic Mean average is always above your middle range; this is a function of the amount of data.

_The following is optional reading material._

#CatchpointUser #KeynoteUser #GomezUser #Webtortoise #Performance #WebPerformance

#ExcelStatistics #ExcelXYScatter #ArithmeticMean #GeometricMean #Median
