Quant Basics 3: Sharpe and Drawdown



August 19, 2017 · Quant Basics

Sharpe Ratio

In the previous sections of Quant Basics we looked at producing data sources and at writing a vectorised backtest. We also calculated our first metric, PnL, and tested its functionality. In this section we add two more metrics that are very important for strategy evaluation: the Sharpe ratio and drawdown. Let's start with the Sharpe ratio, also called the risk-adjusted return. Basically, we divide the mean of our returns by their standard deviation (the risk). So if we have two PnL series with the same final PnL, and in one all the money is made in a single time step whereas the other is a consistently upward-trending PnL curve, the latter gives a much better Sharpe because the standard deviation of its returns is much lower. In other words, we get the same return for much less risk, since many small events contribute to the overall PnL rather than one big step. If that big jump does not happen, or even goes the other way, we are in trouble.
Let's have a look at the code for the calculation. We already produced a PnL curve in the previous section, and that is what we will use now.

import numpy as np

def calc_sharpe(pnl):
    # absolute daily returns from the PnL curve
    retsx = np.diff(pnl)
    # drop infinite values before averaging
    retsx = retsx[~np.isinf(retsx)]
    # annualise assuming 252 trading days per year
    sharpe = np.nanmean(retsx)/np.nanstd(retsx)*np.sqrt(252)
    return sharpe

In this code snippet we calculate the Sharpe based on absolute returns. This is chosen over relative returns to avoid inflating the Sharpe through very small initial PnL values; the factor of sqrt(252) annualises the ratio, assuming 252 trading days per year. Note that the Sharpe ratio is only a good metric for comparing similar strategies. For strategies that differ, especially in their time granularity, the Sharpe ratio can give very skewed results.
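To make the comparison from the introduction concrete, here is a small sketch (both series are invented for illustration) that applies the same calculation to two PnL curves with identical final PnL: a steady climb with a little noise, and a flat line with one big jump at the end.

```python
import numpy as np

# calc_sharpe from above, repeated so the sketch runs standalone
def calc_sharpe(pnl):
    retsx = np.diff(pnl)
    retsx = retsx[~np.isinf(retsx)]
    return np.nanmean(retsx) / np.nanstd(retsx) * np.sqrt(252)

# two curves with the same final PnL of roughly 100:
# a steady climb (small noise keeps the return std non-zero),
# and a single jump of 100 on the last step
rng = np.random.default_rng(0)
steady = np.arange(101.0) + rng.normal(0, 0.01, 101)
jump = np.concatenate([np.zeros(100), [100.0]])

print(calc_sharpe(steady))  # large: many small, consistent steps
print(calc_sharpe(jump))    # small: one big step dominates the std
```

Both curves end at the same place, but the steady one earns a far higher Sharpe because its return variance is tiny.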

Another important fact to take into account when using the Sharpe is entropy. Normally, we assume that our price data are shuffled thoroughly, i.e. they have high entropy. However, what happens if this is not the case? In the extreme case, our returns would be sorted from the smallest to the largest. Let's do this and compare the normal (shuffled) case with the sorted case using the code below:

import numpy as np
import pylab as plt

k = np.random.randn(1000) + 0.08
p = np.cumsum(k) + 100
p_sorted = np.cumsum(np.sort(k)) + 100
# same returns, same Sharpe -- only the order differs
print(np.mean(k)/np.std(k)*np.sqrt(252))
plt.plot(p)         # shuffled case (blue)
plt.plot(p_sorted)  # sorted case (green)
plt.show()

The result looks like this, where the blue line is the shuffled case and the green line the sorted case:

[Figure: cumulative PnL of the shuffled (blue) and sorted (green) return series]
Both have the same Sharpe ratio of 1.95 and the same final PnL. In the sorted case, however, the strategy would blow up very quickly. This is, of course, a very artificial example, but it shows graphically that the metric has its limitations, and it is important to be aware of them.


Drawdown

The example above shows clearly that we need additional metrics to help us characterise a strategy. A very important one is drawdown: the loss incurred since the strategy last reached a new high point in PnL (the high-water mark). Drawdown can be calculated in relative or absolute terms. In the example below we calculate it in absolute terms, in order to be consistent with later sections.

def calc_ddwn(pnl):
    hwm = 0                   # high-water mark seen so far
    ddwn = []
    for i in pnl:
        if i > hwm:           # new high point reached
            hwm = i
        ddwn.append(i - hwm)  # distance below the high-water mark
    return np.nanmin(ddwn)    # most negative value = maximum drawdown
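As a quick sanity check, here is the function applied to a hand-picked toy series (the numbers are invented for illustration): the curve climbs to 8, drops to 2, then recovers, so the maximum drawdown should be -6.

```python
import numpy as np

# calc_ddwn from above, repeated so the sketch runs standalone
def calc_ddwn(pnl):
    hwm = 0
    ddwn = []
    for i in pnl:
        if i > hwm:
            hwm = i
        ddwn.append(i - hwm)
    return np.nanmin(ddwn)

# high-water marks are 0, 5, 5, 8, 8, 9; the worst gap is 2 - 8 = -6
pnl = [0, 5, 3, 8, 2, 9]
print(calc_ddwn(pnl))  # -6
```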

Looking at the code you will note that it contains a loop, unlike the vectorised examples for Sharpe and PnL presented earlier. If we were to vectorise this we would need a rolling maximum function. This could be built with a Pandas data frame, but in this case I leave you with the looped form for clarity.
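For the curious, a vectorised version is possible in plain NumPy as well: `np.maximum.accumulate` computes the running maximum directly. This is a sketch, not part of the original code base; the clip at zero mirrors the looped version, whose high-water mark starts at zero.

```python
import numpy as np

def calc_ddwn_vec(pnl):
    # running maximum of the PnL curve, floored at 0 to match the
    # looped version's initial high-water mark of zero
    hwm = np.maximum(np.maximum.accumulate(pnl), 0)
    # most negative distance below the high-water mark
    return np.min(pnl - hwm)
```

On the same input, this agrees with the looped `calc_ddwn` while avoiding the explicit Python loop.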

Let's test the function:

def test_ddwn():
    k = np.random.randn(1000)
    pnl = np.cumsum(k)
    ddwn = calc_ddwn(pnl)
    plt.plot(pnl)
    plt.title('drawdown: %s' % ddwn)
    plt.show()

Running this on our random price data we get the following plot:

[Figure: random-walk PnL curve, titled with its maximum drawdown]

We note a drawdown of approximately -46 dollars, which the graph confirms.

Now we have three metrics to characterise our strategies: PnL, Sharpe ratio and drawdown. In the next section we bring it all together by running and analysing a single backtest, which sets the scene for running parameter sweeps with hundreds or thousands of parameter sets for this strategy. This will be shown in the following sections.

The code base for this section can be found on Github.