Saturday, December 29, 2012

Mind Over Money documentary



Interesting documentary around financial auctions and associated economic theories.
They attempted to replicate the famous $100 Bill Auction (http://matdays.blogspot.co.nz/2010/03/about-100-bill-auction.html) at the beginning.

Monday, December 24, 2012

Excel data analysis 6.0 (Buy Low/Sell High simulation)

If we bought a unit of an index at the end of each down day, and sold on an up day, how would it do?
In this post we look at the raw EV (statistical expected value) of this idea in Excel, in a theoretical trade where we toss all risk management principles out the window.

Buy Low/Sell High -> short gamma

This is an implicitly short gamma trade, since positions accumulate with each unfavorable move, increasing risk. The payoff is therefore concave, and significant risk management effort is required to implement it.

Historical data

We will look at IWM (iShares Russell 2000 Index) daily prices for the period 5/26/00 - 12/21/12. So after sorting the adjusted closing prices, we work out the close to close log returns.

Sample stats:
















Trade signals
We can create the next column for buy/sell signals based on the latest dIWM. Here I used a threshold of 0.09%, the median dIWM of the past 12 years; i.e. if the latest dIWM is above 0.09% we sell a unit of IWM, and vice versa, where buy signal = 1 and sell signal = -1.
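The signal rule above can be sketched in Python. This is a hedged illustration: the function name and sample returns are hypothetical, and the 0.09% threshold is the median from the post (defaulting to the sample median when none is given).

```python
from statistics import median

def signals(log_returns, threshold=None):
    """Buy (1) after a down day, sell (-1) after an up day,
    relative to a threshold (defaults to the sample median return)."""
    if threshold is None:
        threshold = median(log_returns)
    return [1 if r <= threshold else -1 for r in log_returns]

# hypothetical daily log returns (%)
rets = [0.5, -1.2, 0.3, -0.4, 1.1]
print(signals(rets, threshold=0.09))  # sell after up days, buy after down days
```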


Buy/Sell price simulations
This is pretty straightforward: if the signal = 1, the "Buys" column gets the latest price; if not, it's left blank. Same for the "Sells" column.


From the average prices, we can see that this concept yielded an average of $0.72/share per round trip, before transaction costs. OK, so it has been +EV.

What about the risks of position accumulation?

Position simulation

I created a column "Position", and it's simply an accumulation of the signals.
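The "Position" column is just a running sum of the signals, which a cumulative-sum sketch makes explicit (function name hypothetical):

```python
from itertools import accumulate

def positions(signals):
    """Running position = cumulative sum of +1/-1 signals,
    mirroring the Excel 'Position' column."""
    return list(accumulate(signals))

sigs = [1, 1, -1, 1, 1, -1]
print(positions(sigs))  # position held after each signal
```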





We can graph it against the date:

So from this we can see that some trades require more work around risk management to implement in a practical manner.

Saturday, December 22, 2012

S&P500 vs. Commodities (Oct. 2007 -> Dec. 2012)

Here we look at how the S&P500 stock index has done over the past 5 years compared to a few commodities whose prices track the actual, i.e. real, rate of inflation closely (alternate inflation charts at Shadowstats). It gives a basic summary of how the buy & hold crowd has fared after adjusting for the real rate of inflation.

Data applied
All the data are from Yahoo Finance. We have the following commodity ETFs: GLD (SPDR Gold Shares), USO (United States Oil Fund), and DBA (PowerShares DB Agriculture Fund), against the index ETF SPY (SPDR S&P500). I pulled end-of-day closing values of these ETFs from 1/5/07 to 12/21/12.

Cumulative Log Returns


We can see that both agricultural commodities and gold have outperformed the S&P500, which implies that stock indices are not likely to outpace the actual rate of inflation, making buy & hold not all that attractive.
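The cumulative log-return series behind a chart like this can be sketched as follows (the prices are hypothetical placeholders, not actual SPY data):

```python
import math
from itertools import accumulate

def cumulative_log_returns(closes):
    """Cumulative log-return series (in %) from a closing-price series."""
    rets = [100 * math.log(b / a) for a, b in zip(closes, closes[1:])]
    return list(accumulate(rets))

# hypothetical closing prices
spy = [100, 102, 101, 105]
print(cumulative_log_returns(spy))
```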

Daily log-return sample stats for the period, 1/5/07 to 12/21/12


 

Wednesday, December 19, 2012

Option delta off expected volatility (in Excel)

I think many of us have used the default delta values off implied volatility, and noticed that they can be off significantly when implied is way out of line with realized vol. This error makes the P&L of individual trades path dependent, and frequently hurts long volatility trades as delta becomes understated. A way to reduce this error is to input realized vol into the delta calculation, or one step better, expected realized vol.

Main advantage

The increased hedging accuracy lowers the volatility of individual trade returns, i.e. path dependence. It all goes toward achieving that positive EV:

EV = BSM(Realized Vol) - BSM(Implied) - transaction/hedging costs - implementation shortfalls
Where BSM() = Black Scholes Merton option valuation model.

Deriving delta


S -> Underlying value
K -> Strike price
r -> “Risk free” rate of return (often taken off the 10-year Treasury)
V -> Expected realized volatility
T -> Option life with respect to year

We need to solve for d1 from the Black Scholes Merton model:

d1 = [ ln(S/K) + (r + V^2/2)*T ] / (V*sqrt(T))

Then we can derive the “correct deltas”,
delta(Call) = N(d1)
delta(Put) = N(d1) – 1
where N() = the cumulative distribution function of the standard normal distribution



Doing it in excel
So I used the latest SPY values for the example, with an arbitrary interest rate of 0.5% and realized vol of 12% (I didn't bother with a forecast value since that's already covered in the earlier post).
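As a cross-check outside Excel, here is a minimal Python sketch of the same calculation. The SPY level of 142 and 30-day maturity are illustrative assumptions, and the put delta is quoted as N(d1) - 1 (negative), the standard convention:

```python
import math

def bsm_d1(S, K, r, vol, T):
    """d1 from the Black-Scholes-Merton model; r and vol as decimals, T in years."""
    return (math.log(S / K) + (r + 0.5 * vol**2) * T) / (vol * math.sqrt(T))

def norm_cdf(x):
    """Standard normal CDF via the error function (no SciPy needed)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def deltas(S, K, r, vol, T):
    """Return (call delta, put delta) off the supplied (expected realized) vol."""
    d1 = bsm_d1(S, K, r, vol, T)
    return norm_cdf(d1), norm_cdf(d1) - 1.0

# illustrative inputs echoing the post: r = 0.5%, realized vol = 12%
call_d, put_d = deltas(S=142.0, K=142.0, r=0.005, vol=0.12, T=30 / 365)
print(round(call_d, 4), round(put_d, 4))
```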

Formulas:

We end up with



Tuesday, December 18, 2012

Top book list

Saturday, December 15, 2012

Delta hedged options, return estimate

So if we have a crude plan of delta hedging a long/short option position once a day, we'd like to know, as accurately as possible, how far the underlying must move for us to make/lose money. Since gamma is associated with variance and is the derivative of delta, like acceleration off velocity, the return on a delta hedged option position resembles the distance formula from mechanics.

Distance formula: 
distance = (initial velocity)*(time) + (1/2)*(acceleration)*(time)^2

To translate this into a delta hedged options position

initial velocity -> delta
acceleration -> gamma
time -> change in the underlying

P&L of delta-hedged options

Translating the distance formula term by term gives the familiar approximation:

P&L = delta*(dS) + (1/2)*gamma*(dS)^2, where dS = change in the underlying

For a position hedged once a day, the delta term is neutralized at each hedge, so the gamma term (less theta decay) drives the hedged P&L between hedges.
With a volatility forecast, the above gives us a pretty good idea of the position's EV (expected value).
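A minimal sketch of the analogy in code, with a breakeven-move helper under the assumption that the daily theta bill must be paid for by the gamma term once the delta is hedged away (all greeks here are hypothetical):

```python
import math

def option_move_pnl(delta, gamma, dS):
    """P&L of an option for a move dS, per the distance-formula analogy:
    delta plays velocity, gamma plays acceleration, dS plays time."""
    return delta * dS + 0.5 * gamma * dS**2

def breakeven_move(gamma, theta_per_day):
    """For a once-a-day delta-hedged position the delta term is hedged away,
    so breakeven requires 0.5 * gamma * dS^2 = |theta|."""
    return math.sqrt(2.0 * abs(theta_per_day) / gamma)

# hypothetical greeks
print(option_move_pnl(delta=0.5, gamma=0.05, dS=2.0))
print(breakeven_move(gamma=0.05, theta_per_day=0.04))
```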

Wednesday, December 12, 2012

+EV +EV +EV !

Interesting TED talk on achieving positive EV (Expected Value).

Dylan Evans - "What can we learn from expert gamblers?"

Saturday, December 8, 2012

Excel data analysis 5.1 (Realized Variance, Future Return Regression)

So in the last post (Excel Data Analysis 5.0) we looked at deriving Robust Realized Variance (RRV); today let's see whether we can use RRV to estimate a bias for future S&P500 returns off regression-based interpolations.

So moving on with the S&P500 RRV worksheet, next to the 60-day (business day) realized variance we create a column for the average future 60-day log return of the S&P500, "dX.f60".

Here we can see that I took the average of the following 60 days of S&P500 log returns. So naturally this column must end 60 days prior to the date of our last available data.

Realized variance vs. future average returns regression fit

Next we highlight the columns "RVar(60)", "dX.f60" down to the last value in "dX.f60" and draw a scatter plot. On the plot, we then add a trendline with the highest R-squared (coefficient of determination) value.

The polynomial regression had the highest R-squared at 0.0307, which is not statistically significant enough to mean anything. However, that doesn't mean this is the end of the road. We can see that the bulk of average daily returns have been increasingly negative about a quarter (60 business days) following high realized variance. So what if we analyze only periods of high realized variance?

Here a new column, "RRV+", is created for RRV above a certain threshold, with cell G9 holding the threshold. Whenever the realized variance is below the threshold, this column gives a 0. Another new column, "dX.f60+", is created for the following average 60-day S&P500 daily returns (matching "RRV+" values); it gives NA() whenever "RRV+" = 0.


I picked a point in the realized variance range, 800, as the threshold for this example. Now let's run the regression between "RRV+" and "dX.f60+":






It looks like a much better fit from the get-go. The linear regression gave an R-squared of 0.7373, light years better than the earlier regression that included lower realized variance. This crude model could be extended and refined to estimate future index returns, or to develop risk management tactics, when realized variance is relatively high.
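The threshold-filtered regression can be sketched in Python (function names are hypothetical; this is plain OLS, the same fit Excel's linear trendline reports):

```python
def linfit(xs, ys):
    """Ordinary least squares slope, intercept, and R^2 (no NumPy required)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - slope * x - intercept) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return slope, intercept, 1.0 - ss_res / ss_tot

def fit_above_threshold(rvar, fwd_ret, threshold):
    """Regress future returns on realized variance only where the
    variance exceeds the threshold (the 'RRV+' idea)."""
    pairs = [(v, r) for v, r in zip(rvar, fwd_ret) if v > threshold]
    xs, ys = zip(*pairs)
    return linfit(list(xs), list(ys))
```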







Friday, December 7, 2012

Excel data analysis 5.0 (Robust Realized Variance)

Here I look at a couple of concepts around the S&P500 Robust Realized Variance (RRV), and its behavior with respect to the S&P500 index.

Why is variance important?
  • Implied variance across all strikes gives a much more complete feel for a product's option value, i.e. whether it's relatively cheap or expensive.
  • Profit from delta hedging (long gamma) is proportional to returns squared, i.e. Delta Hedged P&L = dX^2 * gamma / 2 - transaction costs, where dX = change in the underlying.
  • Variance swaps and variance futures have become quite popular, and these newer products are still priced relatively inefficiently compared to more developed products.

Data set up
I got the daily closing values of S&P500 off yahoo finance (symbol: ^gspc). The data is sorted with the basic stats at the top as mentioned in Excel data analysis 1.0. So in column C, "dX", we have the day to day log returns; 100*ln[X(t)/X(t-1)], where X = latest value, t = latest time step, in this case the day.

Robust Realized Variance (RRV)

Variance for a normal distribution is the average squared difference from the mean. Since returns on stock indices are not well represented by normal distributions, robust statistics are more practical here. As Median Absolute Deviation is to Standard Deviation, our RRV uses medians where ordinary variance uses means.

Here we can see that I took the 60-day RRV, annualized.
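The post doesn't spell out the exact cell formula, so here is one plausible construction of a rolling RRV, replacing the mean-based pieces of ordinary variance with medians. This is my assumption about the sheet's logic, not necessarily the author's exact formula:

```python
from statistics import median

def rrv(log_returns, window=60, periods_per_year=252):
    """Rolling Robust Realized Variance: the median squared deviation
    from the rolling median return, annualized. One plausible reading
    of 'use medians where ordinary variance uses means'."""
    out = []
    for i in range(window, len(log_returns) + 1):
        w = log_returns[i - window:i]
        m = median(w)
        out.append(median((r - m) ** 2 for r in w) * periods_per_year)
    return out
```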



RRV stats



Here we can get a feel for the S&P500 with respect to RRV.


It's pretty obvious that the index is more attractive for buyers when RRV is relatively high.

In the next post I will look at using RRV to analyze expected future S&P500 returns.












Wednesday, December 5, 2012

Fiscal Cliff 2012: Options Sentiment





With so much publicity around the coming US Fiscal Cliff event, is everyone expecting a drop in the financial securities? Let's find out what the equity option traders are doing.


The Options Clearing Corporation publishes Weekly Statistics Reports, and here we can get a feel for the general sentiment of the small traders (1-10 transaction size) and the smart money (FIRM). As option premiums are a result of supply/demand, we can compare the open-buy Call premiums against the Puts.

So for the week ending 11/30/2012, we see the following:

Small traders Open Buy Call Premium: $902,942,073.08
Small traders Open Buy Put Premium: $335,772,371
Call / Put Premium Ratio :  2.69

We can see that most of the small traders are relatively bullish and have a ton of upside bets going.

Institutional traders Open Buy Call Premium: $303,898,447
Institutional traders Open Buy Put Premium: $283,454,570.75
Call / Put Premium Ratio :  1.07

We can see that most of the institutional traders are relatively flat.


Tuesday, December 4, 2012

Excel data analysis 4.2 (Overnight Gap)

In this post we look at the change-in-price stats for overnight gaps, and their correlation to the subsequent open-to-close price moves.

Historical data
I used IYR open and closing prices for this example, starting from the split on 6/9/05. Daily prices are available at Yahoo Finance; see this post for data sorting. Once we have the data sorted, log returns for the overnight gap ("Gap") and the subsequent open-to-close price change ("dOC") can be derived.



Correlation between "Gap" and "dOC" is done with the correl() function. Here I do a 20-day moving correlation.
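The same gap/intraday split and the Pearson correlation can be sketched in Python (function names hypothetical; day 0 has no prior close, so both series start at day 1):

```python
import math

def log_ret(a, b):
    """Log return (in %) from a to b."""
    return 100 * math.log(b / a)

def gap_and_intraday(opens, closes):
    """Overnight gap (prev close -> open) and open-to-close move, in log %."""
    gaps = [log_ret(closes[i - 1], opens[i]) for i in range(1, len(opens))]
    doc = [log_ret(opens[i], closes[i]) for i in range(1, len(opens))]
    return gaps, doc

def correlation(xs, ys):
    """Pearson correlation, same as Excel's correl()."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)
```

For a moving version, apply `correlation` over a rolling 20-element window of the two lists.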

We end up with these stats


General analysis
Interestingly, we can see that for the sample period the average gap move was down, while the intraday moves had a net positive drift. With correlation near 0, it's unlikely that the overnight gap move offers any valuable edge for trading the following session.

Thursday, November 29, 2012

Physical Real estate investment risks (even in New Zealand!)

A lot of old-timers I've met believe in brick-and-mortar real estate investments, mainly because of something along the lines of "(ignoring maintenance, tax costs, inflation) my house's value has grown so much since I bought it in 1982..." This has led some to blindly take on heavy mortgages under the assumption that it's an easy, risk-free, leveraged profit, i.e. the kind of indirect bet that hurt firms like Lehman Brothers.

NZ real estate historical returns

NZ Reserve Bank

We can see that of the past 23 years, about 7 (30.4%) saw average house prices move negatively. On a leveraged upside bet, this could hurt significantly.

Fungibility and risks in physical real estate

Because real estate isn't fungible, physical property traders must manage a number of risks that do not exist within liquid financial products.

Turnover of US real estate vs. Stocks, Property Derivatives Pricing, Hedging and Applications
  1. Turnover in real estate market is much lower than most security markets, hence forcing traders/investors to take relatively long positions.
  2. In market downturns, the lack of liquidity may force the investor to accept significantly lower bids, if any exist, than desired. 
  3. If financing terms change unfavorably within the mortgage period, the lack of liquidity may likewise force the investor to accept significantly lower bids, if any exist.
  4. Physical real estate requires a ton of due diligence, a hefty implied expense at times.
  5. Physical real estate often involves very high transaction costs, again forcing some traders/investors to take relatively long positions.
  6. The bid/offer spreads are significantly large. 
  7. An investor who manages a portfolio of physical properties today would likely find it difficult to shift exposure from one sector of the market to another for the purpose of risk management.
  8. It is usually not possible to take a short position in physical real estate, making market risk hedging very difficult. 
  9. Risk of asset mismanagement is inherent with any 3rd party business, fund (or an uncle who's an alleged expert).

Wednesday, November 28, 2012

Excel Data Analysis 4.1 (Day-of-the-week, OVX)

Here we look at the day-to-day change in OVX (CBOE Crude Oil ETF Volatility Index) with respect to the day of the week. According to academic theory, theta decay progresses smoothly with respect to time; if that were true, then collecting theta over the weekend would be practically free money. Let's see whether that is exactly what most USO traders are doing.

Data preparations
OVX daily data is available at wikiposit. I like the way the data is already sorted going from top to bottom. For excel analysis, we download the csv format.

Again, we create space at the top for sample statistics to get a feel for the data (Excel Data Analysis 1.0). Then we can create a column for the change in OVX, "dOVX". I don't think there's a need for log returns here since implied volatility is already a percentage value, so dOVX is simply the difference between the latest value and the previous day's, i.e. OVX(t) - OVX(t-1).


 and we get this:
So we can see that like VXN, OVX has an upside skew.

Day of the week test

Next we create a column "WeekDay" in D with the weekday() formula. Keep in mind the default value for Sunday is 1, so to keep the values more "user friendly" I subtracted 1 from each cell.

Then we can use the following columns (E - I) for each day of the week. We will label the headers of these columns by the day of the week, 1 - 5. Then we could enter the formula in a single cell, and copy/paste it into the rest of the cells all the way down.

Logic: we check if the current row's weekday matches the column header, if so, we record that day's dOVX, if not, the cell leaves it blank. The $D14 means as we copy/paste this, it'll keep this value in column D, and for E$1, it keeps these values in the top row (column headers) as we copy/paste them.


Once we copy/paste that cell all the way through to I741, we end up with this

So we can see that there is a pretty significant weekend effect here: traders tend to sell on Fridays hoping to collect that "free theta", and buy back on Mondays.
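The weekday bucketing behind this table can be sketched in Python; `isoweekday()` already gives Monday=1 through Friday=5, matching the sheet's weekday()-1 convention (function name hypothetical):

```python
from datetime import date
from statistics import mean

def weekday_means(dates, dovx):
    """Average dOVX per weekday, keyed Monday=1 .. Friday=5."""
    buckets = {d: [] for d in range(1, 6)}
    for dt, chg in zip(dates, dovx):
        wd = dt.isoweekday()  # Monday=1 .. Sunday=7
        if wd <= 5:
            buckets[wd].append(chg)
    return {d: (mean(v) if v else None) for d, v in buckets.items()}
```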

A way to potentially exploit this would be to do the opposite, i.e. buy vol near the end of Friday and sell on Monday. Is this an edge likely to persist into the future? I'd think so, since blind faith in academic finance remains relatively strong in today's culture. However, whether this edge would overcome transaction costs and other implementation shortfalls requires more analysis.

Tuesday, November 27, 2012

Excel data analysis 4.0 (Index Vol Trades)

Here we look at a multi-parameter index option trading logic, using MAD (Median Absolute Deviation) with linear realized volatility forecasts (Excel Data Analysis 3.2). Coming from robust statistics, MAD appeals to me mainly because financial data distributions are usually not Gaussian.

Theory
Mentioned in the VIX white paper, VXN is a weighted average of Out-of The-Money NDX option implied volatilities. We can then estimate that an edge gained on the VXN can be realized in units of vega. So in this case we will see about getting an edge in the next-day's change in the VXN.

Data
For this post we will use data off VXN (CBOE NASDAQ100 Volatility Index) and QQQ (Powershares NASDAQ100 Index ETF). QQQ daily prices would be needed for realized volatility and MAD. For this example we use End-of-Day values.  
Data Range: 1/23/01 - 11/26/12

Once we get the data off Yahoo Finance, the first thing we need to do is get both series on the same page. Then we must match the dates.
What I do is create a temporary column that gives a 1 if the dates match for that row and 0 if not. A quick scroll down then lets us catch the mismatches and delete the unmatched dates.

Once the dates are matched, we could delete the second "Date" column, and sort the data so that the latest data ends at the bottom. As mentioned earlier (Excel Data Analysis 1.0), I then inserted some rows near the top for basic statistics.

Deriving MAD (Median Absolute Deviation)
We first need the day to day changes of QQQ, here named "dQ" in column D. Again we use logged returns for this, 100* ln[X(t)/X(t-1)]  (why logged returns?). I multiplied it by 100 so we don't need to use the % symbol.

Now let's get the MAD(20)

Since my data here starts at row 13, and I've decided to do a 20Day MAD, we will utilize a rolling 20row window off "dQ". We still need to multiply the value by Sqrt(252) for annualization.

One way to use MAD(20) is to analyze the ratio VXN/MAD to estimate whether these options are cheap or expensive. I created "rto.M" in column F for this.




Fundamentally, we already know that, as insurance, options usually trade at a premium to compensate option sellers for the risk. From the screenshot, we can see that VXN has been above 2x MAD roughly 50% of the time.
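MAD(20) and the rto.M ratio can be sketched as follows; aligning each day's VXN with the window ending on that day is my assumption about the sheet's layout:

```python
import math
from statistics import median

def mad(returns):
    """Median Absolute Deviation of a return window."""
    m = median(returns)
    return median(abs(r - m) for r in returns)

def vxn_to_mad_ratio(vxn, daily_returns, window=20, periods=252):
    """'rto.M': implied vol index over the annualized rolling MAD of returns.
    vxn[i-1] is the same day as the window's last return."""
    out = []
    for i in range(window, len(daily_returns) + 1):
        w = daily_returns[i - window:i]
        out.append(vxn[i - 1] / (mad(w) * math.sqrt(periods)))
    return out
```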

Next I added columns for the standard squared realized volatility "RV" and linear forecast "FcRV" explained in an earlier post (Excel Data Analysis 3.2). We end up with this:

Multi-parameter strategy set up

We will need separate sections for longs and shorts. Since the logic/formulas are the same, I will only do the shorts here. The basic idea is to derive signals, and match them with the following day's change in VXN.

Since we have 2 parameters, "rto.M" and "FcRV", we will need a 3rd column for the signals.

(FcRV): here we check whether the forecast volatility (FcRV) exceeds the current RV by at least the threshold in cell K13. I used "0" for the threshold so that every upside forecast gives a signal off this filter. It gives a 1 for a signal, blank for no signal.

(rto.M): here we check whether the VXN/MAD value (rto.M) is above the threshold in cell L13, so that we don't enter trades when options are too cheap. I used "1.6", the lower quartile value. It gives a 1 for a signal, blank for no signal.

Signals: this checks whether the sum of the above signals is 2; if so a trade signal is given, if not it is blank. The "Count" cell in M10 gives the total number of signals.
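The two-filter AND logic can be sketched as follows (names are hypothetical; the default thresholds mirror the post, with 1 = signal and 0 = none):

```python
def vol_trade_signals(rto_m, rv, fc_rv, rto_threshold=1.6, fc_threshold=0.0):
    """Signal = 1 only when BOTH filters agree: the forecast exceeds
    current RV by at least fc_threshold, and options aren't too cheap
    (VXN/MAD ratio above rto_threshold)."""
    sigs = []
    for ratio, current, forecast in zip(rto_m, rv, fc_rv):
        fc_ok = (forecast - current) > fc_threshold
        rto_ok = ratio > rto_threshold
        sigs.append(1 if (fc_ok and rto_ok) else 0)
    return sigs
```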

Deriving average returns, equity curve

Next we create a column for the future 1Day change in VXN, "dVXN". Note that in row 32, we used data from row33, therefore this data reaches down to 1 day before last, i.e. 2 trading days ago.

In column O then we could estimate our VXN edge, or units of Vega/Trade "Vega/T". We use the if() function, basically if a signal is given, we look at dVXN, if not, it's a blank.

For the equity curve, "Eq.Curve", it adds to its latest value by the Vega/T if a signal is given.

We can see that this edge offers the option trades an average of 0.17 vega/trade, before transaction costs.
With the signal count, that's roughly a trade every 3 days or so.


I plot the equity curve with the 2-D line chart. Make sure the date column is also highlighted so the dates appear in the chart; to do this, highlight one column, then hold Ctrl and click the header of the second column. So here is the estimated vega cumulatively earned off the above signals BEFORE transaction costs.

Historical performance stats 

Win%: This cell checks if there's a signal, if so, if it's a winner it gives a 1, if not a 0.

Wins: Here if the signal is a winner, we record it.

Losses: Here if the signal is a loser, we record it.

W/L Ratio: the average winner/loser size ratio, i.e. the absolute value of (average winner)/(average loser).
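These stats reduce to a few lines of Python (function name hypothetical; pass only the per-trade P&Ls, with blanks/non-signal days already excluded):

```python
from statistics import mean

def trade_stats(trade_pnls):
    """Win rate and |average winner / average loser| for a list of
    per-trade P&Ls."""
    wins = [p for p in trade_pnls if p > 0]
    losses = [p for p in trade_pnls if p < 0]
    win_pct = len(wins) / len(trade_pnls)
    wl_ratio = abs(mean(wins) / mean(losses)) if wins and losses else None
    return win_pct, wl_ratio
```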




Let's see the stats

So the numbers are encouraging. However, one must keep in mind that these estimates are before transaction costs and before considering risks around liquidity, execution precision, etc. Most of all, one must realize that the worst losing trades are never behind us, but ahead of us. Therefore, applying this logic to actual trading still requires work on effective risk and implementation shortfall management.

Feel free to contact me with questions, or for advice around Excel modeling and trading.






Friday, November 23, 2012

The EMH economist and the $100 bill

EMH: Efficient Market Hypothesis



An economist and his student walk down the street and come across a $100 bill. The student bends down to pick it up, yet the economist stops him: "Don't bother; if it were a REAL $100 bill, somebody would've taken it already."

Excel Data Analysis 3.2 (Volatility Forecasting off Linear Regression)

Today we look at forecasting RV (realized volatility) using linear regression for stock/index options trading. Generally speaking, an edge in estimating future realized volatility can be realized in units of vega off traded options; see Derman's paper, Trading Volatility, for details.

I'm still using the IYR data, and have updated the data (manually) up to Nov. 21, 2012.

Setting up linear regression

We need to match each day's (row's) realized volatility value with the future volatility, in this case the next 10 days. The calculation is exactly like the 20-day realized volatility, except instead of the past 20 days' squared returns we use the coming 10 days. Logically, this column ends 10 trading days before our last available data.

The future 10 day realized volatility is placed in the column F, RV.F10

 
Next we highlight the columns going from (E10, F10) down to the last row of RV.F10 at (E3127, F3127), and use the scatter plot.
Great, now the graph is up, we will have the 20Day RV on the X-axis, and RV.F10 on the Y-axis. Onto the regression fits, we right click the data on the chart and choose "add trendline". In the trendline menu, we want "Linear" type, and display "Equation on chart", "R-squared value on chart".

Remember that awesome slope-intercept form, Y = mX + b, from basic algebra? Well, here it is again in the equation. As for the R-squared value, or coefficient of determination, the higher it is, the better the equation represents the forecasting relationship.
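The same slope, intercept, and R-squared that the trendline displays can be computed directly; this is a plain OLS sketch with hypothetical function names:

```python
def rv_forecast_fit(rv20, rv_f10):
    """OLS fit of future 10-day RV on trailing 20-day RV:
    slope, intercept, and R^2 (what Excel's trendline displays)."""
    n = len(rv20)
    mx, my = sum(rv20) / n, sum(rv_f10) / n
    sxx = sum((x - mx) ** 2 for x in rv20)
    sxy = sum((x - mx) * (y - my) for x, y in zip(rv20, rv_f10))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - slope * x - intercept) ** 2 for x, y in zip(rv20, rv_f10))
    ss_tot = sum((y - my) ** 2 for y in rv_f10)
    return slope, intercept, 1.0 - ss_res / ss_tot

def forecast(rv_latest, slope, intercept):
    """Next-10-day RV forecast from the fitted line (Y = mX + b)."""
    return slope * rv_latest + intercept
```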








Adaptive slope/intercept values can be computed in-sheet (e.g. with Excel's SLOPE() and INTERCEPT() functions) as we update the worksheet day to day, so we don't need to re-plot a graph for these values every time.


And we can see that the numbers are identical.











Great, now we can create a column, G, for realized volatility forecast values, I've named it FC(RV,10). Each day's next 10Day RV forecast would use the latest day's 20Day RV with the slope + intercept values.

So let's see how much of an edge this model offers us.

We add a column in H for the day-to-day change in RV, named dRV. Then in column I, named RV.u(FC), we record each day's change in realized vol (dRV) whenever the previous day's forecast was higher than the previous day's ending RV. Vice versa for column J.

Note that in the if statement, I've used "" for value if false. Basically, it either gives the dRV value, or leaves it blank if false.



We can see from the average values that both directional forecasts had a slight edge, with the downside slightly more. Therefore in actual trading, I would not rely on this edge alone, but use it as a baseline bias for a feel of coming volatility.

Wednesday, November 21, 2012

Excel Data Analysis 3.1 (Volatility Filtering)



So here we apply a realized volatility (RV) filter as a watered down, simple form of quantitative risk management, to hopefully lower the negative swings of an invested asset. Let’s use the REIT, IYR again for this example. 

Cumulative Returns
Let’s create a column in our spreadsheet for Cumulative Returns, Rcumu, where we add up all the day to day log returns. Excel formula:


To chart a buy and hold return of IYR without having to include quartile numbers, we can scroll all the way down, highlight cell “A10”, then hold Shift and left click cell “B3134”.  Then at the top go to Insert -> Line -> 2D lines. 


And we can see that return off a purely buy and hold scheme would have experienced extremely high volatility. 


Let's see if we can reduce the risk with an RV filter. The idea is that no position is held when the RV rises above a certain threshold. In the example worksheet I've used cell "I3" for the threshold, with 50% as a starting point, and created a new column, Rcumu*, for the cumulative returns subject to the volatility threshold.
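A sketch of the filter logic; gating each day's return on the prior day's RV reading is my assumption about the sheet's alignment (and the 50% threshold is the post's starting point):

```python
def filtered_cumulative(returns, rv, threshold=50.0):
    """Cumulative log return where each day's return counts only if the
    prior day's realized vol was below the threshold (flat otherwise).
    The first day has no prior reading, so it is always included."""
    total, curve = 0.0, []
    for i, r in enumerate(returns):
        if i == 0 or rv[i - 1] < threshold:
            total += r
        curve.append(total)
    return curve
```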

Excel formulas  
 










Now we can compare the cumulative returns between with and without the 50% volatility filter.

And we can see that for this asset, over the sample period, the volatility filter reduced the buy-and-hold unrealized swings (risk) and increased overall expected return over time.