Infinite Decay: Iron Condor Differences and PostgreSQL Code

A bit more progress since the last blog post. In the same way that I believe long-term consistent execution will trump brilliant “ideas” or hidden “secrets”, I believe the extraordinary emerges out of the ordinary done in a consistent process-oriented way. In this trade strategy, I’m aiming for consistent forward progress. A bit more thinking to get a slightly better understanding. A bit more coding every day to build up a large trading system. A bit more every day. Relentless forward progress.

A. How is this strategy different from outright short Iron Condors?

1. Beta Slippage
I still need to research whether beta slippage is priced into the option chain, but even assuming it is, I can short both underlyings directly and eventually collect beta slippage twice, once from each leg (assuming, of course, sufficient time and margin). Perhaps the superior modified trade is to short both underlyings while holding cheaper far-OTM long calls for protection.
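To make the "collect beta slippage twice" idea concrete, here is a minimal Monte Carlo sketch (my own illustration, not code from the repo; the ±2x leverage, 60% volatility, and one-year horizon are made-up parameters) of how a daily-rebalanced leveraged/inverse pair typically decays even when the underlying goes nowhere:

```python
import numpy as np

# Minimal Monte Carlo sketch (hypothetical parameters, not the repo's model):
# typical-path decay of a daily-rebalanced +2x / -2x ETN pair versus a flat,
# high-volatility underlying -- the "beta slippage on both legs" effect.
rng = np.random.default_rng(0)

n_paths, n_days, sigma = 10_000, 252, 0.60        # 1 year of trading days, 60% annualized vol
daily = rng.normal(0.0, sigma / np.sqrt(n_days), size=(n_paths, n_days))

underlying = (1 + daily).prod(axis=1)             # buy-and-hold underlying
lev_long = (1 + 2 * daily).prod(axis=1)           # daily-rebalanced +2x product
lev_inverse = (1 - 2 * daily).prod(axis=1)        # daily-rebalanced -2x product

# Volatility drag shows up in the typical (median) path; the arithmetic mean
# stays roughly flat, which is exactly why the time-and-margin caveat matters.
print("median underlying:", round(float(np.median(underlying)), 3))
print("median +2x ETN   :", round(float(np.median(lev_long)), 3))
print("median -2x ETN   :", round(float(np.median(lev_inverse)), 3))
print("median P&L of shorting one unit of each leveraged ETN:",
      round(float(np.median(2 - lev_long - lev_inverse)), 3))
```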

2. Decay vs. Mean Reversion
ETN decay acts like the underlying in a condor being pulled toward the middle of the range, and that pull may be stronger than ordinary mean reversion.

3. Market Efficiency
The market may price condors more efficiently because they depend on one obvious underlying, whereas the "synthetic" position across two underlyings is less transparent.

4. To Investigate

  • Directly shorting two anticorrelated underlyings
  • Shorting option spreads on them
  • Shorting the underlyings while holding cheaper far-OTM long calls as protection against large moves

This will likely involve research into the liquidity of the underlyings and their option chains (bid-ask spreads and volume), the higher moments of the option chain across time and strike, and the frequency and size of moves.
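As a starting point for that research, here is a small pandas sketch (the column names and numbers are hypothetical placeholders, not the repo's schema or real data) computing a relative bid-ask spread per strike, the higher moments of daily returns, and the frequency and size of large moves:

```python
import pandas as pd

# Hypothetical option-chain snapshot; columns and values are placeholders.
chain = pd.DataFrame({
    "strike": [20, 22, 24, 26, 28],
    "bid":    [3.10, 1.95, 1.10, 0.55, 0.25],
    "ask":    [3.40, 2.15, 1.25, 0.70, 0.40],
    "volume": [150, 480, 920, 610, 130],
})

# Relative bid-ask spread: a simple per-strike liquidity proxy.
mid = (chain["bid"] + chain["ask"]) / 2
chain["rel_spread"] = (chain["ask"] - chain["bid"]) / mid
print(chain[["strike", "rel_spread", "volume"]])

# Higher moments of daily returns from a (hypothetical) price series.
prices = pd.Series([25.0, 24.1, 26.3, 23.8, 24.9, 22.7, 23.5, 25.2])
rets = prices.pct_change().dropna()
print("std:", rets.std(), "skew:", rets.skew(), "kurtosis:", rets.kurt())

# Frequency and size of moves: how often daily moves exceed a threshold.
big = rets.abs() > 0.05
print("frequency of >5% moves:", big.mean(), "average size:", rets.abs()[big].mean())
```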

B. Connection to PostgreSQL


Data from the Interactive Brokers API dumped to spreadsheets and uploaded to the Google Drive folder Infinite Decay is fine for initial data exploration and visualization, and it is very useful for sharing with anyone I interact with. However, more intensive data ingestion (scrubbing, transforming, updating, staging, etc.), read/write operations, backtesting and analysis, and algorithm development will require a much more capable data management system. I've decided to use PostgreSQL (PG) as my database for data operations.

Latest commits to the public GitHub repo: https://github.com/postbio/infinite_decay

  • SQL scripts to create the database tables for the ticker data and paired-ticker data
  • Connection to PG in Python using psycopg2, the most popular adapter
    • The code references a local instance of a PG database, but it can very easily be adapted to a cloud-hosted instance by simply changing the host in the psycopg2.connect() function (a minimal connection sketch follows this list)
  • Preliminary functions to load the tables with data from the IB API and CSVs
    • I will write more general CRUD functions in a database worker class; for now I just wanted to finish making the first connections
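Here is a minimal sketch of that connect-create-load flow. The table and column names, database name, and credentials below are illustrative placeholders, not the actual schema or code in the infinite_decay repo:

```python
import psycopg2

# Illustrative sketch only -- database name, credentials, and schema are
# placeholders, not the actual contents of the infinite_decay repo.
conn = psycopg2.connect(
    host="localhost",          # swap for the cloud endpoint to target a hosted instance
    dbname="infinite_decay",
    user="postgres",
    password="secret",
)

with conn, conn.cursor() as cur:
    # Create a simple daily-bar table for a single ticker.
    cur.execute("""
        CREATE TABLE IF NOT EXISTS ticker_data (
            ticker   TEXT    NOT NULL,
            bar_date DATE    NOT NULL,
            open     NUMERIC,
            high     NUMERIC,
            low      NUMERIC,
            close    NUMERIC,
            volume   BIGINT,
            PRIMARY KEY (ticker, bar_date)
        );
    """)

    # Load one row as it might arrive from the IB API or a CSV dump.
    cur.execute(
        "INSERT INTO ticker_data (ticker, bar_date, open, high, low, close, volume) "
        "VALUES (%s, %s, %s, %s, %s, %s, %s) ON CONFLICT DO NOTHING",
        ("UVXY", "2021-05-03", 5.12, 5.40, 4.98, 5.05, 12_345_678),
    )

conn.close()
```

Changing host="localhost" to a cloud endpoint is the only edit needed to point the same code at a hosted instance, as noted in the list above.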

Tech commentary: I like the feel of SQL queries better than NoSQL, since the query language is very expressive yet lightweight. Technically and information-theoretically, it is of course possible to use either and to replicate one in the other. What's really cool is that PG has a data type called "json", which allows JSON documents to be stored and lets you emulate NoSQL. This is extremely useful for combining the expressiveness of SQL with the universality of JSON: for example, API endpoints that speak JSON can be built with minimal effort, and the open-source search engine Solr is effectively a JSON/NoSQL content management system.
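A quick illustration of that "json" column type, again with made-up table and column names (the same code works with "jsonb", PG's binary variant, which additionally supports indexing):

```python
import psycopg2
from psycopg2.extras import Json

# Sketch of PG's "json" column type; names are placeholders, not repo code.
conn = psycopg2.connect(host="localhost", dbname="infinite_decay",
                        user="postgres", password="secret")

with conn, conn.cursor() as cur:
    cur.execute("""
        CREATE TABLE IF NOT EXISTS api_payloads (
            id      SERIAL PRIMARY KEY,
            payload JSON NOT NULL
        );
    """)

    # Store an arbitrary document, NoSQL-style.
    doc = {"ticker": "VXX", "bars": [{"date": "2021-05-03", "close": 41.2}]}
    cur.execute("INSERT INTO api_payloads (payload) VALUES (%s)", (Json(doc),))

    # ...and still query it with plain SQL via the ->> operator.
    cur.execute("SELECT payload ->> 'ticker' FROM api_payloads")
    print(cur.fetchall())

conn.close()
```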

Author: postbio
