Please see the attached image.
“Only two out of twenty tokens have correlations beneath this threshold, namely McDonald’s Corporation and Gamestop Corporation with respective correlation coefficients of 0,79 and 0,93.”
Buyse, J. (2021). Impact assessment of digital assets on securities markets [Master's thesis, Universiteit Gent]. https://libstore.ugent.be/fulltxt/RUG01/003/010/051/RUG01-003010051_2021_0001_AC.pdf
Thanks for posting this, very juicy, let’s dive in 💎 🤙
“Tokenised stocks are an alternative to their traditional counterpart. In order for tokenised stocks to be a perfect substitute the price of tokenised stocks has to be perfectly correlated to the price of the traditional stock.”
Okay, so tokenized stocks only function as a proxy when the prices are synced. Gotcha, this is the underlying assumption at play here.
“The analysis is based on 20 randomly selected tokenised stocks with a varying market cap and liquidity.”
Okay, so they looked at tokenized stocks from a random sample.
“Only two out of twenty tokens have correlations beneath this threshold, namely McDonald’s Corporation and Gamestop Corporation with respective correlation coefficients of 0,79 and 0,93.”
Alright, the good stuff. Of the tokenized stocks in the random sample, two deviate greatly from their peers in their correlation coefficients*. Which is stat slang for saying that the underlying assumption of “perfectly correlated” is quite imperfect: 79% for McDonald’s and 93% for GME.
Price discovery is quite blurry in that 7% spread, presumably to the benefit of market makers and HFT firms and at the expense of actual investors. E.g., you want to buy a token (for some reason), so your exchange charges you the price of the real stock, then delivers you a token and pockets the difference.
The conclusion here is that GME is a poor tokenized stock because the correlation coefficient is an abysmal 93%; the 7% variance in price discovery is fucking embarrassing for the SEC’s mandate** to maintain fair, orderly, and efficient markets.
-
*A correlation coefficient is a number between -1 and 1 that tells you the strength and direction of a relationship between variables. https://www.scribbr.com/statistics/correlation-coefficient/
**The SEC’s stated mission is to protect investors; maintain fair, orderly, and efficient markets; and facilitate capital formation. https://www.sec.gov/about
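If you want to see what that number actually measures, here’s a minimal sketch in Python. The price series are made-up placeholders, not the thesis data:

```python
# Pearson correlation between a stock's price series and its token's
# price series. These prices are hypothetical, not thesis data.
import numpy as np

stock_prices = np.array([230.1, 231.4, 229.8, 233.0, 234.2])
token_prices = np.array([230.0, 231.0, 230.5, 232.1, 233.8])

# np.corrcoef returns the 2x2 correlation matrix; element [0, 1] is r.
r = np.corrcoef(stock_prices, token_prices)[0, 1]
print(f"Pearson r = {r:.2f}")  # r = 1.0 would mean a perfect substitute
```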
Thank you for your analysis!
-
ITT: Tell me you don’t understand statistics and probability without telling me you don’t understand statistics and probability.
“you don’t understand statistics and probability”
Dude, the post is a master’s thesis from Ghent University. It is at least as statistically rigorous as any comment on Lemmy.
Oh, a master’s thesis? Oh WOW 😲😳!
If you don’t immediately see the issue in the above statement, then Google “Pearson’s correlation coefficient” and spend a little time reading.
It should be glaringly obvious why the statement made is both true and also utterly devoid of meaning. If it’s not, look at what a Bonferroni adjustment is and why it’s important when calculating multiple statistics, as is being done here.
The sources and methods are laid out in the thesis. The author, his advisor, the people who upvoted this, and I thought that this idiosyncrasy was meaningful. On the other hand, some sarcastic ding-dong on the Internet disagrees…
“Only two out of twenty tokens have correlations beneath this threshold, namely McDonald’s Corporation and Gamestop Corporation with respective correlation coefficients of 0,79 and 0,93.”
Two out of twenty correlations falling below that threshold is roughly the rate at which you would expect false positives at that cutoff. Because they’re comparing multiple statistics (multiple tests), they need to account for that, which is where a Bonferroni adjustment comes into play. You have to account for the fact that you’re comparing multiple statistics simultaneously, so based on randomness alone, some will come out significant even when they aren’t. Posting random pages from a student’s MA thesis is, well, just kinda sad. I’m sorry you lack basic statistical literacy. I’ve given you some material you can use to correct that.
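To make the multiple-testing point concrete, here’s a minimal simulation in Python with made-up p-values (one per token), not the thesis data:

```python
# Simulate twenty tests where every null hypothesis is true
# (p-values uniform on [0, 1]), then count how many clear alpha = 0.05
# with and without a Bonferroni adjustment.
import numpy as np

rng = np.random.default_rng(42)
p_values = rng.uniform(size=20)  # hypothetical p-values, one per token

alpha = 0.05
naive = np.sum(p_values < alpha)           # expect about 1 hit by chance alone
corrected = np.sum(p_values < alpha / 20)  # Bonferroni: divide alpha by the number of tests

print(f"uncorrected 'significant' results: {naive}")
print(f"Bonferroni-corrected results:      {corrected}")
```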
“you don’t understand statistics […] spend a little time reading […] It should be glaringly obvious […] utterly devoid of meaning […] Posting random pages […] just kinda sad […] you lack basic statistical literacy”
You are being rude, and this idiosyncrasy is significant. I will try to explain it in simple terms. Although the price of a stock varies over time, at any given time, the price should be approximately the same across brokers. (And for tokenized stocks to substitute for non-tokenized stocks, their prices also need to correspond.) When I buy a stock on Charles Schwab, then the price should be the same as when you buy the same stock on Fidelity. If you get a different price from me, higher or lower, then the price of the stock is wrong. No Bonferroni correction necessary. It doesn’t matter whether this happens for every stock or just one idiosyncratic stock. If the price is different, then the price is wrong.
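As a toy illustration of that argument (hypothetical quotes and a hypothetical tolerance, not real prices):

```python
# The same stock quoted at two brokers should agree to within normal
# market noise; a persistent gap means one of the prices is wrong.
schwab_price = 184.20    # hypothetical quote at one broker
fidelity_price = 184.95  # hypothetical quote at another

tolerance = 0.05  # dollars of acceptable cross-venue noise (assumed)
gap = abs(schwab_price - fidelity_price)
if gap > tolerance:
    print(f"${gap:.2f} discrepancy -- one of these prices is wrong")
else:
    print("prices agree within tolerance")
```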
“When I buy a stock on Charles Schwab, then the price should be the same as when you buy the same stock on Fidelity. If you get a different price from me, higher or lower, then the price of the stock is wrong.”
Tell me you don’t understand how a market works without telling me you don’t understand how a market works.
National Best Bid and Offer (NBBO) is a regulation by the United States Securities and Exchange Commission that requires brokers to execute customer trades at the best available (lowest) ask price when buying securities, and the best available (highest) bid price when selling securities, as governed by Regulation NMS.
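A minimal sketch of how an NBBO is assembled, with hypothetical venues and quotes:

```python
# Build a National Best Bid and Offer from per-venue quotes:
# the highest bid and the lowest ask across venues. The venues and
# numbers here are hypothetical.
quotes = {
    "NYSE":   {"bid": 25.10, "ask": 25.14},
    "Nasdaq": {"bid": 25.11, "ask": 25.13},
    "IEX":    {"bid": 25.09, "ask": 25.15},
}

best_bid = max(q["bid"] for q in quotes.values())  # best price a seller receives
best_ask = min(q["ask"] for q in quotes.values())  # best price a buyer pays

print(f"NBBO: {best_bid} x {best_ask}")
# A broker routing a customer buy here must fill at 25.13 or better.
```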
Speaking as someone who is not an expert in statistics: next time, please include information on what is wrong (or being misinterpreted) in the initial comment. I’m sure we’ll all be able to better improve our understanding that way.
I did.
Was referring to this initial comment in case that was not clear: https://lemmy.whynotdrs.org/comment/1835267
Bit rude