  Jake Hanson

My Experience with Integrated Information Theory (IIT)

6/24/2020

19 Comments

 
During my third year of grad school, my advisor asked if I was interested in contributing to a special edition of the journal Entropy on the topic of Integrated Information Theory (IIT). My understanding at the time was that IIT was gaining traction as an interesting theory of complexity with a rich mathematical framework, so I welcomed the opportunity. Not only that, but the complexity measure "Phi" at the heart of the theory was a candidate measure of consciousness, meaning that the information-theoretic quantity that IIT computes was supposedly one and the same as subjective experience.

I was immediately skeptical that any scalar mathematical measure could quantify consciousness - a skepticism I'm sure anyone not well versed in IIT can empathize with - but it seemed like a lot of people, with a lot more experience than me, were totally on board with Phi as the answer to the hard problem of consciousness (Max Tegmark, for example). Thus, I put my initial doubts aside and decided I had to dig through the details of the theory before I could assess its validity.

Early on, I was captivated by the spirit of IIT. My biggest concern was how one can go from math to consciousness, and IIT speaks directly to this. The basic idea is that if you want to measure consciousness, you have to start with a phenomenological understanding of "what it is like" to be conscious and, from this, "derive" the properties of physical systems that can instantiate this phenomenology. For example, our left and right visual fields are experienced as a single "unified whole", which means information from both eyes must be shared at some point to account for this experience of a unified visual field. Based strictly on this idea, one can posit that physical systems that lack the ability to exchange information necessarily lack consciousness (or take part in two isolated conscious experiences), as the ability to physically "integrate information" seems necessary in order to generate the phenomenal experience of a unified whole. And, in this way, the mathematical formalism of IIT is built.
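To put a toy number on this intuition (my own illustration, not part of IIT's actual formalism), one can use plain mutual information: two "visual fields" that never exchange information share zero bits, while fully correlated ones share everything. A minimal sketch:

```python
from collections import Counter
from math import log2

def mutual_information(pairs):
    """Empirical I(X;Y) in bits from a list of (x, y) samples."""
    n = len(pairs)
    pxy = Counter(pairs)                    # joint distribution
    px = Counter(x for x, _ in pairs)       # marginal of X
    py = Counter(y for _, y in pairs)       # marginal of Y
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# Two toy "visual fields": one pair always agrees (information is
# shared), the other is statistically independent (nothing is shared).
shared = [(0, 0), (1, 1), (0, 0), (1, 1)]
isolated = [(0, 0), (0, 1), (1, 0), (1, 1)]

print(mutual_information(shared))    # 1.0 bit
print(mutual_information(isolated))  # 0.0 bits
```

IIT's actual "integrated information" is a causal, partition-based quantity rather than plain mutual information, but the zero-sharing case captures the same intuition: no exchange, no integration.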

Mathematical Problems with IIT

Unfortunately, my honeymoon phase with IIT was short-lived. Qualitatively, the theory is great, and there is even an extensive vocabulary invented to go hand-in-hand with the mathematics of the theory ("qualia spaces", "autonomy", "agency", etc.), but the actual math behind these words is horrendous. First, the process of calculating "Phi" (IIT's measure of consciousness) is a nested optimization inside of a nested optimization inside of yet another nested optimization. At each step, one applies the axioms of the theory in order to calculate a local phi value ("little phi"), then compares these local phi values to each other in order to get a mesoscopic phi value, which is in turn compared to the other mesoscopic values, and so on. Keeping track of all these phi values is extremely tedious to do by hand, which is why I doubt many people have ever actually calculated Phi ("big phi").
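To give a feel for the nesting, here is a structural sketch only - `toy_distance` is a made-up stand-in for the repertoire comparisons the real theory performs, and the loop structure is a cartoon of the optimization shape, not the actual IIT 3.0 algorithm:

```python
from itertools import chain, combinations

def subsets(xs):
    """All non-empty subsets, used here for mechanisms and purviews."""
    xs = list(xs)
    return list(chain.from_iterable(combinations(xs, r)
                                    for r in range(1, len(xs) + 1)))

# Hypothetical stand-in for the distance between partitioned and
# unpartitioned repertoires; the real calculation is far more involved.
def toy_distance(mechanism, purview, cut):
    return abs(sum(mechanism) - sum(purview) + cut) % 4

def big_phi(nodes, cuts=(0, 1, 2)):
    phis = []
    for cut in cuts:                          # outermost: system cuts
        total = 0
        for m in subsets(nodes):              # middle: mechanisms
            # innermost: optimize little phi over candidate purviews
            total += min(toy_distance(m, p, cut) for p in subsets(nodes))
        phis.append(total)
    return min(phis)                          # Phi of the minimum cut
```

Even in this toy, the innermost `min` is where trouble can hide: nothing says what to do when several candidates tie.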

In fact, I have a better reason to doubt this, as it was a little-known fact at the time that if you try to calculate Phi for even the simplest possible system (e.g. an AND gate and an OR gate connected to each other), you will find that it's impossible. The reason is that the axioms of IIT do not address what to do in the event that there are degenerate local phi values. In particular, the exclusion axiom states that, as part of the optimization process, you must choose the lowest phi value as the "core cause" (another bit of vocab) for a given "mechanism". But if there are two different core causes with the same phi value (an extremely common occurrence), then IIT does not specify which core cause to choose, and your final results are extremely sensitive to this choice.
Figure 1 - One of several places in the PyPhi source code where degenerate values should be explicitly considered but aren't.
Consequently, in calculating the Phi value for a simple AND/OR gate circuit (loosely analogous to a brain with only two neurons), I found that there were 33 different Phi ("big Phi") values associated with different choices for the core cause/effect, and these values span the entire range of possible Phi values for the system. Thus, Phi is completely undefined. I dug into the PyPhi package, which is what everyone uses to calculate Phi in practice, and found that, sure enough, it just arbitrarily grabs the first degenerate Phi value rather than comparing them in any principled fashion (Figure 1).
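The failure mode is easy to reproduce generically (this is an illustration of the tie-breaking behavior with made-up labels, not PyPhi's actual source): Python's `min` over tied candidates silently keeps whichever one it encounters first, so the winner depends on nothing more principled than iteration order.

```python
# Two orderings of the same tied candidate "core causes"; min() keeps
# whichever tied candidate it encounters first, so the choice -- and
# everything computed downstream of it -- depends on iteration order.
candidates_a = [("cause_1", 0.5), ("cause_2", 0.5)]
candidates_b = [("cause_2", 0.5), ("cause_1", 0.5)]

pick_a = min(candidates_a, key=lambda c: c[1])
pick_b = min(candidates_b, key=lambda c: c[1])

print(pick_a[0], pick_b[0])  # identical phi values, different winners
```

Since the final Phi depends on which core cause is selected at each level of the nesting, every such tie compounds into a different "big Phi" at the top.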

At this point, I started to have serious doubts about the validity of IIT. Here we are, twenty years into a proposed theory of consciousness based on a mathematical measure called Phi that isn't even defined! I was blown away by the fact that IIT was so popular, yet no one talked about the fact that Phi isn't unique and couldn't actually be calculated. I started to believe that perhaps all the rhetoric surrounding Phi was responsible for its popularity and that the actual physical underpinnings were nonexistent. In other words, I started to think this whole theory might have nothing to do with reality. Given that this is the most popular theory of consciousness in contemporary neuroscience and has been growing exponentially over the last two decades (Figure 2), this was not a heartening thought to have as a graduate student with no first-author papers, especially under a tight deadline.
Figure 2 - The exponential growth in popularity of IIT over the past two decades. Data was gathered from the SCOPUS database and includes citations to all articles with the name of the theory in the title, abstract, or keywords.
I was now six months into learning IIT and felt that I needed to publish something soon in order to justify the amount of time I'd spent learning the mathematical formalism of the theory. Unfortunately, the only "result" I had was that Phi could not be calculated at all, which was not an easy result to submit to a special edition devoted entirely to IIT. Not only that, but there was at least one relatively obscure place in the literature where this problem had been mentioned, so the result was not even new. Had I been more skilled in the art of writing papers, I could probably have pulled the paper off, but the details were extremely technical and I wasn't entirely sure what the take-home message was. In addition, I didn't like the idea of a strictly deconstructive contribution with no real remedy to the problem, though I have since changed my mind on this matter. Regardless, I went about looking for a better way to point out the mathematical problems with Phi, hoping for a simple remedy.

Epistemological Problems with IIT

The simple remedy did not exist, as all I could manage to prove were increasingly better reasons why IIT must be fatally flawed. In particular, IIT assumes that physical feedback is a necessary condition for consciousness, such that any circuit/brain that lacks feedback necessarily lacks consciousness. Yet, there is a theorem in automata theory (the Krohn-Rhodes theorem) that states anything that can be done with feedback can be done without it, simply by "unfolding" the feedback connections present in a circuit. In other words, from an engineering perspective, there is absolutely nothing special about the presence of feedback - you can always get rid of the feedback in favor of strictly feedforward logical connections. In light of this, I began to wonder what happens to the Phi value of a system under Krohn-Rhodes decomposition. In theory, it should be possible to construct two different circuits/brains that execute the exact same input-output behavior (philosophical zombies), one with and one without the presence of feedback. Since feedback is a necessary condition for consciousness in IIT, this implies that one of these circuits would have to be conscious while the other is not.
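A minimal cartoon of this unfolding (my own toy example using parity/XOR, far simpler than a true Krohn-Rhodes decomposition): one machine computes parity via a feedback loop, and a strictly feedforward circuit realizes the exact same input-output behavior for inputs of fixed length.

```python
from itertools import product

# "Recurrent" machine: one XOR node whose output loops back as state.
def recurrent_parity(bits):
    state = 0
    for b in bits:
        state ^= b              # feedback connection: state -> state
    return state

# The same function "unfolded" for inputs of fixed length 3: two
# distinct XOR gates wired strictly left-to-right, no feedback at all.
def feedforward_parity(b0, b1, b2):
    g1 = b0 ^ b1                # gate 1
    g2 = g1 ^ b2                # gate 2 only sees earlier gates
    return g2

# Identical input-output behavior on every length-3 input:
assert all(recurrent_parity([a, b, c]) == feedforward_parity(a, b, c)
           for a, b, c in product([0, 1], repeat=3))
```

Under IIT, the recurrent version can carry nonzero Phi while the feedforward version is assigned exactly zero, despite the two being behaviorally indistinguishable.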

I went about constructing simple examples to explore this idea, proving that it is always possible to fix the (outward) function of a system while changing its (internal) Phi value. Thus, Phi is completely decoupled from function. It had now been another six months since starting this project, and I thought this would have to be good enough for a paper. Having shown that Phi has nothing to do with input-output behavior, it seemed to me that whatever Phi claims to be measuring, it can't be justified in terms of subjective experience, as one must simply assume that a difference in subjective experience exists in the absence of additional functional consequences - an assumption that can't be tested.

I started writing up these results but was plagued by the fact that IIT explicitly addresses the existence of functionally identical systems with different Phi values as part of the 2014 formulation of IIT 3.0. In other words, proponents of IIT were well aware of the fact that philosophical zombies could exist and were somehow completely OK with the idea that what justified the difference in subjective experience (measured by Phi) was Phi itself. Thus, it seemed my contribution was perhaps an interesting way of constructing these systems, but something proponents of IIT would easily brush off as inconsequential, as they openly admit that the theory embraces such systems. The deeper issue at hand was one of epistemic justification. How is it that proponents of IIT could justify that what Phi is measuring is in fact consciousness? In the absence of functional differences, how can one say that a zombie system lacks phenomenal properties such as a "unified experience" without simply assuming it to be so? It seemed that the rhetoric being used to justify Phi as a measure of consciousness was grounded entirely in input-output behavior but, if the input-output behavior was fixed, Phi became its own justification scheme, used as both a means and an end.

There was no place this problem was more readily apparent than experiments designed to falsify IIT in a laboratory setting. According to its proponents, if Phi was shown to increase in response to behavioral states we commonly associate with lower levels of subjective experience (e.g. sleep), then the theory was falsified. Yet, the logical validity of this entire argument is based on the premise that outward behavior is an accurate reflection of internal subjective experience. In other words, for this experiment to validate/falsify IIT, one must believe the assumption that when a system appears to be asleep it objectively has a lower subjective experience than when a system is awake - the same assumption that proponents of IIT reject in defense of philosophical zombies! Thus, it seems proponents of IIT wanted to have their cake and eat it too. Experimental falsification was one of the reasons for IIT's meteoric rise to fame and, indeed, multimillion dollar efforts are still underway to test IIT in a laboratory setting. Yet, the Krohn-Rhodes theorem guarantees that whatever the results of these experiments are, it is possible that the opposite results exist, as what is being measured internally has nothing to do with the input-output behavior of the system. Thus, there is no reason to experimentally test IIT, as it is already falsified a priori...

Resolution

Fortunately, at the same time I was submitting my paper on Krohn-Rhodes decomposition, the much more popular "unfolding argument" was published by Doerig et al., which essentially articulated the same points I was trying to make but with fewer technical details and a much cleaner narrative. In short, the unfolding argument is that feed-forward neural networks (NNs) can realize the same input-output behavior as recurrent neural networks, and therefore one can fix the input-output behavior of a system while changing its Phi value (the primary difference between their work and mine being the use of NNs instead of deterministic finite-state automata). More importantly, Doerig et al. clearly articulated that this implies one of two possibilities: either the theory is falsified due to the fact you can get arbitrary Phi values for fixed behavior, or the theory is inherently unfalsifiable (if one insists that Phi can be used to justify the difference in subjective experience under fixed input-output conditions). It was this latter implication that I had really struggled to pin down. I was aware of the fact that Phi was changing without clear justification in terms of behavior, but I didn't clearly recognize that this meant the theory is metaphysical if one continues to insist that Phi is the true solution. In other words, I was trying to figure out how to convince believers that Phi is incorrect when in reality the best I could ever do is convince them it is unscientific.

Generalizations of the unfolding argument quickly followed, in which the role of inference was clearly defined [Kleiner and Hoel, 2020]. Crucially, what is needed to test a theory of consciousness are results from an independent inference procedure, such as the inference that sleep is indicative of lower levels of subjective experience. This inference must be made independent of any theoretical framework and used as the benchmark to which predictions from a given theory are compared. If the prediction from the theory doesn't match the results from the inference procedure (e.g. the theory predicts high consciousness when asleep and low consciousness when awake), then the theory is falsified. Furthermore, if one assumes that independent inference procedures are based on input-output behavior such as sleep (an assumption that seems unavoidable), then the ability to vary the prediction from a theory of consciousness under fixed input-output automatically implies the theory is falsified, as at least one of the predictions is logically guaranteed to disagree with the results from the inference procedure.
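The logic of that last step is simple enough to write down explicitly. What follows is a toy schema with made-up labels (the system names, behaviors, and prediction values are all illustrative), not anyone's actual experimental protocol:

```python
# Toy version of the falsification schema: an independent inference
# maps behavior to a consciousness label, while a theory maps internal
# structure to a prediction. All names and values here are illustrative.

def independent_inference(behavior):
    return {"awake": "high", "asleep": "low"}[behavior]

# Two systems with identical behavior but different internal structure
# (e.g. recurrent vs. unfolded), hence different theory predictions:
predictions = {"recurrent": "high", "feedforward": "low"}

behavior = "awake"
benchmark = independent_inference(behavior)

# If predictions differ under fixed behavior, at least one must
# disagree with the benchmark, guaranteeing a falsifying case.
disagreements = [sys for sys, p in predictions.items() if p != benchmark]
assert len(disagreements) >= 1
```

Whichever label the independent inference assigns, two differing predictions over the same behavior cannot both match it.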

With these new results (formalization of unfolding and falsification) in hand, I was able to ground everything I had done in terms of these increasingly familiar formalisms. For example, I could now prove that the Krohn-Rhodes theorem falsifies any theory of consciousness that assumes feedback as a necessary condition, such as IIT. Thus, I was finally able to translate the intuition that motivated my original line of arguments into concrete mathematical proofs - solidifying to myself that IIT is indeed ill-fated.

On the Future of IIT

Going forward, it seems the need for an independent inference procedure based on input-output behavior is a major epistemological concern that theories of consciousness must contend with. If behavior is the ultimate arbiter, then theories of consciousness must be invariant with respect to fixed input-output behavior if they are to avoid a priori falsification. In other words, falsifiability boils down to what we can infer from behavior, and what we can infer is pretty much limited to our "folk psychology" understanding of which behaviors are and are not associated with consciousness. For this reason, I do not think the future of consciousness research is bright, as rich mathematical theories must be given up in favor of behaviorally falsifiable theories - a throwback to the Turing test.

As a case study, however, IIT remains extremely interesting to me. How is it that the theory is so popular given that it is both quantitatively and qualitatively so poorly defined? Even in light of the unfolding argument and mathematical proofs of falsification, I don't see proponents of IIT giving it up any time soon. I have met many of them in person, and for whatever reason IIT seems to be a disproportionately large part of their identity - certainly much more so than any other theory I've come across. Of course, this is not unanimous, and I have found plenty of people on both sides of the debate willing to discuss IIT with an open mind, but there is certainly a core of proponents of IIT who talk of constellations of concepts in qualia space with an air of superiority that makes you feel like they must know something you don't in order to justify such a seemingly strong belief in their theory.

But the math doesn't lie, and I'm now convinced that this is some sort of social or psychological phenomenon in which the culture of IIT attracts believers for reasons that are anything but scientific - perhaps by promising an answer to one of the most difficult existential questions. While I don't personally like or approve of this approach to science, I can't help but recognize that one of the reasons we have come so far in formalizing the epistemic problems surrounding consciousness so quickly is that seemingly solid arguments against IIT did nothing to detract from its fan base. Had I been the inventor of IIT, the existence of philosophical zombies would have been a deal-breaker for me, as there are strong logical indictments against any theory that admits them [e.g. Harnad 1995] and IIT is clearly not immune to these arguments. But IIT refused to believe these indictments and, in doing so, forced stronger and stronger logical arguments out of those who dissented. Thus, IIT's refusal to be easily thrown away demanded that the best possible arguments be brought forth against it - arguments which, in their own right, apply much more generally than IIT.

In light of this, I am curious to see what happens to IIT in the next five years. Ideally, I would hope to see an abrupt decline in the use of Phi and a rise in emphasis on the problems associated with the epistemology of consciousness, signaling the acceptance of the unfolding argument. However, I am not naive about the institutional inertia behind this theory. Not only are millions of dollars being spent to test it in a lab, but dozens of graduate students, postdocs, and professors have devoted significant time to pushing this theory forward in some form or another. To accept the notion that one must go back to step one (framing the problem) after so many years of hard work is a difficult thing to do. This, in combination with the historical tendency for proponents of IIT to pivot rather than truly address contradictions in the theory, makes me think it is equally likely that Phi lives on under a different mathematical guise, with equally fatal problems buried under a mountain of confusing jargon. If this is the case, I probably will not give IIT the benefit of the doubt again.
Comments
James of Seattle
8/14/2020 11:48:19 am

Thanks for this awesome review. I’ve been trying to follow the recent papers about the unfolding stuff, and Hoel’s paper, but having read this post I feel much better about my shorthand summary:
1. Any theory that is testable will necessarily rely on measuring behavior of some system
2. Any given behavior is independent of Phi
3. Phi is not a testable value for a given theory.

That said, I would not throw out the entire theory and start from scratch. I think IIT has a lot to say about how human consciousness works. For example, I think the concept of "qualia space" is real and important. See, for example, Chris Eliasmith's Semantic Pointer Architecture. You just need to jettison the sillier concepts, like Phi *is* consciousness, as opposed to a correlate of a degree or "amount" of consciousness.

JAKE HANSON
8/14/2020 02:08:48 pm

Hi James,

Your summary is great! That is exactly the line of reasoning I was trying to convey. You may be right that IIT should not be jettisoned altogether. As an outsider, I admit that I don't come close to fully understanding the merits of IIT within the field. All I was really trying to do was understand the theory so I could apply it, but I ended up going down an epistemological rabbit hole.

Best,
Jake

Isaac
8/14/2020 12:28:26 pm

I have mixed feelings about this read.

The particular choice of optimisations in IIT's formalisms also felt somewhat arbitrary to me, not being uniquely derived from the axioms. For instance, why take the Minimum Information Partition as opposed to computing little phi from all causes?

That said, I am very optimistic about IIT. The fact that the math is evolving to become more bullet-proof is a good thing; a lighthouse in a sea of mediocrity in all the philosophical and neuroscientific literature on consciousness. Adam Barrett, a mathematical physicist, has written more extensively about the theoretical problems that bar IIT from being well-defined: the choice of distance function is arbitrary and makes results very different, it is only defined for Markovian systems, and more generally it needs a continuous-field generalization to be more applicable and more fundamental. That said, Barrett means that as constructive criticism, and he actually recognizes that even cruder versions of IIT have more empirical achievements under their belt than rival theories.

I'm also amused by the disparity of complaints regarding IIT's math. I was directed here by Matthias Michel, a philosopher who actually complains because he finds the math too advanced. In the end, we must keep in mind that Tononi was primarily trained in medicine and psychiatry. I find it noteworthy that he's taken IIT this far into formal and philosophical territory.

As to the verificationist quibbles, I am afraid such problems are inherent to the problem of consciousness. Any theory will be faced with the problem of not being able to observe a system's putative subjectivity in order to falsify its ideas, except in the case of self-experimentation. You seem to be unaware of the mountains of objections to mere input-output behaviour as a model for consciousness, which was one of the motivations behind IIT from the onset. Leaving the Krohn-Rhodes theorem aside, the simplest Turing-complete system, say Rule 110, is capable of implementing any input-output function, despite being ontologically and causally extremely simple.

I'm unconvinced that differences in phenomenology given functionally-equivalent mechanisms are unfalsifiable. Perhaps, as David Chalmers has suggested in the fading-qualia and inverted-qualia thought experiments, it would be very unlikely for a conscious human being to experience a change in its qualia which it utterly fails to notice. Perhaps an experiment could try to substitute a module with a functionally equivalent counterpart that changes the causal structure according to IIT.

Avoidable or not, I also object to the idea of abandoning one of the most promising hypotheses to _only_ concentrate on an epistemological conundrum; those sentences of yours come across as extremely ideological and reactionary... not to mention unproductive. Hoel and Kleiner would actually agree to a more pragmatic approach to testing the available explanations. Standard physics is plagued with epistemological conundrums, yet we don't see people argue against the explanatory power and practical utility of quantum mechanics. Heck, you could even go as far as complaining about all of math based on Goedelian tantrums.

For instance, any theory of consciousness will have to explain the neurological datum of some input-output systems being conscious whereas others aren't (or, assuming zombies don't exist: why they are separated from one another, even though they could be perfectly modeled as a single computational system). The fact that IIT is able to explain this in a rather principled way that preliminarily matches the data is a feature, not a bug. Prediction of zombies cannot be used as a test against any theory, for IIT doesn't attempt to test this. Rather, zombies pop up as an extrapolation after matching the theory to the neuroscience data and pondering that IIT may be correct.

Matthias Michel
8/15/2020 09:00:48 am

Hi,

I wasn't going to post anything here, but since I saw my name I feel like I should say a few words. First of all, Jake I think this is a great post.

Second, I'm not sure where you've read that I find the maths in IIT too advanced. I just think the maths can give the impression that the theory is more developed than it actually is, especially to people from outside the cognitive neuroscience of consciousness. A bad idea wrapped in beautiful maths is still a bad idea.

Meanwhile there's absolutely no empirical evidence that integrated information correlates with consciousness. Without empirical evidence to inform the theory, it'd be a miracle if the measure of integrated information just happened to accidentally measure consciousness. No amount of maths can change the fact that what we need are not axioms, or postulates, but good old empirical evidence.

Cheers,
Matthias Michel

JAKE HANSON
8/14/2020 04:33:50 pm

Hi Isaac,

I appreciate your reply; you make a lot of important points. To be clear, I think that the math of IIT may one day be free of major problems, to the extent that we can all agree what it means to "measure Phi" in a system.

The verificationist quibbles are what I can't get over. I'm fully aware that there is a deep literature on this topic and that behaviorist theories suffer from major epistemological problems as well, such as the Chinese Room argument or the Rule 110 example that you provide. That said, I don't think that problems with behaviorism do anything to mitigate the problems with IIT. It is an unfortunate reality that it is much easier to prove what consciousness isn't rather than what it is. So, to some extent, I sympathize with those who try to push IIT forward despite its logical flaws but, on the other hand, I think the theory needs a redeeming quality in order to justify the time and energy that is spent on it. In quantum mechanics, for example, non-locality and the need for renormalization are major epistemological concerns, but the theory makes empirical predictions that justify its continued use. With IIT, I don't understand what the justification is that has led to its popularity, and, from an outside perspective, it seems the only thing that makes sense is the extravagant rhetoric that it can't actually back up. I could be wrong, however, so please feel free to share what I'm missing.

Best,
Jake

Adam Barrett
8/17/2020 09:43:33 am

Hi, Adam Barrett here. For me, it is just an extremely compelling idea that consciousness *is* the information intrinsic to a physical system, i.e. that which is independent of the frame of reference of any outside observer. If we could put that statement into maths, it would be amazing! Nobody has convinced me it can't be done in theory, but at the same time I don't think we are anywhere close. If we ever do achieve it, the unfolding argument might disappear. Testability of such maths will always be hard though. I liken the endeavour to string theory in that regard. The difference between IIT and string theory, however, is that there are several versions of string theory, and *zero* viable IITs. Wouldn't it be amazing if we did have the first viable mathematical formula for mapping between physical states and phenomenology!

James of Seattle
8/17/2020 11:10:43 am

Hey, Adam. While I think I agree with your sentiment, I would warn against statements like “consciousness *is* X”. The term “consciousness” is like the term “life” in that it has a fuzzy description. No one says “Life is X”. Instead, they talk about life-associated processes, and try to decide which ones are necessary and/or sufficient.

That said, there is an argument that information is a necessary component of a consciousness-type process. More rigorously, mutual information, which is potentially mathematically calculable, might be such a component. For example, an “experience” may be a two-step process wherein the first step is the generation of a representation vehicle. This representation vehicle could have (physical) mutual information with respect to some external physical system. The second step would be a response to this representational vehicle, and this response would determine the “meaning” of the experience. Any given vehicle will have mutual information with respect to more than one system, but the response would determine which external system is being responded to, or, “experienced”.

Robert Kybird
9/28/2020 03:29:29 pm

Feedback is normally a stabilising concept enabling the sustainment of a given signal (information flow). However, individual neuron activity quite easily saturates and becomes tired, and feedback can only accelerate that process. What is needed, therefore, is a parallel-channel mechanism that can maintain a given percept but move it to a different subset of similar neurons in order to maintain the informational content. This is re-entrance rather than feedback per se. It is at the very basis of rehearsing the content of our short-term memory, which must hold such information that can or may be integrated.

Borysław Paulewicz
9/29/2020 11:00:00 am

Sooooo gooood. On top of that, you are also a hiker?

Cheers

Paul L Nunez
1/28/2022 02:29:25 pm

I just discovered this excellent article, and will add this reference to this post:

In Psychology Today, I suggest that, as currently formulated, "Integrated Information Theory" is not well-defined and is not a plausible "theory of consciousness." However, research on brain integration processes (greatly facilitated by white matter connections between regions of neocortex) should become enormously useful in the diagnosis and treatment of white matter disease, with implications for schizophrenia, chronic depression, bipolar disorder, and obsessive-compulsive disorder; such abnormalities have also been linked to developmental problems in children like autism, dyslexia, and attention-deficit hyperactivity disorder.
https://www.psychologytoday.com/us/node/1169703/preview

Algon
1/19/2023 10:02:10 am

What do you think of Scott Aaronson's old article[1] on IIT? It seems like a knock-down argument against IIT, and was published 8 years ago.


[1]https://scottaaronson.blog/?p=1799

Jake Hanson
1/24/2023 03:13:42 pm

Hi Algon,

I think Aaronson's argument was extremely strong. He proved that you either have to accept that a simple lattice of logic gates can be more conscious than a human or IIT is wrong. Unfortunately, most proponents of IIT accepted the former. In light of this, his argument did not significantly derail IIT as a research program, or even force it to make any concessions.

That said, the discourse around Aaronson's argument contained all of the essential elements to recognize the logical contradictions inherent in IIT. In particular, Tononi and others rejected Aaronson's claim that the lattice falsified IIT on the premise that it is our intuition that is wrong, rather than Phi. Yet, the only way to test Phi is against our intuition. The experimental paradigm to test Phi relies on the fact that we associate certain external behaviors such as sleep with lower conscious states; if you don't use these external behaviors as a way to ground the theory, then there is no way to justify whether or not Phi is correct other than the axioms of the theory. So, IIT's experimental paradigm relies on external behavior for validation, but when these behaviors contradict the predictions from the theory, we are told not to trust them and the axioms of IIT become the justification for Phi.

Jessica Yurgel
10/5/2023 04:30:14 pm

Hello Jake,

Thank you so much for this awesome breakdown of the math for IIT. I'm pursuing my degree in Neuroscience and want to go into sleep and consciousness. I'm only a baby bachelor's student right now, but I'll admit I was razzle-dazzled by IIT. I met your wife and she told me about your article and this blog. I agree with James of Seattle that IIT is truly a great concept and just needs to revise its math. It might not happen for a while, and we need solid data for this, but I still think it shows amazing promise for such a difficult thing to study. I wish there was more integration of fields, because if we had multiple people in STEM working on this, there could be real breakthroughs.





    Author

    Jake R. Hanson
    Arizona State University
