OK, so: since there is an adjunction between the theories of a logic and the models of those theories, maybe there is a way to meld probability theory and logic using category theory / categorical logic? As far as I have read about artificial intelligence, machine learning is much easier to do with probability theory, and it is also easy to generate random samples from a learned distribution. Probabilities would provide the creativity part of true intelligence, and logic would be used to structure what the AI agent has learned.
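The "sample from a learned distribution" part is concrete enough to sketch. Here is a minimal toy example (the function names are my own placeholders, not any particular library's API) that fits a categorical distribution to observed symbols and then draws novel samples from it:

```python
import random
from collections import Counter

def learn_distribution(observations):
    """Estimate a categorical distribution from observed symbols (the 'learning' step)."""
    counts = Counter(observations)
    total = sum(counts.values())
    return {symbol: n / total for symbol, n in counts.items()}

def sample(dist, k=5, rng=random):
    """Draw k symbols from the learned distribution (the 'creative' step)."""
    symbols = list(dist)
    weights = [dist[s] for s in symbols]
    return rng.choices(symbols, weights=weights, k=k)

dist = learn_distribution(["a", "a", "b", "c", "a", "b"])
print(dist["a"])          # 0.5
print(sample(dist, k=3))  # e.g. ['a', 'b', 'a']
```

Of course a real agent would learn something far richer than symbol frequencies, but the shape is the same: estimate, then sample.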

As I see it, the end goal of this idea is an algorithm based on the syntax-semantics adjunction that would learn (semantics -> syntax) and then generate (syntax -> semantics) hypotheses about a subject, then test them and learn by reinforcement. Basically, the end goal would be a general algorithm for the scientific method, which sounds crazy, but I've been thinking about this for a few years and just need to let it out in the open, since I don't yet have the knowledge or skills to work out whether the idea is even semi-feasible.
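The loop above (learn, generate a hypothesis, test it, reinforce) can at least be written down as a skeleton. Everything here is a placeholder of my own invention, just to make the shape of the idea explicit, not an actual algorithm:

```python
import random

def scientific_method_loop(observations, generate_hypothesis, test, steps=10):
    """Hypothetical outline: repeatedly generate hypotheses from data,
    test them, and keep the ones that survive (the 'reinforcement' step)."""
    theory = set()  # accepted hypotheses: the 'syntax' learned so far
    for _ in range(steps):
        hypothesis = generate_hypothesis(observations, theory)  # syntax -> semantics
        reward = test(hypothesis, observations)                 # run the experiment
        if reward > 0:
            theory.add(hypothesis)  # reinforce what survives testing
    return theory

# Toy instantiation: data are numbers, a 'hypothesis' is a candidate value,
# and the 'experiment' accepts values at or below the sample mean.
data = [1, 2, 3, 4, 5]
gen = lambda obs, th: random.choice(obs)
experiment = lambda h, obs: 1 if h <= sum(obs) / len(obs) else -1
theory = scientific_method_loop(data, gen, experiment, steps=20)
print(theory)  # some subset of {1, 2, 3}
```

The hard part, obviously, is that in the real idea `generate_hypothesis` and `test` would have to come from the syntax-semantics adjunction and from actual experiments, not from toy lambdas.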

Also, if I'm thinking straight, then maybe we could use the Curry-Howard isomorphism and metaprogramming to write an AI agent that would improve itself, but this is bordering on sci-fi, and that's why I'm writing this: to find out whether or not I'm yet another crackpot.