AI learns to see

"You were made to live and love with your whole heart. It's time to show up and be seen."
Brené Brown
Daily insights to generate alpha for wealth, and for life

OpenAI learns to think with images
Just days after releasing GPT-4.1, OpenAI has launched two new models: o3 and o4-mini.
o3 is their most advanced reasoning model yet. It excels at coding, math, and science tasks. o4-mini offers similar capabilities at a lower cost.
What makes these models special?
Upload a whiteboard sketch or diagram – even a messy one – and they'll understand it. They can also modify images as part of their reasoning process.
For the first time, OpenAI's reasoning models can use all ChatGPT tools together. They can browse the web and generate images while solving complex problems. This represents a significant step toward independent AI action.
OpenAI claims this combination of advanced reasoning with full tool access delivers "significantly stronger performance" on both academic benchmarks and real-world tasks.
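For developers, the same capability is reachable through the API. Below is a minimal sketch of asking o3 to reason over a whiteboard photo using OpenAI's Python SDK; the model name and the image URL are illustrative and may differ from what your account can access:

```python
# Minimal sketch: asking o3 to reason over a whiteboard photo.
# Assumes the official `openai` Python SDK; the model name and the
# image URL are illustrative, not guaranteed to match your access.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o3",  # assumption: the reasoning model is exposed under this name
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Explain the system sketched on this whiteboard."},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/whiteboard.jpg"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```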
Alongside these models, the company is releasing Codex CLI. This new coding agent gives developers a minimal interface to connect OpenAI's models with their local code. It works with o3 and o4-mini now, with GPT-4.1 support coming soon.
This release marks a shift in OpenAI's strategy. CEO Sam Altman previously said o3 wouldn't ship as a standalone product. He reversed course in April, explaining that this approach would ultimately make GPT-5 "much better than originally thought."
ChatGPT Plus, Pro, and Team users can access o3 and o4-mini immediately. An even more powerful o3-pro will launch for Pro subscribers in the coming weeks.

A robot learning to navigate the way children do
Why it matters
The combination of visual understanding with reasoning capabilities opens up possibilities for AI to work more effectively in domains where information is naturally visual rather than textual, like a robot learning to navigate complex spaces safely on its own.
It is a meaningful step toward more versatile AI systems that can interact with the world in ways that more closely match human cognitive abilities.
Example use cases range from engineers sketching rough designs on whiteboards that AI can interpret, refine, and even simulate, to automated analysis of complex visual financial reports, such as the portfolio graph in the next section.

We publish one portfolio for every weekday, Mon-Fri, and track its progress from a starting value of $100K over the past 14 years to today.
The Monday post each week tracks the Monday portfolio, the Tuesday post tracks the Tuesday portfolio, and so on. All data is live.
Portfolio ID - Tuesday.
Current holdings for this portfolio - AVGO, PM, ABT
How does this portfolio work? Do 2 simple things:
First - Rebalance at the end of the month (sell).
Second - Buy the 3 stocks that were picked (not investment advice).
In some months no change is needed, if a stock from the previous month remains a pick.
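For concreteness, here is a minimal sketch of that monthly routine in Python. The function name, inputs, and output format are hypothetical; this is illustration only, not investment advice:

```python
# Sketch of the end-of-month rebalance rule described above.
# `current_holdings` and `new_picks` are hypothetical inputs;
# illustration only, not investment advice.
def rebalance(current_holdings: set[str], new_picks: set[str]) -> list[str]:
    """Return the trade list for the end-of-month rebalance."""
    trades = []
    for ticker in sorted(current_holdings - new_picks):
        trades.append(f"SELL {ticker}")  # step 1: sell what dropped out
    for ticker in sorted(new_picks - current_holdings):
        trades.append(f"BUY {ticker}")   # step 2: buy the new picks
    return trades                        # empty list means no change this month

print(rebalance({"AVGO", "PM", "ABT"}, {"AVGO", "PM", "ABT"}))  # [] -> no change
```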
| Portfolio ID | Gain - 1 Month | Gain - 3 Months | Gain - 1 Year | Gain - 3 Years |
|---|---|---|---|---|
| Tue | -6% | -8% | 5% | 73% |
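To put the 3-year figure in context, a 73% total gain works out to roughly 20% per year when compounded. A quick back-of-envelope check:

```python
# Back-of-envelope: annualize the 3-year total gain from the table above.
total_gain = 0.73                             # 73% over 3 years
annualized = (1 + total_gain) ** (1 / 3) - 1  # compound annual growth rate
print(f"{annualized:.1%}")                    # -> 20.0% per year
```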



"Every action you take is a vote for the type of person you wish to become."
James Clear
PS: If you would like to read more from James Clear - here is how you can do it

Generative AI is learning to spy for the US military

This microwave popcorn popper with temperature-safe glass is blowing up in popularity. Do you need it? No. You might want it. Go get it.