TLDR:
- Ilya Sutskever launches Safe Superintelligence, focusing on safe AI research
- OpenAI co-founder aims to prioritize safety over profit, contrasting with current AI trends
A new startup, Safe Superintelligence, led by OpenAI co-founder Ilya Sutskever, is taking a stand for safe AI research. Sutskever emphasizes building artificial general intelligence with safety as the foundation, framing it as a matter of nuclear-style safety rather than ordinary trust and safety. The venture aims to steer clear of the competitive commercial AI race and focus solely on research and development.
Safe Superintelligence was formed in response to concerns about the direction of AI research at OpenAI, particularly around safety measures. The startup's safety-first approach is a departure from the profit-driven strategies of many AI companies today. Despite uncertainty about funding and investor interest, Sutskever remains committed to his vision and promises not to pivot from the startup's core mission.
While some investors see merit in the noble goal of creating safe AGI, they question whether a single group can realistically stay ahead of the global AI race. Still, Sutskever's dedication to principle and his effort to put safety first in AI development deserve acknowledgment in a landscape dominated by profit-driven motives.