
The executive order calls on artificial intelligence (AI) developers to share model training and safety testing results before public releases, to take measures to prevent discrimination, and to strengthen data privacy and security practices. It also directs federal agencies to establish a common framework and process governing the adoption and procurement of commercial AI systems.


The AI executive order comes at a time when US large-cap tech shares are already under pressure from elevated government bond yields, geopolitical risks, and lackluster earnings and guidance. It also follows steps taken earlier this month by the US government to close loopholes and tighten restrictions on China’s access to advanced AI chip and semiconductor exports, a move that weighed on semiconductor stocks.


But while overly onerous government regulation can be a drag on industry, we don’t believe the latest moves will stifle AI innovation.


Standardized rules and safety systems can help accelerate commercial adoption of disruptive tech. Standardized processes around government and contractor AI adoption should speed up the use of AI copilot and generative tools in the workplace, helping to lift direct AI revenue growth. New chief AI officer positions, and an easier immigration path for AI experts, should offer another tailwind. On regulation, we think that establishing consensus rules of the road now, while adoption and capabilities are still at an early stage, has its own merits. The alternative, in which very rapid growth is suddenly cut down by heavy regulation (examples include cryptocurrencies and China’s e-commerce markets), can lead to potentially lasting damage.


Investors need to be ready for a multipolar AI world. The executive order directs the US State and Commerce departments to establish new standards for international AI partnerships, and it requires AI companies to disclose the protective measures they take against espionage, cyberattacks, and digital subversion. China, facing US-led curbs on its access to advanced AI chips, is building its own AI ecosystem, regulations, and export controls. It is not yet clear how other countries will approach AI regulation, but the UK AI safety summit this week may offer an early barometer of progress toward a regulatory consensus.


There are limits to how far the White House can regulate AI. The Biden directive is more carrot than stick, relying on the commercial appeal of government spending and contracts in the absence of a more meaningful enforcement mechanism. By contrast, the more stringent AI chip export controls and sanctions earlier this month weighed heavily on affected stocks. Real controls would require Congress to pass a comprehensive AI bill, which looks unlikely given widely diverging partisan views.


So we wouldn’t take this step-up in AI regulatory scrutiny as a negative for the sector. Given recent selling pressure, we think investors considering adding exposure can look to beaten-down AI beneficiaries, such as global semiconductor stocks. Recent tech results confirmed robust AI infrastructure spending ahead and support our expectation for a sharp recovery in revenue next year. With AI demand broadening, we continue to believe mid-cycle segments like software and internet offer the best risk-reward within global tech. We anticipate that new AI-enhanced productivity tools, such as copilots and generative design tools, will drive adoption in the coming quarters and boost direct AI revenue.


Read the original report: Will new US directive undermine AI innovation?, 31 October 2023.