AI should be regulated now posted on 05 February 2024

I lack the context and technical expertise to hold a strong opinion here, but I think AI should be regulated now. I don’t know exactly in what shape or form, but here are some (hopefully interesting) thoughts based on my background.

Let’s start with some common ground. We need some regulation: it is unreasonable for someone to use AI to impersonate someone else. From there, the question is more about what, how, and when we should regulate.

The usual arguments I hear against regulating AI now are:

  • AI is a fast-moving field, and we shouldn’t impair its growth, so that humanity can better benefit from it. I actually disagree with this statement: regulations are also meant to steer the growth of a field. For example, regulations and changes around user data handling (e.g. GDPR, or Google phasing out third-party cookies in Chrome) came a bit too late: there were, and still are, too many questionable/creepy tracking systems.
  • It is already too late to regulate the field because too much data has been used for training, and we cannot go back (without having to restart from scratch). I think the fact that it’s complex to figure out how to regulate AI doesn’t mean it shouldn’t be regulated; this was one of the arguments against GDPR. Regulations can have transition periods, and companies can adapt (though the longer we wait, the harder it might be to become compliant).

A few more interesting thoughts on my end:

  • From my time at YouTube, I learned that music rights (among others) are very complex, with many quirky rules. As much as I think music rights aren’t necessarily fair today, I think AI should try to find its fair place within them.
  • I also do not believe “regulations” should be set in courts (e.g. see how the New York Times is suing OpenAI/Microsoft). These topics are so complex, with so many ramifications, that we should carefully consider all of them rather than focusing on a single specific context.
  • More research is needed on forgetting training inputs, as none of the existing solutions are quite satisfactory or practical from an industry perspective. That said, I’m not convinced forgetting inputs is necessarily the right way forward: statistical analysis of the inputs/outputs might be a more interesting and practical solution for most cases.
  • AI alignment is a field that would benefit from more transparency and public funding. Making AI safer shouldn’t be a competitive edge (at least not yet).

This is an incomplete thought/opinion, and I’m very much interested in hearing more data points, other things to consider, and different points of view.

LinkedIn post