
AI from three different perspectives - AFRY X AI Summit Malmö 2022

On November 17th, 2022, AFRY X hosted an AI event in Malmö. We asked Hampus Londögård, Team Lead Analytics at AFRY, to summarize the day with his reflections and takeaways.

Per Kristan Egseth, EVP & Head of AFRY X, opened the summit by sharing AFRY's journey from being a civil engineering company to becoming a powerful IT company alongside its traditional civil engineering business. AFRY worked with IoT before it was called IoT and with AI before AI was a buzzword, and it has all the expertise one could imagine in Cyber Security, UX, Backend, Mobile & Frontend. Further, because AFRY is not only an IT company, it has a unique strength in creating interdisciplinary teams of excellence: AFRY builds process industries, roads and much more. This knowledge combined with AI is a road to successful AI projects, and AFRY X is ready to help other companies start or continue their digital journey.

The AI Summit centered on implementing successful AI rather than just implementing AI. The keynote speakers each shared their own perspective.

Following the presentations, the panel discussion tied together all the topics presented. Some key topics were the EU's AI Act as a regulation and how AI will impact our day-to-day lives in the future.

Finally, as an ending note: how do you succeed where failure is all too common?

  1. Don't settle for the first hypothesis; improve iteratively
  2. Have a culture that is open to testing things, including failing
  3. Treat AI as a strategy that everyone is part of
  4. Accept failure as part of learning

Read on to find out what our three keynote speakers had to say.

Our Keynote Speakers


Pontus Wärnestål "What is AI used for?"

Pontus focused largely on the design phase of AI and how to include the users.

What differentiates this from a classical agile approach? A design sprint builds mockups that are not real Proof-of-Concepts (PoCs) but simply emulate the expected behavior. This way you get feedback as if on the finished product, while still developing it.

Even more important is why we should think about human-centered design. Pontus suggests that we should not see AI as AI versus human, as we have done historically (e.g. chess bots), but rather as human with AI. It is a tool to enhance humans, not to automate them away.

Further, Pontus describes that the world is not full of tame problems like chess, i.e. problems with rules in a box, but rather full of wicked problems. A wicked problem has no real “stop” or finish, no true solution, and is ever-changing. Wicked problems require interdisciplinary teams.

Finally, Pontus shared the latest findings from the UN, which show that Data & Analysis could light the path forward to saving the environment, which is empowering.

To top everything off, Pontus shared great examples from healthcare where a human-centered design improved all metrics without any drawbacks, compared to pure automation, which had a few drawbacks.

Takeaways

To have a successful AI project you need to include humans, think deeply about the design of your product, and finally have a tight feedback loop that enables new insights.

Reflection

We further believe that we need to think upstream: why do we have this problem to begin with? We also need to build metrics that actually cover what matters to the end user. Interactive and iterative analysis is key.

 

If you want to see the whole speech, please watch the video below:

Anna Petersson, "Business Perspective"

Anna dived straight into the business perspective and how we can focus on People, Planet & Profit at once.

It all started with the grim reality: Sweden ranks #19 in the world for AI, mainly because our government has no real plan regarding AI.

But that doesn’t stop us from building what Anna prefers to call Augmented Intelligence.

Based on how Anna has seen AI implemented in her region, Halland, it impacts all industries from Healthcare to Farming – the range is really wide.

Anna further believes that we deeply need to emphasize trust in the system and somehow build it into the system itself. A few ways this could be done:

 

  1. Build great feedback loops where the user can influence the AI’s predictions and easily override them.
  2. Give the AI a name; in the farming example, all the weed-detection robots were named to make them more “familiar”.
  3. Make it a fun challenge; Anna presented a great example of a municipality that introduced AI to help with administration and, to do this, updated the administrators’ roles to “AI Trainers”.
  4. Interdisciplinary teams are a must and create new perspectives
  5. Let humans & machines work and learn together
  6. AI needs to work for people, otherwise it doesn’t work at all

 

Takeaways

To have a successful AI project you need to make it work for humans, keep a tight and transparent feedback loop, and show the value with or without an instant ROI.

Reflection

Anna hits an important point about building trust in the AI: even if we all understand the gains, we somehow need to make the solution approachable and trustworthy. Human interaction is key to success.

 

If you want to see the whole speech, please watch the video below:

Mathias Lindbro, “AI Leadership”

Mathias presented how you as a leader can introduce AI in your organization.

The first step is to be nimble & agile. It is a leader’s responsibility to turn leads and insights into action, which is what we need to do with AI today.

Mathias Lindbro of NEXTEVO ended the keynotes by presenting how a leader can use AI and why becoming a nimble organization matters.

 

So, what is a Nimble Organization?

  • Distributed Leadership
  • Fluid Teams - teams that exist only for as long as they are required
  • Creative Freedom & Trust
  • Organizational Game Boards - to build innovation
  • Collective Intelligence
  • Distributed Risk Mitigation
  • Data Driven Decision-making

Mathias feels the most important parts, besides being nimble, are:

  1. Explainable AI (XAI) - can be achieved by using Golden Datasets (see the sketch after this list)
  2. Fairness - data is biased; it is something we must accept
  3. Accountability - there needs to be someone accountable for the AI’s actions, to make sure it actually does what it is expected to do; we should not be able to say “that is just what it does”

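As a rough, hypothetical illustration of the golden-dataset idea (not the exact setup presented at the summit), the sketch below checks that a model keeps behaving as expected on a small, curated set of well-understood cases. The records and the `model.predict` interface are assumptions made purely for illustration.

```python
# A minimal "golden dataset" check: a small, curated set of inputs with known,
# expected outputs that the model must keep getting right.
# The records and the model.predict interface are illustrative assumptions.

GOLDEN_DATASET = [
    # (input features, expected label) - hand-picked, well-understood cases
    ({"temperature": 21.0, "vibration": 0.02}, "healthy"),
    ({"temperature": 95.0, "vibration": 0.80}, "failure"),
]

def check_against_golden_dataset(model) -> list[str]:
    """Return human-readable failures; an empty list means the model behaves as expected."""
    failures = []
    for features, expected in GOLDEN_DATASET:
        predicted = model.predict(features)  # assumed interface
        if predicted != expected:
            failures.append(f"{features}: expected {expected!r}, got {predicted!r}")
    return failures
```

Because every case in the golden set is well understood, a failing check points to a concrete, explainable example of where the model deviates from expected behavior.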
And finally, package it all.

Takeaways

To adapt quickly you need to be a nimble organization, which simplified is a very agile organization, and to make sure that AI is both explainable and predictable, and finally that someone is accountable for it.

Reflections

What Mathias is trying to push is great. Accountability is a way to make AI more approachable and means that the implementers will care more about it actually working correctly according to what a user expects, rather than according to some arbitrary metric.

Further, being agile has been shown again and again to be key to moving forward in a fast world. Distributing risks, knowledge and leadership is key to fast decision-making, and backing it with data makes it even better.

 

If you want to see the whole speech, please watch the video below:

Panel Discussion

The panel discussion summarized what all three keynote speakers presented and raised some interesting questions.

AI Project Success

The panel believes that to succeed with AI you need a design approach that improves iteratively, along with accountability and an interdisciplinary team.

One of the issues with AI is that it is hard to show a Return on Investment (ROI) from the get-go; the return comes later, which can sometimes make projects hard to pitch.

Even with the high failure rate (~80% of AI projects fail according to studies) and the difficulty of pitching without a clear ROI, it is important to know that studies also say that 90-95% of industries will be affected by AI, including enormous ones like Healthcare, Sustainability and Military.

Accountability, Regulation & More

Viewing AI through this lens, it is easy to see that we need to manage this beast at a higher level, and that is something that is actively being worked on: the EU AI Act is a proposed European law on AI and one of the first laws on AI by any major regulator. The panel believes this to be key moving forward; we need responsible AI and responsible data collection.

One way to have responsible AI is to introduce accountability for the models; another is to keep users in a tight feedback loop and build human-enhancing AI. You need a great team, mockups and an understanding of how the model will be used.

To build a fair model we need to be aware of bias in the data and work with it somehow; there are tools that help, such as Golden Datasets or Fairlearn.
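As a minimal, hypothetical sketch of what working with Fairlearn could look like (the data below is made up purely for illustration), one common starting point is to compare metrics per group with a MetricFrame:

```python
# Minimal sketch of inspecting fairness with Fairlearn's MetricFrame.
# The labels, predictions and sensitive feature are made up for illustration.
from fairlearn.metrics import MetricFrame, selection_rate
from sklearn.metrics import accuracy_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
gender = ["F", "F", "F", "F", "M", "M", "M", "M"]  # sensitive feature

mf = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=gender,
)

print(mf.overall)       # metrics over the whole dataset
print(mf.by_group)      # the same metrics split per group
print(mf.difference())  # largest gap between groups, a simple fairness signal
```

Large gaps between groups do not prove unfairness on their own, but they are a useful signal of where to look closer.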


Future

In the future we need to see more open data platforms. Data is a big problem, especially high-quality data, and making it publicly available, either freely or for purchase, is a key step moving forward.

We also need to build interdisciplinary teams to create solutions that work and improve the user's life.

We need better trust in AI, which can be built in multiple ways. One is XAI; currently we cannot really explain an AI's decision much better than we could explain one of our own decisions based on how the neurons fired in our brain.

We need to keep users part of the whole process, especially during the design phase, to capture their real needs.

Building Trust 

Trust can be built by having a transparent AI with a human in the loop who can override the model, and by keeping it all in a tight feedback loop.

Dick Max-Hansen moderating an interesting panel discussion with the keynote speakers

Reflection

To build innovation, as a nimble organization is built to do, we need to focus on limiting WIP and time spent on tasks, and open up time to actually innovate. Innovation is not something that is built on demand, nor is it possible if you are under constant pressure.

There needs to be free time in which one can reflect on and think about things, rather than just solving them.

We need to think about feedback loops: with the wrong metrics we can end up doing the opposite of what we wish to achieve, and it is incredibly easy to end up there. To take this one step further, we need to think upstream. AI today focuses a lot on solving the task at hand, but we should rather reimagine the problem and how we can solve it upstream.

Further, to successfully deploy an AI project there needs to be a lot of focus on how the AI model will be used. E.g., if it is an image recognition model: how will it be used? Can it have bias? Does it work in all expected conditions? Might the camera change? Can we detect if the type of data shown to the model changes?
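On that last question, a minimal, hypothetical sketch of one way to flag such a change is a two-sample statistical test comparing a feature's distribution at training time with what is seen in production; the feature and the numbers below are made up for illustration.

```python
# Minimal sketch of detecting data drift on a single feature with a
# two-sample Kolmogorov-Smirnov test. All values are illustrative assumptions.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
training_brightness = rng.normal(loc=0.50, scale=0.10, size=5_000)    # what the model was trained on
production_brightness = rng.normal(loc=0.65, scale=0.10, size=1_000)  # e.g. the camera was replaced

statistic, p_value = ks_2samp(training_brightness, production_brightness)
if p_value < 0.01:  # the threshold is a judgement call
    print(f"Possible data drift detected (KS statistic={statistic:.2f}, p={p_value:.3g})")
```

In practice such checks would run continuously per feature and raise an alert rather than print, but the idea is the same.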

Deployment should be gradual, starting small and expanding with feedback. One example: rather than predicting labels outright, we first propose labels to a user; with time, based on the feedback from that first trial, we start predicting labels from the get-go. And so on, moving up step by step.
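As a rough sketch of how that ramp-up could be wired (the thresholds, the confidence input and the decision rule are assumptions, not anything presented at the summit):

```python
# Minimal sketch of a gradual rollout: start by proposing labels for a human to confirm,
# and only auto-apply a label once users have agreed with the model often enough.
# The threshold values and the confidence input are illustrative assumptions.

AUTO_APPLY_CONFIDENCE = 0.95   # only auto-apply very confident predictions...
MIN_APPROVAL_RATE = 0.90       # ...and only once users accepted 90% of past proposals

def decide_action(predicted_confidence: float, feedback_log: list[bool]) -> str:
    """Decide whether a single prediction is auto-applied or merely proposed to a user."""
    approval_rate = sum(feedback_log) / len(feedback_log) if feedback_log else 0.0
    if predicted_confidence >= AUTO_APPLY_CONFIDENCE and approval_rate >= MIN_APPROVAL_RATE:
        return "auto-apply"        # later stage of the rollout: predict from the get-go
    return "propose-to-user"       # first stage: the user confirms or corrects the label

# Early on every prediction is only a proposal; the user's accept/reject answers
# are what eventually unlock auto-apply.
print(decide_action(0.97, []))                          # -> "propose-to-user" (no track record yet)
print(decide_action(0.97, [True] * 95 + [False] * 5))   # -> "auto-apply"
```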

To conclude: a very exciting day, and we look forward to further AI projects and to helping the industry move forward on their digital journeys.

 

Watch the entire panel discussion below