Over the last year, a number of companies have brought generative AI into their toolchains. Combining our analysis of these companies with recent "new technology" trends in China, such as the initial rise of native HarmonyOS (Hongmeng) apps, we can also see some new possible directions.

Combining our three-phase framework of LLM as Copilot, LLM as Integrator, and LLM as Facilitator with our internal analysis materials, I would roughly summarize them into six trends:

  1. From single-role assistance to end-to-end assistance.
  2. Knowledge management for assisted decision making.
  3. DevOps facilities for AI applications.
  4. Online fault localization and problem solving.
  5. The emergence of AI-assisted UI design.
  6. Code translation and inter-system translation.

Much of this is ground we have already reached agreement on before, so let's tell the story in reverse order.

Digitalization of R&D forced by generative AI

Before we get to the new trends, one thing we have to mention is the digitalization of R&D. Over the past year, I have talked with the R&D leaders of nearly ten companies about AI-assisted R&D. In practice, what prevents most companies from applying generative AI, beyond model limitations, is the poor level of R&D digitalization.

The first problem we have to face is that standardization has not landed. Simply put, if we look at the problem through maturity levels such as standardization, platformization, and indicator-driven improvement, some organizations are still struggling just to land standardization, let alone achieve indicator-driven improvement. Fortunately, generative AI combined with tooling can make specifications easier to land, which is a potential opportunity to overtake on the curve, provided there is enough determination to push it forward.
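
To make that concrete, here is a minimal sketch (my own, not taken from any tool mentioned in this article) of turning a written specification into an automated review step; the `CallLLM` type and the field names are hypothetical placeholders for whatever model endpoint and review pipeline you actually use.

```typescript
// Minimal sketch: turning a written coding specification into an automated
// review step. `CallLLM` stands in for whatever model endpoint you use.
type CallLLM = (prompt: string) => Promise<string>;

interface SpecReviewInput {
  specExcerpt: string; // the relevant rule(s) from the team's standard
  diff: string;        // the code change under review
}

function buildSpecReviewPrompt({ specExcerpt, diff }: SpecReviewInput): string {
  return [
    "You are a code reviewer. Check the change against the team specification.",
    "Specification excerpt:",
    specExcerpt,
    "Code change (unified diff):",
    diff,
    "List every violation with the rule it breaks, or reply 'no violations'.",
  ].join("\n\n");
}

async function reviewAgainstSpec(callLLM: CallLLM, input: SpecReviewInput): Promise<string> {
  return callLLM(buildSpecReviewPrompt(input));
}
```

Wired into a merge-request pipeline, a check like this turns a specification that nobody reads into feedback that arrives at the moment the code changes.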

In addition to this, the second problem we have to face is knowledge management: there is a great deal of tacit knowledge in the organization (a digression: gossip counts too). The challenges we will encounter are:

  • Knowledge that is undocumented and never made explicit.
  • A lot of outdated knowledge: you cannot tell which documents are stale.
  • A lot of non-textual knowledge, such as a photo of a meeting whiteboard where the handwriting is unreadable.

Simply put, these are part of our knowledge debt.

Code translation and inter-system translation

Scenario 1: Legacy system migration. Generative AI performs well at natural-language translation, and its performance is even more prominent for programming languages. So last year we also analyzed the related capabilities in AutoDev and built a series of legacy-system features around them. Among commercial products, we can also see specialized tools such as IBM watsonx Code Assistant for Z for COBOL-to-Java migration.

Yet how to analyze and plan a legacy system migration remains a complex issue. Existing tools are, more often than not, migrations designed by people and assisted by AI.
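
As a rough illustration of that "designed by people, assisted by AI" pattern, the sketch below translates one human-chosen migration unit at a time, with the target conventions pinned in the prompt. This is an assumption on my part, not how AutoDev or watsonx Code Assistant actually works; `CallLLM` and the field names are hypothetical.

```typescript
// Sketch of human-planned, AI-assisted legacy migration: a person chooses the
// migration unit and writes the target conventions; the model only translates.
type CallLLM = (prompt: string) => Promise<string>;

interface MigrationUnit {
  name: string;              // e.g. a COBOL program or paragraph from the migration plan
  source: string;            // the COBOL source text
  targetConventions: string; // human-written rules: package layout, naming, error handling
}

function buildTranslationPrompt(unit: MigrationUnit): string {
  return [
    `Translate the following COBOL unit "${unit.name}" to Java.`,
    "Follow these project conventions exactly:",
    unit.targetConventions,
    "COBOL source:",
    unit.source,
    "Return only the Java class, with comments mapping each method back to the COBOL paragraphs.",
  ].join("\n\n");
}

async function translateUnit(callLLM: CallLLM, unit: MigrationUnit): Promise<string> {
  return callLLM(buildTranslationPrompt(unit));
}
```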

Scenario 2: Inter-system translation. As more and more large vendors start developing HarmonyOS (Hongmeng) applications, we have seen in practice the advantages of generative AI in this area. Since the UI differences between mobile systems are not that large, some functionality can be migrated by translation. Although we found that generative AI models lack the newer proprietary knowledge (ArkUI, ArkTS, HarmonyOS APIs), combining chain-of-thought prompting and RAG with them can achieve fairly acceptable results.
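
The sketch below shows one possible way to combine retrieval with a two-step "plan first, then translate" prompt to offset the missing ArkUI/ArkTS knowledge. The keyword retriever is a toy stand-in for a real document index, and `CallLLM` is a placeholder; none of this reflects an actual HarmonyOS toolchain.

```typescript
// Sketch: RAG plus a chain-of-thought style two-step prompt for translating
// existing mobile UI code to ArkUI/ArkTS.
type CallLLM = (prompt: string) => Promise<string>;

interface DocSnippet { topic: string; text: string; }

// Toy retriever: in practice this would be a vector search over ArkUI/ArkTS docs.
function retrieve(docs: DocSnippet[], query: string, k = 3): DocSnippet[] {
  return docs
    .map(d => ({ d, score: query.split(/\W+/).filter(w => w && d.text.includes(w)).length }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map(x => x.d);
}

async function translateToArkUI(callLLM: CallLLM, androidSource: string, docs: DocSnippet[]): Promise<string> {
  const context = retrieve(docs, androidSource)
    .map(d => `# ${d.topic}\n${d.text}`)
    .join("\n\n");

  // Step 1: ask for a migration plan first (the "thought chain").
  const plan = await callLLM(
    `Relevant ArkUI/ArkTS documentation:\n${context}\n\n` +
    `Source (Android UI code):\n${androidSource}\n\n` +
    `List, step by step, which ArkUI components and APIs each part should map to.`
  );

  // Step 2: translate, conditioned on the plan and the retrieved docs.
  return callLLM(
    `Documentation:\n${context}\n\nMigration plan:\n${plan}\n\n` +
    `Now produce the equivalent ArkTS/ArkUI code for:\n${androidSource}`
  );
}
```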

The Emergence of AI-Assisted UI Design

AI-generated code needs to incorporate information such as existing specifications in order to produce code that actually works. For back-end development, where Spring reigns supreme, building this generative-AI-friendly architecture is relatively easy. In the front-end space, however, generative AI faces a bigger challenge, because small, medium, and large organizations each have their own brand guidelines, style guides, and design systems.

From the existing offerings, AI-assisted UI design can mainly be categorized into three types:

  1. Prototype generation to assist in requirement communication.
  2. UI design generation combined with low-code platforms.
  3. UI code generation combined with IDE plug-ins.

Considering the complexity of front-end requirements, it is obviously easier to start from the second scenario, while scenario 3 is better suited for novices learning a framework and for developers picking up a new one.
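
For scenario 3, one simple way to keep the generated UI code on-brand is to pin the design system itself into the prompt. The sketch below is illustrative only; the component names and tokens are invented.

```typescript
// Sketch: constrain UI code generation to an organization's design system by
// listing the approved components and tokens in the prompt.
interface DesignSystem {
  components: string[];           // e.g. ["AppButton", "FormField", "Card"]
  tokens: Record<string, string>; // e.g. { "color.primary": "#0052CC" }
}

function buildUIPrompt(requirement: string, ds: DesignSystem): string {
  return [
    "Generate front-end component code for the requirement below.",
    `Only use these components: ${ds.components.join(", ")}.`,
    `Only use these design tokens (never raw values): ${Object.keys(ds.tokens).join(", ")}.`,
    `Requirement: ${requirement}`,
  ].join("\n");
}

// Example usage with a made-up design system.
const prompt = buildUIPrompt("A login form with a primary submit button", {
  components: ["AppButton", "FormField", "Card"],
  tokens: { "color.primary": "#0052CC", "spacing.md": "16px" },
});
```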

Online fault localization and problem solving

Online issue fixing. Even before generative AI, conventional deterministic AI had enabled a great deal of automation: conventional application performance monitoring (APM) tools can map errors reported by online runtimes to the corresponding faulty code. PS: combined with information about the correlation between requirements and code, we can accurately deduce which requirement change caused the impact. With generative AI, online problems can be converted into fix PRs that assist you in resolving them; New Relic, for example, offers a similar feature.
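
A hedged sketch of that flow: an APM-style error record is mapped to the owning source file (plus the requirement, where requirement-to-code traceability exists) and handed to the model to draft a patch. The field names are illustrative and do not reflect New Relic's actual API.

```typescript
// Sketch: turn an online error report into a draft fix suggestion.
type CallLLM = (prompt: string) => Promise<string>;

interface ErrorReport {
  message: string;        // e.g. "NullPointerException in OrderService.calc"
  stackTop: string;       // top stack frame, e.g. "OrderService.java:42"
  requirementId?: string; // filled in if requirement-to-code traceability exists
}

async function draftFix(callLLM: CallLLM, report: ErrorReport, fileSource: string): Promise<string> {
  return callLLM([
    `Production error: ${report.message} at ${report.stackTop}`,
    report.requirementId ? `Likely introduced by requirement ${report.requirementId}.` : "",
    "Relevant source file:",
    fileSource,
    "Propose a minimal patch as a unified diff, with a one-paragraph PR description.",
  ].filter(Boolean).join("\n\n"));
}
```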

Fault localization. Troubleshooting networks and other problems becomes incredibly important in complex systems that contain a large number of subsystems, such as a microservice architecture. Without good tooling, humans often lose critical information somewhere along the way, and AI can help us solve this kind of problem; AWS's AI-assisted network troubleshooting is one example.

Considering that I'm an expert in Dev, not Ops, I won't read too much more into this.

DevOps facilities for AI applications

There are already a large number of online applications that have introduced AI capabilities, such as the Starbucks face-swap campaign, and these AI applications have brought a series of AI infrastructure requirements with them. Therefore, for medium and large organizations, in addition to choosing an appropriate privately deployed model, you also need to build a fast AI DevOps infrastructure to support it.

In addition to the various kinds of monitoring of the large models themselves, we also need to track the operational costs of the models, especially once you start calling third-party APIs, in order to build a better AiBizDevFinGitSecOps system (🐶🐶🐶🐶). Naturally, we need AI to advise us on our AI + finance, such as building caching mechanisms, optimizing prompt length, and so on.
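
As a minimal sketch of those two cost controls, the class below caches responses keyed by the exact prompt and keeps a running cost tally for third-party API calls. The price constant and the rough token estimate are placeholders, not real billing data.

```typescript
// Sketch: a cost-aware LLM client with a naive prompt cache and a spend tally.
type CallLLM = (prompt: string) => Promise<string>;

const PRICE_PER_1K_TOKENS = 0.002; // illustrative; use your provider's real rates

class CostAwareClient {
  private cache = new Map<string, string>();
  private spentUSD = 0;

  constructor(private callLLM: CallLLM) {}

  async complete(prompt: string): Promise<string> {
    const cached = this.cache.get(prompt);
    if (cached !== undefined) return cached; // cache hit: no new API cost

    const answer = await this.callLLM(prompt);
    // Rough token estimate (~4 characters per token) just to keep a running tally.
    const tokens = Math.ceil((prompt.length + answer.length) / 4);
    this.spentUSD += (tokens / 1000) * PRICE_PER_1K_TOKENS;

    this.cache.set(prompt, answer);
    return answer;
  }

  totalSpent(): number {
    return this.spentUSD;
  }
}
```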

Knowledge Management for Decision Making

Knowledge management used to be a headache; now it has become a full-body pain (for lack of a better word). I'm sure you readers already understand generative AI very well:

  • If you do not give it enough information, any acceptable result it produces is a matter of luck.
  • If you give it enough information, it will still manage to ignore some important part of it, just to make you angry.

Angry or not, when you start thinking about adoption, you begin by assuming that once you have an architectural specification, generative AI can assist with architectural decisions. Then you realize that you cannot find an architectural specification that actually meets the requirements. Similar problems arise in other scenarios as well.
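
One hedged way to handle this is to make the missing or stale specification an explicit failure instead of letting the model guess, as in the sketch below. The names and the one-year freshness threshold are purely illustrative.

```typescript
// Sketch: retrieve architecture specs, drop stale ones, and refuse to answer
// when no usable specification remains.
type CallLLM = (prompt: string) => Promise<string>;

interface SpecDoc { title: string; body: string; updatedAt: Date; }

const MAX_AGE_DAYS = 365; // treat anything older than a year as knowledge debt

async function adviseOnArchitecture(callLLM: CallLLM, question: string, specs: SpecDoc[]): Promise<string> {
  const now = Date.now();
  const fresh = specs.filter(
    s => (now - s.updatedAt.getTime()) / 86_400_000 <= MAX_AGE_DAYS
  );

  if (fresh.length === 0) {
    return "No up-to-date architecture specification found; fix the knowledge base first.";
  }

  const context = fresh.map(s => `# ${s.title}\n${s.body}`).join("\n\n");
  return callLLM(
    `Using only the specifications below, answer the architecture question.\n\n` +
    `${context}\n\nQuestion: ${question}`
  );
}
```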

PS (a twist): so you should also consider prioritizing knowledge management, so that when you report back to your leaders, you can reasonably shift the blame.

From single-role assistance to end-to-end assistance

In fact, most of the above is about how AI can move from single-role assistance to end-to-end assistance, just driven by requirements from different scenarios.

The difficulty with end-to-end assistance is not the design of the tool or the prompt itself, but whether the processes and specifications are in place. If there are problems with the processes and specifications themselves, then we need to explore whether there are more appropriate strategies for different scenarios.

Other notes and an AI summary

Of course, there are other AI-assisted R&D scenarios, such as instant assisted problem fixing, that we have discussed in the past.

This article looks at trends in AI-assisted R&D in 2024, with a particular emphasis on the evolution of AI from simply assisting a single role to providing end-to-end assistance. The author begins by mentioning the importance of R&D digitalization for AI adoption and points out the challenges of standardization and knowledge management. He then details six major trends:

  1. From single-role assistance to end-to-end assistance: AI is no longer limited to assisting a single role, but extends to all aspects of the entire R&D process.
  2. Knowledge management for decision support: AI's role in knowledge management has become more important, but it also faces the problems of incomplete information and information selection.
  3. DevOps facilities for AI applications: the introduction of AI applications requires the establishment of an adaptable DevOps infrastructure to support their operation and monitoring.
  4. Online fault localization and problem solving: AI's application in online fault localization and problem solving is also maturing, helping to locate problems quickly and provide solutions.
  5. The emergence of AI-assisted UI design: AI's application in UI design has taken various forms, including assisting in requirement communication, UI design generation on low-code platforms, and UI code generation through IDE plug-ins.
  6. Code translation and inter-system translation: AI's application in code translation and inter-system translation is gradually maturing, especially for legacy system migration and inter-system function migration.