Academic Discussion

Evidence- and Model-based Urban Planning: Opportunities and Challenges
Author: Otthein HERZOG (Tongji University, Shanghai, and University of Bremen, Germany)

Urban and regional planning poses many challenges and offers great potential because of the interdisciplinary nature of the many areas that "good" planning outcomes must cover on the way to "good" city operations. In this context, "good" means above all good for the people who will live in the city or district being planned, also with respect to the vastly different innovation cycles encountered in cities. Some examples of cycles that influence city structures: Information and Communication Technology, 5 years; automobiles, 15 years; heating technology, 20 years; building construction, 60 years; city thoroughfares, 80 years; and wastewater infrastructure, 100 years.

On the one hand, a city planner must start from the present needs of the people and the technologies available to satisfy them; on the other hand, city structures must be planned and built as flexibly as possible in order to react to and incorporate future technologies. While the latter can be covered by predictions, e.g., by scenarios created by transdisciplinary experts or, in the shorter term, by predictions based on past data, the current needs of the people and the required technologies can be addressed by evidence-based planning built on requirements acquired from citizens and on all available data sources in a city: mobility and logistics data from public transportation or vehicle frequency counts by road sensors, energy and water consumption data, environmental data such as air quality indicators and greenhouse gas emissions, population health data, urban production data, industry clusters, and public services.

Much progress has been made during the last 20 years in generating and collecting the data needed for urban planning, in determining proper indicators for various urban properties [1], and in using the data, e.g., through graphical representations, statistics, and big data analytics, to determine trends, interrelationships, and dependencies. In this approach, the data, their representations, and the analysis results form the foundation of a specific model for the planning task at hand that resides in the mind of the respective urban planner. Correlation analysis is an especially useful tool in this context because it detects dependencies between different data variables even where causality cannot be deduced.
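As a minimal illustration of such a correlation analysis, the following Python sketch computes pairwise correlations between a few hypothetical urban indicators and a mean AQI value. The column names and figures are invented for illustration only and do not reproduce any real dataset.

```python
import pandas as pd

# Hypothetical urban indicator data; column names and values are
# illustrative assumptions, not real measurements.
df = pd.DataFrame({
    "bus_lines_per_km2":   [1.2, 0.8, 2.1, 1.5, 0.6, 1.9],
    "vehicle_count_daily": [52000, 61000, 38000, 45000, 70000, 40000],
    "industrial_sites":    [14, 22, 6, 11, 25, 8],
    "aqi_mean":            [96, 118, 64, 82, 131, 70],
})

# Pairwise Pearson correlations of each indicator with mean AQI.
# A strong correlation (e.g., vehicle counts vs. AQI) flags a dependency
# worth investigating, but does not by itself establish causality.
print(df.corr(method="pearson")["aqi_mean"].sort_values(ascending=False))
```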

An example of this approach can be found in [2], where city environmental indicators relevant to the air quality conditions of four Chinese cities were first determined, such as highways, the percentage of paved roads, real-time traffic data, industry clusters of different industry types, shopping centers, and public transportation facilities. Correlation analyses were then carried out which determined, e.g., that better public transport correlated with a better Air Quality Index (AQI), that the AQI changes caused by industry clusters varied vastly throughout the day, and that car emissions contributed greatly to an increased AQI. Using the same data, a city-specific cost model could also be defined and, moreover, used to train a Back Propagation Neural Network (BPNN) to provide AQI predictions for the four cities under different assumptions. The knowledge won from the data and incorporated in the BPNN could therefore drive the BPNN as a decision support system, where formerly a simulation system would have had to be programmed and run.
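The following sketch illustrates the principle, not the actual model of [2]: a small feed-forward network trained by back-propagation (here scikit-learn's MLPRegressor) is fitted on synthetic indicator data and then queried under an assumed intervention. All features, values, and the intervention scenario are illustrative assumptions.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical feature matrix: [public transport density, traffic volume,
# industrial sites, hour of day]; target: observed AQI. Real inputs would
# come from the indicator data described above.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(500, 4))
y = 50 + 60 * X[:, 1] + 30 * X[:, 2] - 25 * X[:, 0] + rng.normal(0, 5, 500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A small feed-forward network trained by back-propagation, standing in
# for the BPNN of the cited study.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0),
)
model.fit(X_train, y_train)
print("R^2 on held-out data:", model.score(X_test, y_test))

# Decision support use: predict AQI under an assumed intervention,
# e.g., doubling public transport density in a district.
scenario = X_test.copy()
scenario[:, 0] = np.clip(scenario[:, 0] * 2, 0, 1)
print("Mean predicted AQI before/after:",
      model.predict(X_test).mean(), model.predict(scenario).mean())
```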

This example outlines quite well the trade-offs of conventional programming vs. neural networks (NN): while conventional programming requires successive layers of models formalized through programming languages, finally arriving at a running (and hopefully provably correct) program, this effort is replaced in the NN case by the proper determination of indicators, the subsequent collection of appropriate data representing those indicators, and the (computationally costly) training of the NN. The burden of system implementation has thus shifted away from coding towards the selection of indicators and their related data (the training examples). This means that an inadequate selection of examples can lead to bias, to overfitting, or even to missing parts of the model trained into an NN. Therefore, much of the work that traditionally went into coding must now go into the selection, cleaning, and checking of the examples, indispensable steps to ensure the viability of the approach.
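A minimal sketch of what such example checking could look like in practice, assuming indicator data held in a pandas DataFrame. The `audit_training_data` helper and the district data are hypothetical; they only demonstrate the kinds of checks meant above (missing values, duplicates, uneven coverage).

```python
import pandas as pd

def audit_training_data(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
    """Basic checks before training: missing values, duplicate rows, and
    how evenly the examples cover the groups (e.g., districts) whose
    under-representation would bias the trained network."""
    report = pd.DataFrame({
        "missing": df.isna().sum(),
        "n_unique": df.nunique(),
    })
    print(f"duplicate rows: {df.duplicated().sum()}")
    print("coverage per group:")
    print(df[group_col].value_counts(normalize=True))
    return report

# Hypothetical usage: sensor readings labelled by district. District A is
# heavily over-represented, which the coverage check makes visible.
readings = pd.DataFrame({
    "district": ["A", "A", "A", "A", "B", "C"],
    "aqi": [90, 92, None, 91, 120, 65],
})
print(audit_training_data(readings, "district"))
```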

The same cautionary remarks also apply to Large Language Models (LLMs), at least as far as the data is concerned that is supposed to be the foundation of the training step aimed at knowledge acquisition. What really differentiates them from "ordinary" NNs is the fact that their training can derive knowledge even from natural language texts (and even from conventional program code). This certainly opens the way to "natural" communication as the mode of interaction with computers. However, as the knowledge in LLMs is basically encoded only in very long strings composed of the most likely next token, LLMs do not have the capability to logically deduce knowledge. They are even able to hallucinate answers, natural language texts that do not relate to any facts in their training data and can be plainly wrong! But given (in the best case verified) data and text input for their training, and restricted to application areas with well-determined knowledge bases, LLM technology constitutes the next-generation tool for many applications without the need for tedious coding work at the syntax level, just by the power of bi-directional natural language communication at the semantic level, at least for the human partner.
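One common way to realize such a restriction is to ground the model's answers in a vetted corpus. The sketch below shows the pattern only: `llm_complete` is a hypothetical placeholder rather than a real API, and the naive keyword filter stands in for a proper retrieval component.

```python
# Grounding pattern sketch; `llm_complete` is a hypothetical stand-in for
# whatever LLM API is in use. The pattern, not the call, is the point.

def llm_complete(prompt: str) -> str:
    """Placeholder for a real LLM call (assumption, not a real API)."""
    raise NotImplementedError

def grounded_answer(question: str, knowledge_base: list[str]) -> str:
    # Retrieve only vetted passages (e.g., planning regulations) and
    # instruct the model to answer strictly from them, reducing the
    # risk of hallucinated "facts" outside the verified corpus.
    # (Naive keyword matching here; real systems use proper retrieval.)
    context = "\n".join(
        p for p in knowledge_base
        if any(w in p.lower() for w in question.lower().split())
    )
    prompt = (
        "Answer using ONLY the passages below. "
        "If they do not contain the answer, say so.\n\n"
        f"Passages:\n{context}\n\nQuestion: {question}"
    )
    return llm_complete(prompt)
```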

For urban and regional planning, the LLM approach will be, at least in my view, this application field's biggest step towards fully integrated information technology for decision making: think of feeding all relevant textbooks, rules, laws, etc. for a specific planning subject into an LLM. This will definitely enable more comprehensive planning cycles and, maybe most importantly of all, will enable urban and regional planners to get their arms around the multiple interdependencies between the many aspects of the different planning areas. Moreover, the LLM approach will also enable the acquisition of the knowledge needed for the development of Digital Twins representing all important dynamic aspects of a city, thus bringing LLM technology to city operations as well [3]. In this way, evidence-based (partial and interacting) validated models will become a solid foundation of urban and regional planning tasks as well as of city operations.