Evaluating large language model adaptation strategies for geospatial code generation
Recent advances in Large Language Models (LLMs) offer a promising alternative by enabling code generation from natural language. Despite this progress, however, LLMs still struggle with spatial reasoning, structural fidelity, and robustness across diverse GIS datasets. This thesis systematically compares three LLM adaptation strategies: prompt engineering, retrieval-augmented generation (RAG), an