
Daily Papers

by AK and the research community

Feb 20

MSWEP V3: Machine Learning-Powered Global Precipitation Estimates at 0.1° Hourly Resolution (1979-Present)

We introduce Version 3 (V3) of the gridded near-real-time Multi-Source Weighted-Ensemble Precipitation (MSWEP) product -- the first fully global, historical, machine-learning-powered precipitation (P) dataset, developed to meet the growing demand for timely and accurate P estimates amid escalating climate challenges. MSWEP V3 provides hourly data at 0.1° resolution from 1979 to the present, continuously updated with a latency of approximately two hours. Development follows a two-stage process. First, baseline P fields are generated using machine learning model stacks that integrate satellite- and (re)analysis-based P and air-temperature products, along with static variables. The models are trained on hourly and daily observations from 15,959 P gauges worldwide. Second, these baseline P fields are corrected using daily and monthly gauge observations from 57,666 and 86,000 stations globally, respectively. To assess MSWEP V3's baseline performance, we evaluated 19 (quasi-)global gridded P products -- including both uncorrected and gauge-based products -- using observations from an independent set of 15,958 gauges excluded from the first training stage. The MSWEP V3 baseline achieved a median daily Kling-Gupta Efficiency (KGE) of 0.69, outperforming all evaluated products. Other uncorrected products achieved median daily KGE values of 0.61 (ERA5), 0.46 (IMERG-L V7), 0.38 (GSMaP V8), and 0.31 (CHIRP). Leave-one-out cross-validation showed that the daily gauge correction improves the median daily correlation by 0.09, a gain constrained by the already strong baseline performance. We anticipate that MSWEP V3 -- accessible at www.gloh2o.org/mswep -- will enable more reliable monitoring, forecasting, and management of water-related risks in a variable and changing climate.

  • 15 authors · Feb 1
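
The headline skill metric in this abstract is the Kling-Gupta Efficiency, which combines correlation, bias, and variability into a single score. As a point of reference, here is a minimal Python sketch of the standard 2009 KGE formulation; the paper does not state which KGE variant it uses, and the sample gauge series below is hypothetical.

```python
import numpy as np

def kge(sim, obs):
    """Kling-Gupta Efficiency (Gupta et al., 2009):
    KGE = 1 - sqrt((r - 1)^2 + (alpha - 1)^2 + (beta - 1)^2),
    where r is the Pearson correlation, alpha the ratio of standard
    deviations (sim/obs), and beta the ratio of means (sim/obs).
    """
    sim = np.asarray(sim, dtype=float)
    obs = np.asarray(obs, dtype=float)
    mask = np.isfinite(sim) & np.isfinite(obs)  # drop missing values
    sim, obs = sim[mask], obs[mask]

    r = np.corrcoef(sim, obs)[0, 1]   # linear correlation
    alpha = sim.std() / obs.std()     # variability ratio
    beta = sim.mean() / obs.mean()    # bias ratio
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

# Hypothetical daily precipitation (mm) at one gauge
obs = np.array([0.0, 2.1, 5.3, 0.4, 0.0, 12.7, 3.2])
sim = np.array([0.1, 1.8, 6.0, 0.2, 0.0, 10.9, 4.0])
print(f"KGE = {kge(sim, obs):.2f}")
```

A KGE of 1 indicates a perfect match, so the product comparison in the abstract (0.69 vs. 0.61, 0.46, ...) is read directly on this scale.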

SeaLLMs 3: Open Foundation and Chat Multilingual Large Language Models for Southeast Asian Languages

Large Language Models (LLMs) have shown remarkable abilities across various tasks, yet their development has predominantly centered on high-resource languages like English and Chinese, leaving low-resource languages underserved. To address this disparity, we present SeaLLMs 3, the latest iteration of the SeaLLMs model family, tailored for Southeast Asian languages. This region, characterized by its rich linguistic diversity, has lacked adequate language technology support. SeaLLMs 3 aims to bridge this gap by covering a comprehensive range of languages spoken in this region, including English, Chinese, Indonesian, Vietnamese, Thai, Tagalog, Malay, Burmese, Khmer, Lao, Tamil, and Javanese. Leveraging efficient language enhancement techniques and a specially constructed instruction tuning dataset, SeaLLMs 3 significantly reduces training costs while maintaining high performance and versatility. Our model excels in tasks such as world knowledge, mathematical reasoning, translation, and instruction following, achieving state-of-the-art performance among similarly sized models. Additionally, we prioritized safety and reliability by addressing both general and culture-specific considerations and incorporated mechanisms to reduce hallucinations. This work underscores the importance of inclusive AI, showing that advanced LLM capabilities can benefit underserved linguistic and cultural communities.

  • 12 authors · Jul 28, 2024
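
For readers who want to try the model, here is a minimal sketch of querying a SeaLLMs 3 chat checkpoint with the Hugging Face transformers library. The checkpoint name is an assumption based on the SeaLLMs naming convention, and the Indonesian prompt is an arbitrary example; consult the official model card for exact usage.

```python
# Minimal sketch of chatting with a SeaLLMs 3 checkpoint via transformers.
# Requires: pip install transformers accelerate
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SeaLLMs/SeaLLMs-v3-7B-Chat"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {"role": "user",
     "content": "Terjemahkan ke bahasa Inggris: Selamat pagi, apa kabar?"}
]
# Build the model's chat prompt and generate a reply
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)

# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```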