xLSTM Architectures in Reinforcement Learning
Content: http://hdl.handle.net/10890/60580
Archive: Műegyetem Digitális Archívum
Collection: 1. Scientific papers, publications > Conference collections > BME MIT PhD Minisymposium > BME MIT PhD Minisymposium, 2025, 32nd
Title: xLSTM Architectures in Reinforcement Learning
Creators: Antal, Mátyás; Gézsi, András
Date: 2025-05-22T11:44:22Z; 2025-05-23
Abstract: Long Short-Term Memory (LSTM) architectures have recently seen significant advancements through innovations such as exponential gating and modified memory structures, reigniting interest in their potential for modern sequence-based tasks. While xLSTM models have demonstrated strong performance in language modeling, their suitability for reinforcement learning (RL) tasks has yet to be fully explored. In this work, we investigate the application of xLSTM in RL environments, focusing on classic control tasks that are commonly employed as benchmarks, and compare it against the standard LSTM. This comparison provides a starting point for understanding the differences between xLSTM and LSTM in the context of reinforcement learning.
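The abstract highlights exponential gating as one of the xLSTM innovations. As a purely illustrative aid, the sketch below shows an sLSTM-style recurrent step with exponentially activated input and forget gates, a normalizer state, and a log-domain stabilizer, following the formulation in the xLSTM literature; the function name, parameter layout, and shapes are assumptions for this sketch and are not the authors' implementation.

```python
# Minimal sketch of an sLSTM-style cell with exponential gating (illustrative only;
# names, shapes, and initialization are assumptions, not the paper's code).
import numpy as np

def slstm_step(x, h_prev, c_prev, n_prev, m_prev, params):
    """One recurrent step with exponential input/forget gates.

    x: observation vector, h_prev: previous hidden state,
    c_prev: cell state, n_prev: normalizer state, m_prev: stabilizer state.
    """
    W, R, b = params["W"], params["R"], params["b"]   # stacked for z, i, f, o
    pre = W @ x + R @ h_prev + b
    z_pre, i_pre, f_pre, o_pre = np.split(pre, 4)

    z = np.tanh(z_pre)                      # cell input
    o = 1.0 / (1.0 + np.exp(-o_pre))        # output gate (sigmoid)

    # Exponential gating, stabilized in the log domain to avoid overflow.
    m = np.maximum(f_pre + m_prev, i_pre)   # new stabilizer state
    i_gate = np.exp(i_pre - m)              # stabilized exponential input gate
    f_gate = np.exp(f_pre + m_prev - m)     # stabilized exponential forget gate

    c = f_gate * c_prev + i_gate * z        # cell state update
    n = f_gate * n_prev + i_gate            # normalizer state update
    h = o * (c / n)                         # normalized hidden state
    return h, c, n, m

# Example usage on a random observation sequence (stand-in for a classic
# control environment), with observation size 4 and hidden size 8.
rng = np.random.default_rng(0)
d_in, d_h = 4, 8
params = {
    "W": rng.normal(scale=0.1, size=(4 * d_h, d_in)),
    "R": rng.normal(scale=0.1, size=(4 * d_h, d_h)),
    "b": np.zeros(4 * d_h),
}
h = c = n = np.zeros(d_h)
m = np.zeros(d_h)
for t in range(16):
    x = rng.normal(size=d_in)               # stand-in for an observation
    h, c, n, m = slstm_step(x, h, c, n, m, params)
print("final hidden state:", h)
```

The stabilizer state m keeps the exponential gates numerically bounded while leaving the normalized hidden state unchanged, which is the standard trick paired with exponential gating.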
Language: English
Type: book chapter
Format: application/pdf
Identifier: