Journal of Artificial Intelligence and Soft Computing Research
Volume 11 (2021): Issue 1 (January 2021)
Open Access
An Optimized Parallel Implementation of Non-Iteratively Trained Recurrent Neural Networks
Julia El Zini, Yara Rizk and Mariette Awad | Dec 03, 2020
Published Online: Dec 03, 2020
Page range: 33 - 50
Received: May 07, 2020
Accepted: Sep 14, 2020
DOI: https://doi.org/10.2478/jaiscr-2021-0003
Keywords: GPU implementation, parallelization, Recurrent Neural Network (RNN), Long-short Term Memory (LSTM), Gated Recurrent Unit (GRU), Extreme Learning Machines (ELM), non-iterative training
© 2021 Julia El Zini et al., published by Sciendo
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Julia El Zini, Department of Electrical and Computer Engineering, American University of Beirut
Yara Rizk, Department of Electrical and Computer Engineering, American University of Beirut
Mariette Awad, Department of Electrical and Computer Engineering, American University of Beirut