Distributed Systems and Parallel Computing
No matter how powerful individual computers become, there are still reasons to harness the power of multiple computational units, often spread across large geographic areas. Sometimes this is motivated by the need to collect data from widely dispersed locations (e.g., web pages from servers, or sensors for weather or traffic). Other times it is motivated by the need to perform enormous computations that simply cannot be done by a single CPU.
From our company’s beginning, Google has had to deal with both issues in our pursuit of organizing the world’s information and making it universally accessible and useful. We continue to face many exciting distributed systems and parallel computing challenges in areas such as concurrency control, fault tolerance, algorithmic efficiency, and communication. Some of our research involves answering fundamental theoretical questions, while other researchers and engineers are engaged in the construction of systems to operate at the largest possible scale, thanks to our hybrid research model.
Recent Publications
Some of our teams.
Algorithms & optimization
Graph mining
Network infrastructure
System performance
We're always looking for more talented, passionate people.
Journal of Parallel and Distributed Computing
Subject Area and Category
- Artificial Intelligence
- Computer Networks and Communications
- Hardware and Architecture
- Software
- Theoretical Computer Science
Publisher: Academic Press Inc.
Publication type: Journal
ISSN: 0743-7315, 1096-0848
The set of journals has been ranked according to their SJR and divided into four equal groups, or quartiles. Q1 (green) comprises the quarter of the journals with the highest values, Q2 (yellow) the second highest, Q3 (orange) the third highest, and Q4 (red) the lowest values.
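For illustration, the quartile split is just a rank-based division of the ordered list of journals. A minimal sketch (with invented journal names and scores, not SCImago's data or code):

```python
# Toy illustration of splitting journals into quartiles by a ranking indicator.
# The journal names and scores below are invented for the example.

def assign_quartiles(scores):
    """Map each journal to Q1..Q4 by its rank (higher score = better quartile)."""
    ranked = sorted(scores, key=scores.get, reverse=True)  # best score first
    n = len(ranked)
    quartiles = {}
    for rank, journal in enumerate(ranked):
        q = min(4, rank * 4 // n + 1)   # first quarter of the ranking -> Q1, last -> Q4
        quartiles[journal] = f"Q{q}"
    return quartiles

example = {"Journal A": 1.19, "Journal B": 0.52, "Journal C": 2.40, "Journal D": 0.31}
print(assign_quartiles(example))   # {'Journal C': 'Q1', 'Journal A': 'Q2', ...}
```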
Category | Year | Quartile |
---|---|---|
Artificial Intelligence | 1999 | Q2 |
Artificial Intelligence | 2000 | Q2 |
Artificial Intelligence | 2001 | Q3 |
Artificial Intelligence | 2002 | Q3 |
Artificial Intelligence | 2003 | Q3 |
Artificial Intelligence | 2004 | Q2 |
Artificial Intelligence | 2005 | Q2 |
Artificial Intelligence | 2006 | Q2 |
Artificial Intelligence | 2007 | Q2 |
Artificial Intelligence | 2008 | Q2 |
Artificial Intelligence | 2009 | Q2 |
Artificial Intelligence | 2010 | Q2 |
Artificial Intelligence | 2011 | Q2 |
Artificial Intelligence | 2012 | Q3 |
Artificial Intelligence | 2013 | Q3 |
Artificial Intelligence | 2014 | Q2 |
Artificial Intelligence | 2015 | Q2 |
Artificial Intelligence | 2016 | Q2 |
Artificial Intelligence | 2017 | Q2 |
Artificial Intelligence | 2018 | Q2 |
Artificial Intelligence | 2019 | Q2 |
Artificial Intelligence | 2020 | Q2 |
Artificial Intelligence | 2021 | Q1 |
Artificial Intelligence | 2022 | Q2 |
Artificial Intelligence | 2023 | Q2 |
Computer Networks and Communications | 1999 | Q2 |
Computer Networks and Communications | 2000 | Q2 |
Computer Networks and Communications | 2001 | Q2 |
Computer Networks and Communications | 2002 | Q3 |
Computer Networks and Communications | 2003 | Q2 |
Computer Networks and Communications | 2004 | Q2 |
Computer Networks and Communications | 2005 | Q2 |
Computer Networks and Communications | 2006 | Q2 |
Computer Networks and Communications | 2007 | Q2 |
Computer Networks and Communications | 2008 | Q2 |
Computer Networks and Communications | 2009 | Q2 |
Computer Networks and Communications | 2010 | Q2 |
Computer Networks and Communications | 2011 | Q2 |
Computer Networks and Communications | 2012 | Q2 |
Computer Networks and Communications | 2013 | Q2 |
Computer Networks and Communications | 2014 | Q2 |
Computer Networks and Communications | 2015 | Q1 |
Computer Networks and Communications | 2016 | Q1 |
Computer Networks and Communications | 2017 | Q2 |
Computer Networks and Communications | 2018 | Q2 |
Computer Networks and Communications | 2019 | Q2 |
Computer Networks and Communications | 2020 | Q1 |
Computer Networks and Communications | 2021 | Q1 |
Computer Networks and Communications | 2022 | Q1 |
Computer Networks and Communications | 2023 | Q1 |
Hardware and Architecture | 1999 | Q2 |
Hardware and Architecture | 2000 | Q2 |
Hardware and Architecture | 2001 | Q2 |
Hardware and Architecture | 2002 | Q3 |
Hardware and Architecture | 2003 | Q2 |
Hardware and Architecture | 2004 | Q2 |
Hardware and Architecture | 2005 | Q2 |
Hardware and Architecture | 2006 | Q2 |
Hardware and Architecture | 2007 | Q2 |
Hardware and Architecture | 2008 | Q2 |
Hardware and Architecture | 2009 | Q2 |
Hardware and Architecture | 2010 | Q2 |
Hardware and Architecture | 2011 | Q2 |
Hardware and Architecture | 2012 | Q2 |
Hardware and Architecture | 2013 | Q2 |
Hardware and Architecture | 2014 | Q2 |
Hardware and Architecture | 2015 | Q1 |
Hardware and Architecture | 2016 | Q1 |
Hardware and Architecture | 2017 | Q1 |
Hardware and Architecture | 2018 | Q2 |
Hardware and Architecture | 2019 | Q2 |
Hardware and Architecture | 2020 | Q1 |
Hardware and Architecture | 2021 | Q1 |
Hardware and Architecture | 2022 | Q1 |
Hardware and Architecture | 2023 | Q1 |
Software | 1999 | Q2 |
Software | 2000 | Q2 |
Software | 2001 | Q2 |
Software | 2002 | Q3 |
Software | 2003 | Q2 |
Software | 2004 | Q2 |
Software | 2005 | Q2 |
Software | 2006 | Q2 |
Software | 2007 | Q3 |
Software | 2008 | Q2 |
Software | 2009 | Q2 |
Software | 2010 | Q2 |
Software | 2011 | Q2 |
Software | 2012 | Q2 |
Software | 2013 | Q2 |
Software | 2014 | Q2 |
Software | 2015 | Q2 |
Software | 2016 | Q2 |
Software | 2017 | Q2 |
Software | 2018 | Q2 |
Software | 2019 | Q2 |
Software | 2020 | Q2 |
Software | 2021 | Q1 |
Software | 2022 | Q1 |
Software | 2023 | Q1 |
Theoretical Computer Science | 1999 | Q2 |
Theoretical Computer Science | 2000 | Q2 |
Theoretical Computer Science | 2001 | Q2 |
Theoretical Computer Science | 2002 | Q4 |
Theoretical Computer Science | 2003 | Q3 |
Theoretical Computer Science | 2004 | Q2 |
Theoretical Computer Science | 2005 | Q3 |
Theoretical Computer Science | 2006 | Q2 |
Theoretical Computer Science | 2007 | Q3 |
Theoretical Computer Science | 2008 | Q2 |
Theoretical Computer Science | 2009 | Q3 |
Theoretical Computer Science | 2010 | Q3 |
Theoretical Computer Science | 2011 | Q3 |
Theoretical Computer Science | 2012 | Q3 |
Theoretical Computer Science | 2013 | Q3 |
Theoretical Computer Science | 2014 | Q3 |
Theoretical Computer Science | 2015 | Q2 |
Theoretical Computer Science | 2016 | Q2 |
Theoretical Computer Science | 2017 | Q2 |
Theoretical Computer Science | 2018 | Q3 |
Theoretical Computer Science | 2019 | Q3 |
Theoretical Computer Science | 2020 | Q2 |
Theoretical Computer Science | 2021 | Q1 |
Theoretical Computer Science | 2022 | Q1 |
Theoretical Computer Science | 2023 | Q1 |
The SJR is a size-independent prestige indicator that ranks journals by their 'average prestige per article'. It is based on the idea that 'all citations are not created equal'. SJR is a measure of the scientific influence of journals that accounts for both the number of citations received by a journal and the importance or prestige of the journals from which those citations come. It measures the scientific influence of the average article in a journal; it expresses how central to the global scientific discussion an average article of the journal is.
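The exact SJR algorithm is more involved than this description, but its core idea — prestige that flows along citations and is normalized per article — can be illustrated with a toy prestige iteration. The three-journal citation matrix and document counts below are invented; this is not the real SJR computation:

```python
import numpy as np

# Toy "prestige per article" sketch: a journal gains prestige from citations it
# receives, weighted by the prestige of the citing journal, then normalized by
# how many articles it publishes. Invented data, NOT the actual SJR formula.

C = np.array([[0., 3., 1.],   # C[i, j] = citations from journal i to journal j
              [2., 0., 4.],
              [1., 2., 0.]])
docs = np.array([50., 120., 80.])              # articles published by each journal

out_share = C / C.sum(axis=1, keepdims=True)   # each journal spreads its prestige over its citations
prestige = np.ones(3) / 3
for _ in range(100):
    incoming = out_share.T @ prestige          # prestige received via citations
    per_article = incoming / docs              # "average prestige per article"
    prestige = per_article / per_article.sum() # rescale each round

print(prestige.round(3))   # relative per-article prestige of the three toy journals
```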
Year | SJR |
---|---|
1999 | 0.358 |
2000 | 0.436 |
2001 | 0.444 |
2002 | 0.312 |
2003 | 0.459 |
2004 | 0.499 |
2005 | 0.489 |
2006 | 0.490 |
2007 | 0.442 |
2008 | 0.586 |
2009 | 0.489 |
2010 | 0.509 |
2011 | 0.485 |
2012 | 0.397 |
2013 | 0.437 |
2014 | 0.548 |
2015 | 0.614 |
2016 | 0.597 |
2017 | 0.502 |
2018 | 0.417 |
2019 | 0.525 |
2020 | 0.638 |
2021 | 1.289 |
2022 | 1.158 |
2023 | 1.187 |
Evolution of the number of published documents. All types of documents are considered, including citable and non-citable documents.
Year | Documents |
---|---|
1999 | 71 |
2000 | 67 |
2001 | 91 |
2002 | 84 |
2003 | 102 |
2004 | 103 |
2005 | 124 |
2006 | 124 |
2007 | 92 |
2008 | 119 |
2009 | 89 |
2010 | 104 |
2011 | 135 |
2012 | 145 |
2013 | 143 |
2014 | 123 |
2015 | 97 |
2016 | 86 |
2017 | 171 |
2018 | 207 |
2019 | 214 |
2020 | 170 |
2021 | 175 |
2022 | 155 |
2023 | 125 |
This indicator counts the number of citations received by documents from a journal and divides them by the total number of documents published in that journal. The chart shows the evolution of the average number of times documents published in a journal in the past two, three, and four years have been cited in the current year. The two-year line is equivalent to the journal impact factor™ (Thomson Reuters) metric.
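In code, a windowed cites-per-document value is simply the citations received this year by the documents published in the previous N years, divided by the size of those cohorts. A minimal sketch with invented numbers (the two-year window mirrors the impact-factor-style calculation mentioned above):

```python
# Cites/Doc over an N-year window: citations received in the current year by
# documents published in the previous N years, divided by how many such
# documents there are. The numbers below are invented for illustration.

docs_per_year  = {2020: 170, 2021: 175, 2022: 155}            # documents published per year
cites_received = {2023: {2020: 260, 2021: 310, 2022: 240}}    # cites in 2023 to each cohort

def cites_per_doc(year, window):
    prev_years = range(year - window, year)
    total_cites = sum(cites_received[year].get(y, 0) for y in prev_years)
    total_docs = sum(docs_per_year.get(y, 0) for y in prev_years)
    return total_cites / total_docs

print(round(cites_per_doc(2023, 2), 3))   # 2-year window (impact-factor style)
print(round(cites_per_doc(2023, 3), 3))   # 3-year window
```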
Cites per document | Year | Value |
---|---|---|
Cites / Doc. (4 years) | 1999 | 0.890 |
Cites / Doc. (4 years) | 2000 | 1.014 |
Cites / Doc. (4 years) | 2001 | 0.997 |
Cites / Doc. (4 years) | 2002 | 0.854 |
Cites / Doc. (4 years) | 2003 | 1.058 |
Cites / Doc. (4 years) | 2004 | 1.535 |
Cites / Doc. (4 years) | 2005 | 1.905 |
Cites / Doc. (4 years) | 2006 | 1.685 |
Cites / Doc. (4 years) | 2007 | 1.704 |
Cites / Doc. (4 years) | 2008 | 1.603 |
Cites / Doc. (4 years) | 2009 | 1.845 |
Cites / Doc. (4 years) | 2010 | 2.179 |
Cites / Doc. (4 years) | 2011 | 2.495 |
Cites / Doc. (4 years) | 2012 | 2.244 |
Cites / Doc. (4 years) | 2013 | 2.228 |
Cites / Doc. (4 years) | 2014 | 2.624 |
Cites / Doc. (4 years) | 2015 | 2.484 |
Cites / Doc. (4 years) | 2016 | 2.868 |
Cites / Doc. (4 years) | 2017 | 3.131 |
Cites / Doc. (4 years) | 2018 | 2.994 |
Cites / Doc. (4 years) | 2019 | 3.198 |
Cites / Doc. (4 years) | 2020 | 3.729 |
Cites / Doc. (4 years) | 2021 | 4.349 |
Cites / Doc. (4 years) | 2022 | 4.872 |
Cites / Doc. (4 years) | 2023 | 5.183 |
Cites / Doc. (3 years) | 1999 | 0.890 |
Cites / Doc. (3 years) | 2000 | 1.116 |
Cites / Doc. (3 years) | 2001 | 0.825 |
Cites / Doc. (3 years) | 2002 | 0.777 |
Cites / Doc. (3 years) | 2003 | 1.066 |
Cites / Doc. (3 years) | 2004 | 1.552 |
Cites / Doc. (3 years) | 2005 | 1.869 |
Cites / Doc. (3 years) | 2006 | 1.775 |
Cites / Doc. (3 years) | 2007 | 1.390 |
Cites / Doc. (3 years) | 2008 | 1.591 |
Cites / Doc. (3 years) | 2009 | 1.910 |
Cites / Doc. (3 years) | 2010 | 2.397 |
Cites / Doc. (3 years) | 2011 | 2.429 |
Cites / Doc. (3 years) | 2012 | 2.079 |
Cites / Doc. (3 years) | 2013 | 2.313 |
Cites / Doc. (3 years) | 2014 | 2.461 |
Cites / Doc. (3 years) | 2015 | 2.455 |
Cites / Doc. (3 years) | 2016 | 2.981 |
Cites / Doc. (3 years) | 2017 | 3.271 |
Cites / Doc. (3 years) | 2018 | 2.669 |
Cites / Doc. (3 years) | 2019 | 3.134 |
Cites / Doc. (3 years) | 2020 | 3.939 |
Cites / Doc. (3 years) | 2021 | 4.997 |
Cites / Doc. (3 years) | 2022 | 5.211 |
Cites / Doc. (3 years) | 2023 | 5.174 |
Cites / Doc. (2 years) | 1999 | 0.887 |
Cites / Doc. (2 years) | 2000 | 0.907 |
Cites / Doc. (2 years) | 2001 | 0.746 |
Cites / Doc. (2 years) | 2002 | 0.753 |
Cites / Doc. (2 years) | 2003 | 1.074 |
Cites / Doc. (2 years) | 2004 | 1.575 |
Cites / Doc. (2 years) | 2005 | 1.971 |
Cites / Doc. (2 years) | 2006 | 1.256 |
Cites / Doc. (2 years) | 2007 | 1.222 |
Cites / Doc. (2 years) | 2008 | 1.713 |
Cites / Doc. (2 years) | 2009 | 2.085 |
Cites / Doc. (2 years) | 2010 | 2.264 |
Cites / Doc. (2 years) | 2011 | 1.948 |
Cites / Doc. (2 years) | 2012 | 2.113 |
Cites / Doc. (2 years) | 2013 | 2.079 |
Cites / Doc. (2 years) | 2014 | 2.351 |
Cites / Doc. (2 years) | 2015 | 2.466 |
Cites / Doc. (2 years) | 2016 | 2.882 |
Cites / Doc. (2 years) | 2017 | 2.863 |
Cites / Doc. (2 years) | 2018 | 2.261 |
Cites / Doc. (2 years) | 2019 | 3.204 |
Cites / Doc. (2 years) | 2020 | 4.468 |
Cites / Doc. (2 years) | 2021 | 5.401 |
Cites / Doc. (2 years) | 2022 | 4.965 |
Cites / Doc. (2 years) | 2023 | 4.736 |
Evolution of the total number of citations and journal self-citations received by a journal's published documents during the three previous years. Journal self-citation is defined as the number of citations from a journal's citing articles to articles published by the same journal.
Cites | Year | Value |
---|---|---|
Self Cites | 1999 | 17 |
Self Cites | 2000 | 13 |
Self Cites | 2001 | 9 |
Self Cites | 2002 | 4 |
Self Cites | 2003 | 11 |
Self Cites | 2004 | 13 |
Self Cites | 2005 | 20 |
Self Cites | 2006 | 18 |
Self Cites | 2007 | 10 |
Self Cites | 2008 | 30 |
Self Cites | 2009 | 14 |
Self Cites | 2010 | 18 |
Self Cites | 2011 | 19 |
Self Cites | 2012 | 23 |
Self Cites | 2013 | 41 |
Self Cites | 2014 | 35 |
Self Cites | 2015 | 23 |
Self Cites | 2016 | 24 |
Self Cites | 2017 | 35 |
Self Cites | 2018 | 42 |
Self Cites | 2019 | 74 |
Self Cites | 2020 | 64 |
Self Cites | 2021 | 66 |
Self Cites | 2022 | 37 |
Self Cites | 2023 | 71 |
Total Cites | 1999 | 325 |
Total Cites | 2000 | 317 |
Total Cites | 2001 | 179 |
Total Cites | 2002 | 178 |
Total Cites | 2003 | 258 |
Total Cites | 2004 | 430 |
Total Cites | 2005 | 540 |
Total Cites | 2006 | 584 |
Total Cites | 2007 | 488 |
Total Cites | 2008 | 541 |
Total Cites | 2009 | 640 |
Total Cites | 2010 | 719 |
Total Cites | 2011 | 758 |
Total Cites | 2012 | 682 |
Total Cites | 2013 | 888 |
Total Cites | 2014 | 1041 |
Total Cites | 2015 | 1009 |
Total Cites | 2016 | 1082 |
Total Cites | 2017 | 1001 |
Total Cites | 2018 | 945 |
Total Cites | 2019 | 1454 |
Total Cites | 2020 | 2332 |
Total Cites | 2021 | 2953 |
Total Cites | 2022 | 2913 |
Total Cites | 2023 | 2587 |
Evolution of the number of total citations per document and external citations per document (i.e., with journal self-citations removed) received by a journal's published documents during the three previous years. External citations are calculated by subtracting the number of self-citations from the total number of citations received by the journal's documents.
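The arithmetic is straightforward. Plugging in the 2023 values from the tables in this section (2,587 total cites, 71 self-cites, and 490 citable plus 10 non-citable documents in the three-year window) reproduces the tabulated 5.174 and 5.032:

```python
# External cites per document = (total cites - self cites) / documents in window.
# Values taken from the 2023 rows of the tables in this section.

total_cites, self_cites, docs = 2587, 71, 490 + 10   # 500 documents in the 3-year window

cites_per_doc = total_cites / docs                           # 5.174, as tabulated
external_cites_per_doc = (total_cites - self_cites) / docs   # 5.032, as tabulated
print(round(cites_per_doc, 3), round(external_cites_per_doc, 3))
```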
Cites | Year | Value |
---|---|---|
External Cites per document | 1999 | 0.844 |
External Cites per document | 2000 | 1.070 |
External Cites per document | 2001 | 0.783 |
External Cites per document | 2002 | 0.760 |
External Cites per document | 2003 | 1.021 |
External Cites per document | 2004 | 1.505 |
External Cites per document | 2005 | 1.799 |
External Cites per document | 2006 | 1.720 |
External Cites per document | 2007 | 1.362 |
External Cites per document | 2008 | 1.503 |
External Cites per document | 2009 | 1.869 |
External Cites per document | 2010 | 2.337 |
External Cites per document | 2011 | 2.369 |
External Cites per document | 2012 | 2.009 |
External Cites per document | 2013 | 2.206 |
External Cites per document | 2014 | 2.378 |
External Cites per document | 2015 | 2.399 |
External Cites per document | 2016 | 2.915 |
External Cites per document | 2017 | 3.157 |
External Cites per document | 2018 | 2.551 |
External Cites per document | 2019 | 2.974 |
External Cites per document | 2020 | 3.831 |
External Cites per document | 2021 | 4.885 |
External Cites per document | 2022 | 5.145 |
External Cites per document | 2023 | 5.032 |
Cites per document | 1999 | 0.890 |
Cites per document | 2000 | 1.116 |
Cites per document | 2001 | 0.825 |
Cites per document | 2002 | 0.777 |
Cites per document | 2003 | 1.066 |
Cites per document | 2004 | 1.552 |
Cites per document | 2005 | 1.869 |
Cites per document | 2006 | 1.775 |
Cites per document | 2007 | 1.390 |
Cites per document | 2008 | 1.591 |
Cites per document | 2009 | 1.910 |
Cites per document | 2010 | 2.397 |
Cites per document | 2011 | 2.429 |
Cites per document | 2012 | 2.079 |
Cites per document | 2013 | 2.313 |
Cites per document | 2014 | 2.461 |
Cites per document | 2015 | 2.455 |
Cites per document | 2016 | 2.981 |
Cites per document | 2017 | 3.271 |
Cites per document | 2018 | 2.669 |
Cites per document | 2019 | 3.134 |
Cites per document | 2020 | 3.939 |
Cites per document | 2021 | 4.997 |
Cites per document | 2022 | 5.211 |
Cites per document | 2023 | 5.174 |
International collaboration accounts for articles that have been produced by researchers from several countries. The chart shows the ratio of a journal's documents signed by researchers from more than one country, that is, including more than one country in the author addresses.
Year | International Collaboration |
---|---|
1999 | 23.94 |
2000 | 26.87 |
2001 | 18.68 |
2002 | 27.38 |
2003 | 26.47 |
2004 | 25.24 |
2005 | 26.61 |
2006 | 28.23 |
2007 | 27.17 |
2008 | 25.21 |
2009 | 25.84 |
2010 | 32.69 |
2011 | 26.67 |
2012 | 29.66 |
2013 | 30.77 |
2014 | 30.08 |
2015 | 37.11 |
2016 | 29.07 |
2017 | 32.75 |
2018 | 43.96 |
2019 | 37.85 |
2020 | 48.24 |
2021 | 35.43 |
2022 | 38.06 |
2023 | 38.40 |
Not every article in a journal is considered primary research and therefore "citable". This chart shows the ratio of a journal's articles containing substantial research (research articles, conference papers, and reviews) in three-year windows vs. those documents other than research articles, reviews, and conference papers.
Documents | Year | Value |
---|---|---|
Non-citable documents | 1999 | 2 |
Non-citable documents | 2000 | 1 |
Non-citable documents | 2001 | 0 |
Non-citable documents | 2002 | 5 |
Non-citable documents | 2003 | 8 |
Non-citable documents | 2004 | 14 |
Non-citable documents | 2005 | 9 |
Non-citable documents | 2006 | 8 |
Non-citable documents | 2007 | 8 |
Non-citable documents | 2008 | 10 |
Non-citable documents | 2009 | 11 |
Non-citable documents | 2010 | 7 |
Non-citable documents | 2011 | 6 |
Non-citable documents | 2012 | 7 |
Non-citable documents | 2013 | 7 |
Non-citable documents | 2014 | 13 |
Non-citable documents | 2015 | 12 |
Non-citable documents | 2016 | 13 |
Non-citable documents | 2017 | 8 |
Non-citable documents | 2018 | 11 |
Non-citable documents | 2019 | 15 |
Non-citable documents | 2020 | 24 |
Non-citable documents | 2021 | 20 |
Non-citable documents | 2022 | 17 |
Non-citable documents | 2023 | 10 |
Citable documents | 1999 | 363 |
Citable documents | 2000 | 283 |
Citable documents | 2001 | 217 |
Citable documents | 2002 | 224 |
Citable documents | 2003 | 234 |
Citable documents | 2004 | 263 |
Citable documents | 2005 | 280 |
Citable documents | 2006 | 321 |
Citable documents | 2007 | 343 |
Citable documents | 2008 | 330 |
Citable documents | 2009 | 324 |
Citable documents | 2010 | 293 |
Citable documents | 2011 | 306 |
Citable documents | 2012 | 321 |
Citable documents | 2013 | 377 |
Citable documents | 2014 | 410 |
Citable documents | 2015 | 399 |
Citable documents | 2016 | 350 |
Citable documents | 2017 | 298 |
Citable documents | 2018 | 343 |
Citable documents | 2019 | 449 |
Citable documents | 2020 | 568 |
Citable documents | 2021 | 571 |
Citable documents | 2022 | 542 |
Citable documents | 2023 | 490 |
Ratio of a journal's items, grouped in three-year windows, that have been cited at least once vs. those not cited during the following year.
Documents | Year | Value |
---|---|---|
Uncited documents | 1999 | 201 |
Uncited documents | 2000 | 156 |
Uncited documents | 2001 | 134 |
Uncited documents | 2002 | 142 |
Uncited documents | 2003 | 134 |
Uncited documents | 2004 | 143 |
Uncited documents | 2005 | 132 |
Uncited documents | 2006 | 137 |
Uncited documents | 2007 | 158 |
Uncited documents | 2008 | 148 |
Uncited documents | 2009 | 119 |
Uncited documents | 2010 | 95 |
Uncited documents | 2011 | 94 |
Uncited documents | 2012 | 129 |
Uncited documents | 2013 | 129 |
Uncited documents | 2014 | 132 |
Uncited documents | 2015 | 134 |
Uncited documents | 2016 | 103 |
Uncited documents | 2017 | 91 |
Uncited documents | 2018 | 105 |
Uncited documents | 2019 | 133 |
Uncited documents | 2020 | 151 |
Uncited documents | 2021 | 141 |
Uncited documents | 2022 | 137 |
Uncited documents | 2023 | 103 |
Cited documents | 1999 | 164 |
Cited documents | 2000 | 128 |
Cited documents | 2001 | 83 |
Cited documents | 2002 | 87 |
Cited documents | 2003 | 108 |
Cited documents | 2004 | 134 |
Cited documents | 2005 | 157 |
Cited documents | 2006 | 192 |
Cited documents | 2007 | 193 |
Cited documents | 2008 | 192 |
Cited documents | 2009 | 216 |
Cited documents | 2010 | 205 |
Cited documents | 2011 | 218 |
Cited documents | 2012 | 199 |
Cited documents | 2013 | 255 |
Cited documents | 2014 | 291 |
Cited documents | 2015 | 277 |
Cited documents | 2016 | 260 |
Cited documents | 2017 | 215 |
Cited documents | 2018 | 249 |
Cited documents | 2019 | 331 |
Cited documents | 2020 | 441 |
Cited documents | 2021 | 450 |
Cited documents | 2022 | 422 |
Cited documents | 2023 | 397 |
Evolution of the percentage of female authors.
Year | Female Percent |
---|---|
1999 | 14.94 |
2000 | 13.51 |
2001 | 15.92 |
2002 | 13.83 |
2003 | 17.11 |
2004 | 15.79 |
2005 | 19.34 |
2006 | 15.00 |
2007 | 20.24 |
2008 | 15.43 |
2009 | 18.53 |
2010 | 19.73 |
2011 | 22.19 |
2012 | 19.62 |
2013 | 16.90 |
2014 | 18.06 |
2015 | 16.99 |
2016 | 21.72 |
2017 | 19.29 |
2018 | 21.38 |
2019 | 19.50 |
2020 | 20.00 |
2021 | 18.18 |
2022 | 22.57 |
2023 | 23.94 |
Evolution of the number of documents cited by public policy documents according to the Overton database.
Documents | Year | Value |
---|---|---|
Overton | 1999 | 1 |
Overton | 2000 | 1 |
Overton | 2001 | 0 |
Overton | 2002 | 0 |
Overton | 2003 | 0 |
Overton | 2004 | 0 |
Overton | 2005 | 3 |
Overton | 2006 | 1 |
Overton | 2007 | 0 |
Overton | 2008 | 2 |
Overton | 2009 | 1 |
Overton | 2010 | 1 |
Overton | 2011 | 0 |
Overton | 2012 | 0 |
Overton | 2013 | 2 |
Overton | 2014 | 3 |
Overton | 2015 | 1 |
Overton | 2016 | 0 |
Overton | 2017 | 2 |
Overton | 2018 | 2 |
Overton | 2019 | 4 |
Overton | 2020 | 0 |
Overton | 2021 | 0 |
Overton | 2022 | 0 |
Overton | 2023 | 0 |
Evolution of the number of documents related to the Sustainable Development Goals defined by the United Nations. Available from 2018 onwards.
Documents | Year | Value |
---|---|---|
SDG | 2018 | 30 |
SDG | 2019 | 36 |
SDG | 2020 | 26 |
SDG | 2021 | 16 |
SDG | 2022 | 20 |
SDG | 2023 | 12 |
Journal of Parallel and Distributed Computing
Volume 12 • Issue 12
- ISSN: 0743-7315
Editor-In-Chief: V.K. Prasanna
- 5 Year impact factor: 3.4
- Impact factor: 3.4
- Journal metrics
Researchers interested in submitting a special issue proposal should adhere to the submission guidelines.
This international journal is directed to researchers, engineers, educators, managers, programmers, and users of computers who have particular interests in parallel processing and/or distributed computing.
The Journal of Parallel and Distributed Computing publishes original research papers and timely review articles on the theory, design, evaluation, and use of parallel and/or distributed computing systems. The journal also features special issues on these topics; again covering the full range from the design to the use of our targeted systems.
Research Areas Include:
- Theory of parallel and distributed computing
- Parallel algorithms and their implementation
- Innovative computer architectures
- Parallel programming
- Applications, algorithms and platforms for accelerators
- Cloud, edge and fog computing
- Data-intensive platforms and applications
- Parallel processing of graph and irregular applications
- Parallel and distributed programming models
- Software tools and environments for distributed systems
- Algorithms and systems for Internet of Things
- Performance analysis of parallel applications
- Architecture for emerging technologies, e.g., novel memory technologies, quantum computing
- Application-specific architectures, e.g., accelerator-based and reconfigurable architectures
- Interconnection network, router and network interface architecture
Benefits to authors: We also provide many author benefits, such as free PDFs, a liberal copyright policy, special discounts on Elsevier publications, and much more. Please click here for more information on our author services.
Please see our Guide for Authors for information on article submission. If you require any further information or help, please visit our Support Center.
Fast, Accurate and Distributed Simulation of novel HPC systems incorporating ARM and RISC-V CPUs
Index Terms
Computing methodologies
Modeling and simulation
Simulation support systems
Simulation environments
Simulation tools
Simulation types and techniques
Massively parallel and high-performance simulations
Electronic design automation
Modeling and parameter extraction
Emerging technologies
Analysis and design of emerging devices and systems
Emerging simulation
Network performance evaluation
Network simulations
Software and its engineering
Software organization and properties
Software system structures
Software architectures
Simulator / interpreter
Recommendations
Performance Evaluation of Various RISC Processor Systems: A Case Study on ARM, MIPS and RISC-V
RISC-V is a new instruction set architecture (ISA) that has emerged in recent years. Compared with previous computer instruction architectures, RISC-V has outstanding features such as simple instructions, modular instruction set and supporting ...
Low overhead dynamic binary translation on ARM
The ARMv8 architecture introduced AArch64, a 64-bit execution mode with a new instruction set, while retaining binary compatibility with previous versions of the ARM architecture through AArch32, a 32-bit execution mode. Most hardware implementations ...
A SMT-ARM simulator and performance evaluation
Exponential growth in the number of on-chip transistors, with smaller feature sizes, makes each generation of embedded microprocessors capable of supplying more processing power. In this paper a microarchitecture approach is proposed to make a simultaneous ...
Published In
- Patrizio Dazzi
- Gabriele Mencagli
- Program Chair: David Lowenthal
- Program Co-chair: Rosa M Badia
- SIGARCH: ACM Special Interest Group on Computer Architecture
In-Cooperation
- SIGHPC: ACM Special Interest Group on High Performance Computing
Association for Computing Machinery
New York, NY, United States
Author Tags
- hpc simulator
- distributed systems simulator
- Short-paper
Funding Sources
- RED-SEA EuroHPC
- Vitamin-V Horizon Europe
Article Metrics
- 0 Total Citations
- 4 Total Downloads
- Downloads (Last 12 months) 4
- Downloads (Last 6 weeks) 5
CS 87: Parallel and Distributed Computing — Fall 2021
Contents
- Announcements
- Course Goals
- Course Structure
- Class Schedule
- Required Readings
- About the CS Labs
- Working with Partners
- Absence / Assignment Extension Policy
- Academic Accommodations
- Academic Integrity Policy
- Class Resources
- Distributed Systems, Parallel Systems, and Cluster Links
- Reading, Writing and Presentation Advice
- Unix and Programming Resources
Project Demo signup sheet is available. See the Part 3 assignment page.
Course Project Part3 assignments: project presentation, project report, project demo, and project code
Class announcements will be posted here and/or sent as email. You are expected to check your email frequently for class announcements. Questions and answers about assignments should be posted on the EdStem page. All assignments are posted on the class Class Schedule . Here is a shortcut link to the current week’s assignments.
Professor: Tia Newhall ,
Office hours: Wednesdays 2-4, and by appointment, Sci 249
Class Meetings:
Paper Discussion: Mondays, Section A: 1:15-2:45, Section B: 3:00-4:30, Sci 104
Lecture: Tuesdays, 1:15-2:30, Sci 104
Lab: Thursdays, 1:15-2:30, Sci 256
EdStem: Q&A Forum
GitHub: CS87 Swarthmore GitHub Org
Course Description
This course covers a broad range of topics related to parallel and distributed computing, including parallel and distributed architectures and systems, parallel and distributed programming paradigms, parallel algorithms, and scientific and other applications of parallel and distributed computing. In lecture/discussion sections, students examine both classic results as well as recent research in the field. The lab portion of the course includes programming projects using different programming paradigms, and students will have the opportunity to examine one course topic in depth through an open-ended project of their own choosing. Course topics may include: multi-core, SMP, MPP, client-server, clusters, clouds, grids, peer-to-peer systems, GPU computing, scheduling, scalability, resource discovery and allocation, fault tolerance, security, parallel I/O, sockets, threads, message passing, MPI, RPC, distributed shared memory, data parallel languages, MapReduce, parallel debugging, and applications of parallel and distributed computing.
Class will be run as a combination of lecture and seminar-style discussion. During the discussion based classes, students will read research papers prior to the class meeting that we will discuss in class. The first half of the course will focus on different parallel and distributed programming paradigms. During the second half, students will propose and carry out a semester-long research project related to parallel and/or distributed computing.
Prereqs: CS31 and CS35 required; prior upper-level CS course experience required. Designated: NSE, W (Writing Course), CS Group 2 Course
Course Goals:
Analyze and critically discuss research papers both in writing and in class
Formulate and evaluate a hypothesis by proposing, implementing and testing a project
Relate one’s project to prior research via a review of related literature
Write a coherent, complete paper describing and evaluating a project
Orally present a clear and accessible summary of a research work
Understand the fundamental questions in parallel and distributed computing and analyze different solutions to these questions
Understand different parallel and distributed programming paradigms and algorithms, and gain practice in implementing and testing solutions using these.
CS87 is a seminar-style course. Its structure is designed as a bridge from lecture-based learning to the inquiry-based cooperative learning that is the norm in post-Swarthmore experiences, be it graduate studies or work in industry. Although there will be some lecture, all work in this class is cooperative . This includes working in small groups to solve problems, to prepare for class discussion, to produce solutions to written and lab assignments, to deliver presentations, and to carry out all parts of the course project. The result of this type of course structure is that you are directly responsible for a large part of the success or failure of this class.
This is a very tentative schedule. It will be updated as we go.
- Introduction to Parallel and Distributed Computing
Weekly Reading :
- Chapt 5.9 (CPUs today)
- Chapt 15 intro (parallel systems)
- Parallel Computing Sections A-C (overview-arch)
- Intro to Distributed Computing, sections 1-2
Lab 0 : resources review for CS87
Paper 1 : parallel system
Lab 1 : pthreads scalability
- Parallel and Distributed Systems
- Chapt 14.4 Performance Measures
- Read Reading Group and Reaction Notes Guide
- Reading Groups and Papers
- Paper 2 : End-to-End, NW protocols
- Thurs : experiment tools
Drop/add ends
- Distributed Systems
- Intro to Distributed Computing, read sections 1-2, skim 3
- Chapt 15.2-15.2.2 (distributed memory)
Paper 3 : Parallel Languages
Lab 2 : client/server
- Parallel Languages
- Section D: programming models
- skim Chapt 15.2.3 (MPI)
- skim Chapt 14.7 (openMP)
Paper 4 : Heterogeneous Computing
Thurs : part 1 demo
- Parallel Languages and Algorithms
- GPGPU Computing
- skim Section E.i-E.ix: parallel design
- Chapt 15.1 (gpus and cuda)
Paper 5 : Map Reduce
in lab : cuda examples
Lab 3 : cuda
- Parallel Languages/Algorithms
- skim Chapt 15.3 (cloud, MapReduce)
- Chapt 15.2 (MPI examples)
Paper 6 : Quick Read
XSEDE : get an XSEDE account (do Step 1)
Lab 4 : mpi
Course Project : general info
- Parallel Algorithms
- Peer-to-Peer Systems
- review Chapt 11.1 (memory hierarchy)
Paper 7 : Distributed Shared Memory
Sign-up : Lab 3 Demo
XSEDE : Do Steps 1-3 before Thurs
Course Project : proposal assignment
Lab 4.B : experiments
- Distributed Shared Memory
- review Chapt 14.7 (openMP)
Paper 8 : GFS
- Distributed File Systems
- Paper 9 : Cloud Computing
- Cloud Computing
- Green Computing
- Paper 10 : Failure
- Course Project : about project work week
- Project Work Week
- Course Project : Project Work Week
- Course Project : Midway Report
Thanksgiving break (Nov 25)
- Paper 11 : Edge Computing
Course Project : Midway presentation
- Course Project : Project Part 3
- Edge Computing
Paper 12 : Security
Final Presentations, 2-5pm, 7-10pm, Sci Cntr 104
About Course Work/Policies
All work in CS87 will be done with a partner or in a small group. I will assign you to a reading group and I will assign partners for the lab assignments in the first half of the semester. You may pick your own project group for the course project. Project groups typically are two or three students in size. No solo course projects are allowed .
Most lab solutions you will submit electronically with git: you must do a git add, commit, and push before the due date to submit your solution on time. You may push your assignment multiple times, and a history of previous submissions will be saved. You are encouraged to push your work regularly.
Some assignments have additional submission requirements.
There is no required textbook for this course. Instead, there will be required readings from on-line resources posted to the class schedule. In addition, we will also read and discuss one or two research papers most weeks. Research paper assignments will be posted to the class schedule and also be listed off the paper reading schedule (available week 2). The paper assignments will be updated over the course of the semester, so check this page weekly.
You will be assigned to a reading group for the semester. Your reading group will meet weekly to:
discuss the weekly assigned paper(s) before the in-class discussion.
write Reaction Notes to the assigned papers (submit before 9am on Monday discussion day, and bring them with you to class discussion).
There may also be some on-line background readings off the Class Schedule related to the weekly assigned paper(s).
I encourage you to work in one of the CS labs when possible (vs. remotely logging in), particularly when working with a partner. The CS labs (rooms 240, 256, 238 and the Clothier basement) are open 24 hours a day, 7 days a week for you to use for CS course work. With the exception of times when a class, lab, or ninja session is scheduled in one of these rooms, you may work anytime in a CS lab on CS course work. The overflow lab (238) is always available.
CS lab machines are for CS course work only. There are other computer lab/locations on campus that are for general-purpose computer use. Please review the CS Lab Rules about appropriate use of CS labs.
Info about CS lab machines
A list of CS lab machines specs is available here: CS lab machine specs
To see a listing of which machines are in which lab, look at the hosts files:
To find the lab in which a machine is located, run whereis:
Accessing the CS labs after hours
You can use your ID to gain access to the computer labs at nights and on the weekends. Just wave your ID over the onecard reader next to the lab doors. When the green light goes on, just push on the door handle to get in (the door knob will not turn). If you have issues with the door locks, send an email to [email protected] . If the building is locked, you can use your ID to enter the door between Martin and Cornell library. For this class, your ID will give you access to the labs in rooms SCI 238, 240, 256, and the Clothier basement.
For partnered lab assignments, you should follow these guidelines:
The expectation is that you and your partner are working together side by side in the lab for most, if not all, of the time you work on partnered lab assignments.
You and your partner should work on all aspects of the project together: initial top-down design, incremental testing and debugging, and final testing and code review.
If you are pair programming, where one of you types and one of you watches and assists, then you should swap roles periodically, taking turns doing each part.
At the end of a joint editing session, or as you change roles within a session, make sure that the "driver" does a git add , git commit and git push to push your changes to your shared repo, and your partner does a git pull to grab them so that both you and your partner have the latest version of your joint work in your local copies of your repo.
There may be short periods of time where you each go off and implement some small part independently. However, you should frequently come back together, talk through your changes, push and pull each other’s code from the git repository, and test your merged code together.
You should not delete or significantly alter code written by your partner when they are not present. If there is a problem in the code, then meet together to resolve it.
You and your partner are both equally responsible for initiating scheduling times when you can meet to work together, and for making time available in your schedule for working together.
If there are any issues with your partnership that you are unable to resolve, please come see me.
Taking time to design a plan for your solution together and to doing incremental implementation and testing together may seem like it is a waste of time, but in the long run it will save you a lot of time by making it less likely that you have design or logic errors in your solution, and by having a partner to help track down bugs and to help come up with solutions to problems.
Partnerships where partners work mostly independently rarely work out well and rarely result in complete, correct and robust solutions. Partnerships where partners work side-by-side for all or most of the time tend to work out very well.
All students start the course with 2 "late lab" days to be used on lab assignments only, at your discretion, with no questions asked. You may not use late days on any other course work; all other course work must be submitted by its due date.
To use a late day on a lab assignment, you must email your professor after you have completed the lab and pushed to your repository. You do not need to inform anyone ahead of time. When you use late time, you should still expect to work on the newly-released lab during the following lab section meeting. The professor will always prioritize answering questions related to the current lab assignment.
Your late days will be counted at the granularity of full days and will be tracked on a per-student (NOT per-partnership) basis . That is, if you turn in an assignment five minutes after the deadline, it counts as using one day. For partnered labs, using a late day counts towards the late days for each partner. In the rare cases in which only one partner has unused late days, that partner’s late days may be used, barring a consistent pattern of abuse.
If you feel that you need an extension on an assignment or that you are unable to attend class for two or more meetings due to a medical condition (e.g., extended illness, concussion, hospitalization) or other emergency, you must contact the dean’s office and your instructors. Faculty will coordinate with the deans to determine and provide the appropriate accommodations. Note that for illnesses, the College’s medical excuse policy , states that you must be seen and diagnosed by the Worth Health Center if you would like them to contact your class dean with corroborating medical information.
Late days cannot be used for any written assignments or oral presentations in class. Reaction notes and all project components must be submitted on time.
If you believe you need accommodations for a disability or a chronic medical condition, please contact Student Disability Services via email at [email protected] to arrange an appointment to discuss your needs. As appropriate, the office will issue students with documented disabilities or medical conditions a formal Accommodations Letter. Since accommodations require early planning and are not retroactive, please contact Student Disability Services as soon as possible. For details about the accommodations process, visit the Student Disability Service Website .
You are also welcome to contact me privately to discuss your academic needs. However, all disability-related accommodations must be arranged, in advance, through Student Disability Services. To receive an accommodation for a course activity you must have an official Accommodations Letter and you need to meet with me to work out the details of your accommodation at least two weeks prior to any activity requiring accommodations .
The CS Department Academic Integrity Policy:
Academic honesty is required in all your work. Under no circumstances may you hand in work done with or by someone else under your own name. Discussing ideas and approaches to problems with others on a general level is encouraged , but you should never share your solutions with anyone else nor allow others to share solutions with you. You may not examine solutions belonging to someone else, nor may you let anyone else look at or make a copy of your solutions. This includes, but is not limited to, obtaining solutions from students who previously took the course or solutions that can be found online. You may not share information about your solution in such a manner that a student could reconstruct your solution in a meaningful way (such as by dictation, providing a detailed outline, or discussing specific aspects of the solution). You may not share your solutions even after the due date of the assignment.
In your solutions, you are permitted to include material which was distributed in class, material which is found in the course textbook, and material developed by or with an assigned partner. In these cases, you should always include detailed comments indicating on which parts of the assignment you received help and what your sources were.
When working on tests, exams, or similar assessments, you are not permitted to communicate with anyone about the exam during the entire examination period (even if you have already submitted your work). You are not permitted to use any resources to complete the exam other than those explicitly permitted by course policy. (For instance, you may not look at the course website during the exam unless explicitly permitted by the instructor when the exam is distributed.)
Failure to abide by these rules constitutes academic dishonesty and will lead to a hearing of the College Judiciary Committee. According to the Faculty Handbook:
Because plagiarism is considered to be so serious a transgression, it is the opinion of the faculty that for the first offense, failure in the course and, as appropriate, suspension for a semester or deprivation of the degree in that year is suitable; for a second offense, the penalty should normally be expulsion.
This policy applies to all course work, including but not limited to code, written solutions (e.g. proofs, analyses, reports, etc.), exams, and so on. This is not meant to be an enumeration of all possible violations; students are responsible for seeking clarification if there is any doubt about the level of permissible communication.
The general ethos of this policy is that actions which shortcut the learning process are forbidden while actions which promote learning are encouraged. Studying lecture materials together, for example, provides an additional avenue for learning and is encouraged. Using a classmate’s solution, however, is prohibited because it avoids the learning process entirely. If you have any questions about what is or is not permissible, please contact your instructor.
In CS87 all of your work is collaborative, and you may share anything with your collaborators. If you use tools, solutions, ideas, or resources from other sources in your project, you must credit those sources. Reaction notes should be your interpretation of the work in the papers. Cutting and pasting content from the papers as your reactions violates the academic integrity policy.
Please contact me if you have any questions about what is permissible in this course.
Grades will be weighted as follows:
approx 35%: Class Participation, Paper discussion, Reaction Notes Each week we will discuss papers in a seminar style format. This course has some lecture content with in-class activities too, but much of the course content is generated by the class as a whole. You need to be present to contribute. Doing a close reading of the assigned papers and writing responses and questions prior to each class is essential to preparing for a lively and informed discussion. In this style of class, you are responsible for contributing to its content; what you get out of this class is dependent on what you put into paper discussions, presentations and other class participation.
approx 25%: Labs I will assign labs in the first part of the semester. You will do lab work with different assigned partners for each lab.
40%: Course Project You will design a project related to parallel and distributed computing that you will carry out over the second half of the semester. Projects must be done in pairs or small groups; no solo projects are allowed.
Reaction Notes Information
Assigned papers, reaction notes questions, reading groups
CS87 EdStem Q&A Forum
CS87 GitHub Org
Help using git for lab assignments
more verbose git setup for CS labs (written for CS31 students)
Introduction to Parallel Computing from Livermore Computing
Links to Parallel and Network Programming Resources: MPI, OpenMP, POSIX threads, socket programming, CUDA…
Some Cluster and Distributed Systems Papers
IEEE Distributed Systems Online
The Top 500 List
The Green 500 List
ParaScope IEEE Listing of Parallel Computing Sites
Research and Writing Guide: the guide for doing CS course projects and written project reports
Tips for preparing a research presentation
How to read a CS research paper
Other reading, writing, presentation links
Swarthmore Writing Center and Writing Associates
My CS and Unix Help Pages and Links (make, tar, git, debugging tools, editors, programming guides, linux and parallel computing links, …)
CS Department’s Help Pages
Dive into Systems Textbook (C programming, debugging, computer systems, intro to parallel programming, …)
CS Project Etiquette: tools and guidelines for using shared CS resources to run long-running, intensive applications.
Tools for running large Experiments (screen, nice, profiling tools, script, …)
Fuzzy Modelling Algorithms and Parallel Distributed Compensation for Coupled Electromechanical Systems
1. Introduction
1.1. Related Work
1.2. Contributions
2. Simulation Platform
3. T–S Fuzzy Modelling
3.1. Fuzzy Identification of a Second-Order System
Algorithm: Fuzzy system identification and m parameter optimization.
- Load open_loop_experimental_data.mat; set the number of rules/clusters c and the total number of data points.
- Run fuzzy c-means clustering on the data.
- For i = 1 to c: obtain the consequent parameters via least squares (Equation ( )).
- Run the PSO algorithm on the open-loop system: for m from 1.1 to 3, compute the fuzzy parameter m using Equation ( ) and evaluate the objective function for each m.
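For readers unfamiliar with the clustering step, the sketch below is a generic implementation of the standard fuzzy c-means updates that this identification procedure relies on; it is not the authors' code, and the synthetic data merely stands in for the open-loop experimental data set. The fuzziness parameter m is the one swept in the algorithm above.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=100, seed=0):
    """Standard fuzzy c-means: alternate centroid and membership updates.

    X : (n_samples, n_features) data, c : number of clusters/rules,
    m : fuzziness parameter (> 1), the value swept in the identification algorithm.
    """
    rng = np.random.default_rng(seed)
    U = rng.random((c, X.shape[0]))
    U /= U.sum(axis=0)                      # memberships sum to 1 for each sample
    for _ in range(iters):
        W = U ** m
        centroids = (W @ X) / W.sum(axis=1, keepdims=True)      # weighted means
        d = np.linalg.norm(X[None, :, :] - centroids[:, None, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))      # standard FCM membership update
        U /= U.sum(axis=0)
    return centroids, U

# Synthetic stand-in for the open-loop experimental data used in the paper.
X = np.vstack([np.random.randn(50, 2) + [0, 0], np.random.randn(50, 2) + [5, 5]])
centroids, U = fuzzy_c_means(X, c=2, m=1.5)
print(centroids)
```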
3.2. Optimization of the Fuzzy Parameter m
Algorithm: Pole placement optimization and PDC control.
- Run the PSO algorithm on the closed-loop system, searching candidate pole locations using Equation ( ).
- Evaluate the objective function for each candidate via pole assignment.
- Apply PDC control for each rule i = 1 to N.
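Both algorithms use particle swarm optimization as the search engine. The following is a minimal, generic PSO loop for reference; the objective function, bounds, and hyperparameters are placeholders rather than the ones used in the paper (the toy objective simply recovers a value inside the 1.1–3 range used for m above).

```python
import numpy as np

def pso(objective, dim, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize `objective` over a box with a basic particle swarm.

    In the paper's setting the decision variables would be, e.g., the fuzziness
    parameter m or candidate closed-loop pole locations, and the objective a
    model-fit or control-performance measure.
    """
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))          # particle positions
    v = np.zeros_like(x)                                  # particle velocities
    pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)                        # keep particles in bounds
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Placeholder objective: distance of a single decision variable from 2.0,
# searched over the 1.1-3 interval (the m sweep range named in the algorithm).
best, best_val = pso(lambda p: (p[0] - 2.0) ** 2, dim=1, bounds=(1.1, 3.0))
print(best, best_val)
```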
3.3. Closed-Loop System Poles Optimization
4. PDC Fuzzy Control
Comparison of Implementing a PDC vs. a PD Controller
5. Conclusions
Author Contributions
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
ICE | Internal Combustion Engine |
MSE | Mean Square Error |
MSO | Mean Square Output |
PDC | Parallel Distributed Controller |
PGS | Power Generation System |
PSO | Particle Swarm Optimization |
QoF | Quality of Fit |
T–S | Takagi–Sugeno |
Cluster centroids and u values:

| Centroids | | | u |
|---|---|---|---|
| 1 | 45.5960 | 45.5858 | 0.7158 |
| 2 | 38.8837 | 38.8710 | 0.6258 |
| 3 | 21.2521 | 21.2416 | 0.3369 |
| 4 | 25.3004 | 23.9919 | 0.4860 |

Tables in the original article also report the PSO tuning settings for the optimal fuzziness value m and for the optimal poles (search bounds, number of particles, inertia coefficient, number of iterations) as well as the resulting controller gains.

Performance indices with and without PSO tuning:

| | ISE | IAE | ITAE |
|---|---|---|---|
| With PSO | 2.6677 | 51.6501 | 8.02 |
| Without PSO | 6.9970 | 2.6452 | 2.1061 |
Share and Cite
Reyes, C.; Ramos-Fernández, J.C.; Espinoza, E.S.; Lozano, R. Fuzzy Modelling Algorithms and Parallel Distributed Compensation for Coupled Electromechanical Systems. Algorithms 2024, 17, 391. https://doi.org/10.3390/a17090391
Computer Science > Distributed, Parallel, and Cluster Computing
Title: Rapid GPU-Based Pangenome Graph Layout
Abstract: Computational Pangenomics is an emerging field that studies genetic variation using a graph structure encompassing multiple genomes. Visualizing pangenome graphs is vital for understanding genome diversity. Yet, handling large graphs can be challenging due to the high computational demands of the graph layout process. In this work, we conduct a thorough performance characterization of a state-of-the-art pangenome graph layout algorithm, revealing significant data-level parallelism, which makes GPUs a promising option for compute acceleration. However, irregular data access and the algorithm's memory-bound nature present significant hurdles. To overcome these challenges, we develop a solution implementing three key optimizations: a cache-friendly data layout, coalesced random states, and warp merging. Additionally, we propose a quantitative metric for scalable evaluation of pangenome layout quality. Evaluated on 24 human whole-chromosome pangenomes, our GPU-based solution achieves a 57.3x speedup over the state-of-the-art multithreaded CPU baseline without layout quality loss, reducing execution time from hours to minutes.
Comments: SC 2024
Subjects: Distributed, Parallel, and Cluster Computing (cs.DC); Computational Engineering, Finance, and Science (cs.CE); Data Structures and Algorithms (cs.DS)
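To make the "cache-friendly data layout" optimization mentioned in the abstract concrete, here is a minimal sketch in plain C++ of the general array-of-structures versus structure-of-arrays idea. The type and function names are hypothetical and not taken from the paper, and the paper's implementation targets GPU memory rather than CPU caches; this is only an illustration of the underlying principle.

```cpp
// Illustrative sketch only (not from the paper): store fields that are accessed
// together contiguously (structure-of-arrays) instead of interleaving all fields
// of a node (array-of-structures). Names are hypothetical.
#include <cstddef>
#include <iostream>
#include <vector>

// Array-of-structures: x, y, vx, vy of different nodes are interleaved, so a
// pass that only touches positions still drags velocities through the cache.
struct NodeAoS { float x, y, vx, vy; };

// Structure-of-arrays: each field is contiguous, so a streaming pass over the
// positions reads only the bytes it actually needs.
struct LayoutSoA {
    std::vector<float> x, y, vx, vy;
    explicit LayoutSoA(std::size_t n) : x(n), y(n), vx(n), vy(n) {}
};

void step_aos(std::vector<NodeAoS>& nodes, float dt) {
    for (auto& n : nodes) { n.x += n.vx * dt; n.y += n.vy * dt; }
}

void step_soa(LayoutSoA& l, float dt) {
    for (std::size_t i = 0; i < l.x.size(); ++i) {
        l.x[i] += l.vx[i] * dt;
        l.y[i] += l.vy[i] * dt;
    }
}

int main() {
    std::vector<NodeAoS> aos(1 << 20, NodeAoS{0.f, 0.f, 1.f, 1.f});
    LayoutSoA soa(1 << 20);
    step_aos(aos, 0.016f);
    step_soa(soa, 0.016f);
    std::cout << aos[0].x << " " << soa.x[0] << "\n";
}
```

The same principle underlies coalesced access on a GPU: threads in a warp read consecutive elements of one field instead of strided fields of one struct.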
Top 10 Research Topics in Parallel and Distributed Computing
The growing load on internet services, together with the increasing availability of big data and large numbers of concurrent users, makes it necessary to carry out computing tasks in parallel. Parallel and distributed computing appears in several research areas such as networks, software engineering, computer science, computer architecture, operating systems, and algorithms. At present, our research experts provide complete research support and guidance for all the research topics in parallel and distributed computing. The essential building blocks of parallel and distributed computing are highlighted below: shared memory models, mutual exclusion, concurrency, message passing, memory manipulation, etc.
Parallel computing is deployed where high-speed processing power is required; supercomputers are the best-known example. Distributed computing, by contrast, is used when the cooperating computers are in different geographic locations.
- Software-defined fog node in blockchain architecture & cloud computing
- Multi clustering approach in mobile edge computing
- Distributed computing & smart city services
- Geo distributed fog computing
- Service attacks in software-defined network with cloud computing
- Distributed trust protocol for IaaS cloud computing
- Large scale convolutional neural networks
- Parallel vertex-centric algorithms
- Partitioning algorithms in mobile environments
- Configuration tuning for hierarchical cloud schedulers
- Distributed computing with delay tolerant network
We support the research work with the implementation of research algorithms and methodologies that shape the research projects, ensuring proper execution and appropriate code implementation.
Parallel Computing
Parallel computing performs operations simultaneously and is used to save both money and time. In general, the memory in a parallel system may be organized in two ways: distributed or shared. The processors in parallel computing perform numerous tasks that are assigned to them concurrently.
Distributed Computing
Distributed computing is quite different from parallel computing because in distributed computing a task is divided among several computers. The computers pass messages among themselves, and shared memory is not used. Several autonomous computers appear as a single computer to the users.
What are the Characteristics of Parallel and Distributed Computing?
- Nodes communicate with one another and can access resources located on other nodes
- Failures are detected, and the system recovers from them as quickly as possible (a minimal timeout-based sketch follows this list)
- Computation and processing are spread across several machines
- Similar tasks run on several machines at the same time
- Openness of the software structure and its extensibility
- Distribution of hardware, software, and data
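As a rough illustration of the failure-detection point above, here is a minimal timeout-based sketch. The names are hypothetical and it is not drawn from any specific framework: a node whose heartbeat has not been seen within a timeout is marked as suspected, which is typically the first step before recovery or failover.

```cpp
// Toy failure detector (hypothetical names): a node is suspected when its
// heartbeat has not been observed within the configured timeout.
#include <chrono>
#include <iostream>
#include <map>
#include <string>

using Clock = std::chrono::steady_clock;

struct FailureDetector {
    std::chrono::milliseconds timeout{500};
    std::map<std::string, Clock::time_point> last_heartbeat;

    void heartbeat(const std::string& node) { last_heartbeat[node] = Clock::now(); }

    bool suspected(const std::string& node) const {
        auto it = last_heartbeat.find(node);
        if (it == last_heartbeat.end()) return true;   // never heard from it
        return Clock::now() - it->second > timeout;    // heartbeat too old
    }
};

int main() {
    FailureDetector fd;
    fd.heartbeat("node-a");
    // node-b never sends a heartbeat, so it is suspected immediately.
    std::cout << "node-a suspected: " << fd.suspected("node-a") << "\n";  // 0
    std::cout << "node-b suspected: " << fd.suspected("node-b") << "\n";  // 1
}
```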
Parallel Computing Versus Distributed Computing
- Parallel and distributed computing differ from each other. In distributed computing, several computers appear to the user as a single system and cooperate on a single task by exchanging messages. In parallel computing, a single task is split into several subtasks that are allocated to multiple processors
- Distributed computing is chosen where high scalability is required; parallel computing is preferred where raw processing speed matters most
- A common master clock is used for synchronization in parallel computing, whereas synchronization algorithms are used in distributed computing
- In distributed computing, each computer has its own memory and processors; in parallel computing, one memory is shared by all the processors
- Parallel computing has limited scalability, whereas distributed systems can keep growing by adding machines to the network
- Tasks in distributed computing are largely independent of one another, while in parallel computing the subtasks depend on each other because the output of one stage is the input of the next
- Parallel computing involves a single system acting as multiple hosts, whereas distributed computing involves many separate systems (see the shared-memory sketch after this list)
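To make the contrast concrete, the sketch below shows the shared-memory (parallel) side: one task is split across threads that all share a single address space. It is only an assumed, minimal example; in a distributed setting each node would instead hold its own slice of the data in private memory and exchange partial sums via messages.

```cpp
// Minimal shared-memory parallel sum: one task split across threads that
// share the same address space; the final reduce also happens in shared memory.
#include <algorithm>
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    std::vector<double> data(1'000'000, 1.0);
    const unsigned nthreads = std::max(1u, std::thread::hardware_concurrency());
    std::vector<double> partial(nthreads, 0.0);
    std::vector<std::thread> workers;

    for (unsigned t = 0; t < nthreads; ++t) {
        workers.emplace_back([&, t] {
            const std::size_t chunk = data.size() / nthreads;
            const std::size_t begin = t * chunk;
            const std::size_t end = (t + 1 == nthreads) ? data.size() : begin + chunk;
            partial[t] = std::accumulate(data.begin() + begin, data.begin() + end, 0.0);
        });
    }
    for (auto& w : workers) w.join();

    // A distributed version would gather the partial sums with messages instead.
    std::cout << std::accumulate(partial.begin(), partial.end(), 0.0) << "\n";
}
```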
Distributed Parallel Computing
A distributed parallel computing system is deployed so that several computers in a single network can carry out their allocated tasks together. In general, many applications are built on distributed and parallel computing systems, such as
- Grid Computing
- Cloud computing
- Distributed supercomputers
- Travel reservation
- Electronic banking
- Cloud storage system
- Internet, intranet & email system
- Peer to peer network
Below, our research experts list the pioneering research topics in parallel and distributed computing, a significant field that links computers across various geographic locations. The main research areas in parallel and distributed computing are as follows.
Recent Research Areas of Parallel and Distributed Computing
- Heterogeneous computing
- Biological & molecular computing
- Supercomputing
- Computational intelligence
- Quality development using HPC
- Distributed data storage & cloud architecture
- Federated ML & shared memory
- Fault tolerance software system
- ML & AI
- Distributed grid computing
- Web technologies
- Distribution & management system in multimedia
- Mobile crowdsensing
- IoT & multi-tier computing
At present, issues arise from many different sources in parallel and distributed computing. Our research experts provide sound solutions for all the research challenges mentioned below. Let us now discuss the most significant of them.
Latest Research Issues of Parallel and Distributed Computing
- Additional functions such as logging, intelligence, load balancing, and monitoring are needed to provide system visibility
- Inappropriate message communication (messages delivered to the wrong nodes) can break down communication
- Coordinating the sequence of changes to data is a complex issue in distributed computing and can cause nodes to fail, stop, and restart
- Designing multi-purpose stream processing pipelines
- Degradation or failure of node functions
- Structural designs whose functions scale only linearly with a reasonable amount of resources
- Operating as a data warehouse for large corporations
By addressing these research challenges in parallel and distributed computing, our technical experts have also identified some significant requirements of the field. This helps research scholars become familiar with the most substantial real-time requirements in current research topics in parallel and distributed computing.
Future Research Directions of Parallel and Distributed Computing Projects
- Distributed memory parallel computing
- Distinctive purpose & hybrid structural design
- Accelerators & multicore functions
- Cloud Computing
- High performance & shared memory computing
- Developing domain applications
- Structure of supercomputing & applications
Research scholars can get the best guidance on handling parallel and distributed computing tools from our research and development experts. In this regard, let us look at some of the important distributed computing tools below; a minimal MPI sketch follows the list.
Development Tools for Parallel and Distributed Computing Projects
- DAGH & CUDA
- ARCH & MPICH
- PPGP & PADE
- Zabbix & Nimrod
- SPRNG & Apache Hadoop
- Paralib & simGrid
- Alchemi & distributed folding GUI
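As a small, hedged example of the message-passing style supported by tools such as MPICH from the list above, the sketch below lets each rank compute a local value and reduces the results on rank 0. It is only an illustration of the model, not a project template.

```cpp
// Minimal MPI sketch: each rank computes a local value and rank 0 collects the sum.
#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int rank = 0, size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double local = rank + 1.0;   // stand-in for real per-rank work
    double total = 0.0;
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        std::printf("sum over %d ranks = %f\n", size, total);

    MPI_Finalize();
    return 0;
}
```

Assuming an MPICH-style installation, this would typically be built and run with something like `mpicxx example.cpp -o example` followed by `mpiexec -n 4 ./example`.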
To this end, we believe you now have an end-to-end path for selecting research topics in parallel and distributed computing, and the information above will help you proceed with your research. If you want to become an expert, a good tutor helps, and we have several research experts available for scholars' research assistance. We are ready to help and clear up all your difficulties at any stage, so you can enrich your skills through our support.
Technology | Ph.D | MS | M.Tech |
---|---|---|---|
NS2 | 75 | 117 | 95 |
NS3 | 98 | 119 | 206 |
OMNET++ | 103 | 95 | 87 |
OPNET | 36 | 64 | 89 |
QUALNET | 30 | 76 | 60 |
MININET | 71 | 62 | 74 |
MATLAB | 96 | 185 | 180 |
LTESIM | 38 | 32 | 16 |
COOJA SIMULATOR | 35 | 67 | 28 |
CONTIKI OS | 42 | 36 | 29 |
GNS3 | 35 | 89 | 14 |
NETSIM | 35 | 11 | 21 |
EVE-NG | 4 | 8 | 9 |
TRANS | 9 | 5 | 4 |
PEERSIM | 8 | 8 | 12 |
GLOMOSIM | 6 | 10 | 6 |
RTOOL | 13 | 15 | 8 |
KATHARA SHADOW | 9 | 8 | 9 |
VNX and VNUML | 8 | 7 | 8 |
WISTAR | 9 | 9 | 8 |
CNET | 6 | 8 | 4 |
ESCAPE | 8 | 7 | 9 |
NETMIRAGE | 7 | 11 | 7 |
BOSON NETSIM | 6 | 8 | 9 |
VIRL | 9 | 9 | 8 |
CISCO PACKET TRACER | 7 | 7 | 10 |
SWAN | 9 | 19 | 5 |
JAVASIM | 40 | 68 | 69 |
SSFNET | 7 | 9 | 8 |
TOSSIM | 5 | 7 | 4 |
PSIM | 7 | 8 | 6 |
PETRI NET | 4 | 6 | 4 |
ONESIM | 5 | 10 | 5 |
OPTISYSTEM | 32 | 64 | 24 |
DIVERT | 4 | 9 | 8 |
TINY OS | 19 | 27 | 17 |
TRANS | 7 | 8 | 6 |
OPENPANA | 8 | 9 | 9 |
SECURE CRT | 7 | 8 | 7 |
EXTENDSIM | 6 | 7 | 5 |
CONSELF | 7 | 19 | 6 |
ARENA | 5 | 12 | 9 |
VENSIM | 8 | 10 | 7 |
MARIONNET | 5 | 7 | 9 |
NETKIT | 6 | 8 | 7 |
GEOIP | 9 | 17 | 8 |
REAL | 7 | 5 | 5 |
NEST | 5 | 10 | 9 |
PTOLEMY | 7 | 8 | 4 |
Game Programming Meets Parallel Computing: Teaching Parallel Computing Concepts to Game Programmers
- Conference paper
- First Online: 03 September 2024
- Neil Patrick Del Gallego
Part of the book series: Lecture Notes in Electrical Engineering (LNEE, volume 1199)
Included in the following conference series:
- International Conference on Advances in Computational Science and Engineering
With the rapid advancement of the gaming industry and the demand to create high-performance games, game programmers must be well-versed in concurrent programming and parallel computing concepts. This paper presents a set of project-based learning activities for teaching parallel computing concepts to game programmers. We propose four project-based activities: (1) asynchronous image loader, (2) multi-threaded ray tracer, (3) interactive loading screen, and (4) interactive 3D model viewer. Our projects require computer graphics and game programming concepts, differentiating our projects from other parallel computing course assignments. Such projects may be of interest to undergraduate students who wish to pursue game programming in the future. Similarly, the projects may be taught to game programmers who have yet to learn about parallel computing. A pilot course delivery was conducted at De La Salle University, where 39 students have enrolled. Students have generally achieved high scores (>80%), implying the effectiveness of the learning content in developing the projects.
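The following is a minimal C++ sketch in the spirit of the first project (the asynchronous image loader); it is not the course's actual starter code, and the file name and image format are placeholders. The main loop keeps ticking while a worker thread "decodes" an image, and the result is picked up once the future is ready.

```cpp
// Asynchronous image loading sketch: the render/update loop never blocks while
// a background task produces the image.
#include <chrono>
#include <future>
#include <iostream>
#include <string>
#include <thread>
#include <vector>

struct Image {                    // stand-in for a decoded texture
    std::string name;
    std::vector<unsigned char> pixels;
};

Image load_image(const std::string& path) {
    std::this_thread::sleep_for(std::chrono::milliseconds(300)); // fake decode cost
    return Image{path, std::vector<unsigned char>(256 * 256 * 4, 0)};
}

int main() {
    // Kick off loading on another thread so the main loop never blocks.
    std::future<Image> pending =
        std::async(std::launch::async, load_image, std::string("textures/hero.png"));

    bool uploaded = false;
    for (int frame = 0; frame < 120 && !uploaded; ++frame) {
        // ... update game state, draw a placeholder / loading screen ...
        if (pending.wait_for(std::chrono::milliseconds(0)) == std::future_status::ready) {
            Image img = pending.get();          // safe: already ready, so no blocking
            std::cout << "frame " << frame << ": uploaded " << img.name
                      << " (" << img.pixels.size() << " bytes)\n";
            uploaded = true;                    // in a real engine: create the GPU texture here
        }
        std::this_thread::sleep_for(std::chrono::milliseconds(16)); // ~60 FPS tick
    }
}
```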
The authors would like to give thanks to the DLSU students for giving valuable feedback about the course. The authors would like to acknowledge De La Salle University and DLSU Science Foundation for funding this research.
Author information
Authors and Affiliations
De La Salle University, 2401 Taft Ave, Malate, 1004, Manila, Metro Manila, Philippines
Neil Patrick Del Gallego
Graphics, Animation, Multimedia, and Entertainment (GAME) Lab, De La Salle University, 2401 Taft Ave, Malate, 1004, Manila, Metro Manila, Philippines
Corresponding author
Correspondence to Neil Patrick Del Gallego.
Editor information
Editors and Affiliations
Technology Park Malaysia, Asia Pacific University of Technology and Innovation, Kuala Lumpur, Malaysia
Vinesh Thiruchelvam
Faculty of Computing and Informatics, Creative Advanced Machine Intelligence Research Centre, Universiti Malaysia Sabah, Kota Kinabalu, Malaysia
Rayner Alfred
Higher Colleges of Technology, Abu Dhabi, Abu Dhabi, United Arab Emirates
Zamhar Iswandono Bin Awang Ismail
Department of Informatics, Mulawarman University, Samarinda, Indonesia
Haviluddin Haviluddin
School of Engineering and Technology, Sunway University, Petaling Jaya, Selangor, Malaysia
Aslina Baharum
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper.
Del Gallego, N.P. (2024). Game Programming Meets Parallel Computing: Teaching Parallel Computing Concepts to Game Programmers. In: Thiruchelvam, V., Alfred, R., Ismail, Z.I.B.A., Haviluddin, H., Baharum, A. (eds) Proceedings of the 4th International Conference on Advances in Computational Science and Engineering. ICACSE 2023. Lecture Notes in Electrical Engineering, vol 1199. Springer, Singapore. https://doi.org/10.1007/978-981-97-2977-7_2
Download citation
DOI: https://doi.org/10.1007/978-981-97-2977-7_2
Published: 03 September 2024
Publisher Name: Springer, Singapore
Print ISBN: 978-981-97-2976-0
Online ISBN: 978-981-97-2977-7
eBook Packages: Intelligent Technologies and Robotics (R0)
6th Workshop on Education for High Performance Computing (EduHiPC 2024)
18 December 2024, Bengaluru, India
In conjunction with the 31st IEEE International Conference on High-Performance Computing, Data, & Analytics (HiPC 2024)

Call for Paper Submission

High Performance Computing (HPC) and, in general, Parallel and Distributed Computing (PDC) is ubiquitous. Every computing device, from a smartphone to a supercomputer, relies on parallel processing. Compute clusters of multicore and manycore processors (CPUs and GPUs) are routinely used in many subdomains of computer science (CS) such as computer vision, data science, parallel machine learning, and high performance computing. Therefore, it is important for every programmer, software professional, and CS researcher to understand how parallelism and distributed computing affect problem solving. It is essential for educators to impart a range of PDC and HPC skills and knowledge at multiple levels within the curriculum of Computer Science (CS), Computer Engineering (CE), and related disciplines such as computational data science. The software industry and research laboratories require people with these skills, now more than ever, and therefore engage in extensive on-the-job training. Additionally, rapid changes in hardware platforms, languages, and programming environments increasingly challenge educators to decide what to teach and how to teach it, in order to prepare students for careers that involve PDC and HPC.

EduHiPC aims to provide a forum that brings together academia, industry, government, and non-profit organizations – especially from India, its vicinity, and Asia – for exploring and exchanging experiences and ideas about the inclusion of high-performance, parallel, and distributed computing in the undergraduate and graduate curricula of Computer Science, Computer Engineering, Computational Science, Computational Engineering, and computational courses for STEM, business, and other non-STEM disciplines.

The 6th EduHiPC (EduHiPC 2024) workshop invites unpublished manuscripts from individuals or teams from academia, industry, and other educational and research institutes from all over the world on the teaching of PDC topics in the Computer Science and Computer Engineering curriculum as well as in domain-specific computational and data science and engineering curricula. EduHiPC invites researchers, scholars, and practitioners to submit their work for consideration in either of the two paper tracks or for the posters or peachy assignments sessions. Additionally, we encourage manuscripts that validate their innovative approaches through the systematic collection and analysis of information to evaluate their performance and impact.

The workshop is particularly dedicated to bringing together stakeholders from industry (hardware vendors and research and development organizations), government labs, and academia in the context of HiPC 2024. The goal of the workshop is to hear the challenges faced by educators and professionals, to learn about various approaches to addressing these challenges, and to have opportunities to exchange ideas and solutions. We also encourage submissions related to the challenges of imparting education during the recent global pandemic and to online evaluation mechanisms for PDC/HPC. This effort is in coordination with the Center for Parallel and Distributed Computing Curriculum Development and Educational Resources (CDER).

Topics of interest include, but are not limited to:
SUBMISSION GUIDELINES

Authors should submit papers in PDF format through the submission site (https://easychair.org/conferences/?conf=eduhipc2024). We are accepting submissions for Track 1 Full Papers (6-8 pages), Track 2 Short Papers (3-4 pages), Posters (2-page abstracts), and Peachy Parallel Assignments (2-page abstracts); please see the details below for each category of submission. All entries must be submitted via the submission site. Ensure that submissions adhere to the IEEE format (https://www.ieee.org/conferences/publishing/templates.html), featuring single-spaced, double-column pages with proper inclusion of figures, tables, and references.

Accepted regular and short papers will be published in the workshop proceedings and included in the IEEE Xplore digital library, and authors will present their work in a technical workshop session. Authors of accepted Posters and Peachy Assignments will present their work during the workshop poster sessions. Summary papers of all accepted posters and all accepted Peachy Assignments will also be published in the workshop proceedings. Proceedings of the workshops are distributed at the conference and will be included in the IEEE Xplore Digital Library after the conference. Summary papers will be written by the Poster and Peachy Assignment chairs and will include, as co-authors, all Poster and Peachy Assignment authors. In addition, all individual abstracts, posters, and preprints of papers will be published on the CDER website.

Papers: Authors are asked to submit 6-8 page papers in PDF format for Track 1 and 3-4 page papers in PDF format for Track 2. Submissions will be reviewed based on the novelty of contributions, impact on the broader undergraduate curriculum (particularly on the core curriculum), relevance to the workshop's goals, and, for experience papers, the results of their evaluation and the evaluation methodology.

Posters: High-quality poster presentations are an integral part of EduHiPC. We seek posters (2-page abstracts) describing recent or ongoing research in PDC education.

Peachy Parallel Assignments: Course assignments are integral to student learning and also play an important role in student perceptions of the field. EduHiPC will include a session showcasing "Peachy Parallel Assignments" – high-quality assignments, previously tested in class, that are readily adoptable by other educators teaching topics in parallel and distributed computing. Assignments may be previously published, but the author must have the right to publish a description of them and share all supporting materials. We are seeking assignments that are:
Assignments can cover any topics in Parallel and Distributed Computing. Preference will be given to assignments aimed at students in the early courses. Submissions (2-page abstracts) should describe the assignment and its contextual usage and include a link to a web page containing the complete set of files given to students (assignment description, supporting code, etc.). The document should cover the following items: What is the main idea of the assignment? What concepts are covered? Who are its targeted students? In what context have you used it? What prerequisite material does it assume they have seen? What are its strengths and weaknesses? Are there any variations that may be of interest?

Authors of papers accepted as poster papers will be invited to revise their papers in a 2-page format. Authors of all accepted full and short papers must be present at the workshop.

IMPORTANT DATES - EduHiPC2024 Workshop
- Submission site open: September 1, 2024
- Abstract Submission Deadline: September 16, 2024 (Encouraged)
- Full Paper Submission Deadline: September 23, 2024
- Author Notification Deadline: October 20, 2024
- Camera-ready Deadline: November 6, 2024
All deadlines are at 11:59 PM AoE (UTC-12).

Registration Fees for Accepted Papers
At least one author of an accepted paper must register and present the paper as per HiPC2024 Registration Rates.

ORGANIZATION COMMITTEE
- Sushil Prasad, University of Texas, San Antonio, USA
- Sheikh Ghafoor, Tennessee Tech University, USA
- Alan Sussman, National Science Foundation & University of Maryland, USA
- Ramachandran Vaidyanathan, Louisiana State University, USA
- Charles Weems, University of Massachusetts, USA
- Ashish Kuvelkar, C-DAC, India
- Sharad Sinha, IIT Goa, India
- Neelima Bayyapu, MIT, Manipal, India

Workshop Co-Chairs
- Sushil K. Prasad, University of Texas San Antonio, USA, [email protected]
- Ashish Kuvelkar, C-DAC, India, [email protected]

Program Co-Chairs
- Sharad Sinha, IIT Goa, India, [email protected]
- Neelima Bayyapu, MIT, Manipal, India, [email protected]