Distributed Systems and Parallel Computing

No matter how powerful individual computers become, there are still reasons to harness the power of multiple computational units, often spread across large geographic areas. Sometimes this is motivated by the need to collect data from widely dispersed locations (e.g., web pages from servers, or sensors for weather or traffic). Other times it is motivated by the need to perform enormous computations that simply cannot be done by a single CPU.

From our company’s beginning, Google has had to deal with both issues in our pursuit of organizing the world’s information and making it universally accessible and useful. We continue to face many exciting distributed systems and parallel computing challenges in areas such as concurrency control, fault tolerance, algorithmic efficiency, and communication. Some of our research involves answering fundamental theoretical questions, while other researchers and engineers are engaged in the construction of systems to operate at the largest possible scale, thanks to our hybrid research model.


Some of our teams:

  • Algorithms & optimization
  • Graph mining
  • Network infrastructure
  • System performance

We're always looking for more talented, passionate people; see Careers.

Journal of Parallel and Distributed Computing


Subject Area and Category

  • Artificial Intelligence
  • Computer Networks and Communications
  • Hardware and Architecture
  • Software
  • Theoretical Computer Science

Publisher: Academic Press Inc.

Publication type: Journal

ISSN: 0743-7315, 1096-0848


Journals are ranked according to their SJR and divided into four equal groups (quartiles): Q1 (green) comprises the quarter of journals with the highest values, Q2 (yellow) the second highest, Q3 (orange) the third highest, and Q4 (red) the lowest.
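For concreteness, a minimal Python sketch of that quartile assignment, assuming a handful of hypothetical journals and SJR values:

```python
# Minimal sketch (hypothetical data): rank journals in a category by SJR,
# then split the ranked list into four equal groups, Q1 = top quarter.
sjr_by_journal = {"J1": 1.19, "J2": 0.42, "J3": 2.80, "J4": 0.05}

ranked = sorted(sjr_by_journal, key=sjr_by_journal.get, reverse=True)
n = len(ranked)
quartiles = {}
for pos, journal in enumerate(ranked):
    quartiles[journal] = "Q%d" % (pos * 4 // n + 1)

print(quartiles)  # {'J3': 'Q1', 'J1': 'Q2', 'J2': 'Q3', 'J4': 'Q4'}
```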

Quartile by category and year (AI = Artificial Intelligence, CNC = Computer Networks and Communications, H&A = Hardware and Architecture, SW = Software, TCS = Theoretical Computer Science):

Year  AI  CNC  H&A  SW  TCS
1999  Q2  Q2   Q2   Q2  Q2
2000  Q2  Q2   Q2   Q2  Q2
2001  Q3  Q2   Q2   Q2  Q2
2002  Q3  Q3   Q3   Q3  Q4
2003  Q3  Q2   Q2   Q2  Q3
2004  Q2  Q2   Q2   Q2  Q2
2005  Q2  Q2   Q2   Q2  Q3
2006  Q2  Q2   Q2   Q2  Q2
2007  Q2  Q2   Q2   Q3  Q3
2008  Q2  Q2   Q2   Q2  Q2
2009  Q2  Q2   Q2   Q2  Q3
2010  Q2  Q2   Q2   Q2  Q3
2011  Q2  Q2   Q2   Q2  Q3
2012  Q3  Q2   Q2   Q2  Q3
2013  Q3  Q2   Q2   Q2  Q3
2014  Q2  Q2   Q2   Q2  Q3
2015  Q2  Q1   Q1   Q2  Q2
2016  Q2  Q1   Q1   Q2  Q2
2017  Q2  Q2   Q1   Q2  Q2
2018  Q2  Q2   Q2   Q2  Q3
2019  Q2  Q2   Q2   Q2  Q3
2020  Q2  Q1   Q1   Q2  Q2
2021  Q1  Q1   Q1   Q1  Q1
2022  Q2  Q1   Q1   Q1  Q1
2023  Q2  Q1   Q1   Q1  Q1

The SJR is a size-independent prestige indicator that ranks journals by their 'average prestige per article'. It is based on the idea that 'all citations are not created equal'. SJR is a measure of the scientific influence of a journal that accounts for both the number of citations it receives and the importance or prestige of the journals those citations come from. It measures the scientific influence of the average article in a journal, expressing how central to the global scientific discussion an average article of the journal is.
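The exact SJR computation involves damping and normalization details not described here; the following Python sketch shows only the underlying PageRank-style idea of prestige-weighted citations, on a hypothetical three-journal citation matrix:

```python
import numpy as np

# Simplified sketch of the idea behind SJR: a journal's prestige is the
# prestige of the journals that cite it, distributed over their outgoing
# citations. The real SJR adds damping and size normalizations not shown
# here; the citation matrix below is hypothetical.
C = np.array([[0, 3, 1],     # C[i, j] = citations from journal i to journal j
              [2, 0, 4],
              [1, 2, 0]], dtype=float)

out = C.sum(axis=1, keepdims=True)   # total citations given by each journal
P = C / out                          # row-stochastic transfer matrix
prestige = np.full(3, 1 / 3)         # start from uniform prestige
for _ in range(100):                 # power iteration to the fixed point
    prestige = prestige @ P
print(prestige)
```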

Year  SJR
1999  0.358
2000  0.436
2001  0.444
2002  0.312
2003  0.459
2004  0.499
2005  0.489
2006  0.490
2007  0.442
2008  0.586
2009  0.489
2010  0.509
2011  0.485
2012  0.397
2013  0.437
2014  0.548
2015  0.614
2016  0.597
2017  0.502
2018  0.417
2019  0.525
2020  0.638
2021  1.289
2022  1.158
2023  1.187

Evolution of the number of published documents. All types of documents are considered, including citable and non-citable documents.

Year  Documents
1999  71
2000  67
2001  91
2002  84
2003  102
2004  103
2005  124
2006  124
2007  92
2008  119
2009  89
2010  104
2011  135
2012  145
2013  143
2014  123
2015  97
2016  86
2017  171
2018  207
2019  214
2020  170
2021  175
2022  155
2023  125

This indicator counts the number of citations received by documents from a journal and divides them by the total number of documents published in that journal. The chart shows the evolution of the average number of times documents published in a journal in the past two, three and four years have been cited in the current year. The two-year line is equivalent to the journal impact factor (Thomson Reuters) metric.
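A small Python sketch of that windowed computation, using hypothetical counts of documents and of citations broken down by publication year:

```python
# Windowed "cites per document": citations received in year y by documents
# published in the previous `w` years, divided by the number of those
# documents. Both input series below are hypothetical.
docs  = {2019: 214, 2020: 170, 2021: 175, 2022: 155}          # documents per year
cites = {2023: {2019: 150, 2020: 170, 2021: 300, 2022: 280}}  # cites in 2023, by pub year

def cites_per_doc(year, window):
    pub_years = range(year - window, year)
    received = sum(cites[year].get(y, 0) for y in pub_years)
    published = sum(docs.get(y, 0) for y in pub_years)
    return received / published

print(round(cites_per_doc(2023, 2), 3))  # two-year window, cf. the impact factor
print(round(cites_per_doc(2023, 4), 3))  # four-year window
```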

Year  Cites/Doc (4 years)  Cites/Doc (3 years)  Cites/Doc (2 years)
1999  0.890                0.890                0.887
2000  1.014                1.116                0.907
2001  0.997                0.825                0.746
2002  0.854                0.777                0.753
2003  1.058                1.066                1.074
2004  1.535                1.552                1.575
2005  1.905                1.869                1.971
2006  1.685                1.775                1.256
2007  1.704                1.390                1.222
2008  1.603                1.591                1.713
2009  1.845                1.910                2.085
2010  2.179                2.397                2.264
2011  2.495                2.429                1.948
2012  2.244                2.079                2.113
2013  2.228                2.313                2.079
2014  2.624                2.461                2.351
2015  2.484                2.455                2.466
2016  2.868                2.981                2.882
2017  3.131                3.271                2.863
2018  2.994                2.669                2.261
2019  3.198                3.134                3.204
2020  3.729                3.939                4.468
2021  4.349                4.997                5.401
2022  4.872                5.211                4.965
2023  5.183                5.174                4.736

Evolution of the total number of citations and journal self-citations received by a journal's published documents during the three previous years. Journal self-citation is defined as the number of citations from a journal's own articles to articles published in the same journal.

Year  Self Cites  Total Cites
1999  17          325
2000  13          317
2001  9           179
2002  4           178
2003  11          258
2004  13          430
2005  20          540
2006  18          584
2007  10          488
2008  30          541
2009  14          640
2010  18          719
2011  19          758
2012  23          682
2013  41          888
2014  35          1041
2015  23          1009
2016  24          1082
2017  35          1001
2018  42          945
2019  74          1454
2020  64          2332
2021  66          2953
2022  37          2913
2023  71          2587

Evolution of the number of total citation per document and external citation per document (i.e. journal self-citations removed) received by a journal's published documents during the three previous years. External citations are calculated by subtracting the number of self-citations from the total number of citations received by the journal’s documents.
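As a worked check of this definition against the tables above (a three-year window, so the denominator is the number of documents published in 2020 through 2022):

```python
# External cites per document = (total cites - self cites) / documents in window.
# The 2023 values below come from the tables in this page.
total_cites, self_cites = 2587, 71   # three-year window, year 2023
docs_prev3 = 170 + 175 + 155         # documents published 2020-2022
print(round((total_cites - self_cites) / docs_prev3, 3))  # 5.032, matching the table
print(round(total_cites / docs_prev3, 3))                 # 5.174 total cites per doc
```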

Year  External Cites per Document  Cites per Document
1999  0.844                        0.890
2000  1.070                        1.116
2001  0.783                        0.825
2002  0.760                        0.777
2003  1.021                        1.066
2004  1.505                        1.552
2005  1.799                        1.869
2006  1.720                        1.775
2007  1.362                        1.390
2008  1.503                        1.591
2009  1.869                        1.910
2010  2.337                        2.397
2011  2.369                        2.429
2012  2.009                        2.079
2013  2.206                        2.313
2014  2.378                        2.461
2015  2.399                        2.455
2016  2.915                        2.981
2017  3.157                        3.271
2018  2.551                        2.669
2019  2.974                        3.134
2020  3.831                        3.939
2021  4.885                        4.997
2022  5.145                        5.211
2023  5.032                        5.174

International Collaboration accounts for articles that have been produced by researchers from several countries. The chart shows the ratio of a journal's documents signed by researchers from more than one country, that is, including more than one country among the author addresses.

Year  International Collaboration (%)
1999  23.94
2000  26.87
2001  18.68
2002  27.38
2003  26.47
2004  25.24
2005  26.61
2006  28.23
2007  27.17
2008  25.21
2009  25.84
2010  32.69
2011  26.67
2012  29.66
2013  30.77
2014  30.08
2015  37.11
2016  29.07
2017  32.75
2018  43.96
2019  37.85
2020  48.24
2021  35.43
2022  38.06
2023  38.40

Not every article in a journal is considered primary research and therefore "citable". This chart shows the ratio of a journal's articles including substantial research (research articles, conference papers and reviews) in three-year windows versus those documents other than research articles, reviews and conference papers.

Year  Non-Citable Documents  Citable Documents
1999  2                      363
2000  1                      283
2001  0                      217
2002  5                      224
2003  8                      234
2004  14                     263
2005  9                      280
2006  8                      321
2007  8                      343
2008  10                     330
2009  11                     324
2010  7                      293
2011  6                      306
2012  7                      321
2013  7                      377
2014  13                     410
2015  12                     399
2016  13                     350
2017  8                      298
2018  11                     343
2019  15                     449
2020  24                     568
2021  20                     571
2022  17                     542
2023  10                     490

Ratio of a journal's items, grouped in three-year windows, that have been cited at least once versus those not cited during the following year.

Year  Uncited Documents  Cited Documents
1999  201                164
2000  156                128
2001  134                83
2002  142                87
2003  134                108
2004  143                134
2005  132                157
2006  137                192
2007  158                193
2008  148                192
2009  119                216
2010  95                 205
2011  94                 218
2012  129                199
2013  129                255
2014  132                291
2015  134                277
2016  103                260
2017  91                 215
2018  105                249
2019  133                331
2020  151                441
2021  141                450
2022  137                422
2023  103                397

Evolution of the percentage of female authors.

Year  Female Authors (%)
1999  14.94
2000  13.51
2001  15.92
2002  13.83
2003  17.11
2004  15.79
2005  19.34
2006  15.00
2007  20.24
2008  15.43
2009  18.53
2010  19.73
2011  22.19
2012  19.62
2013  16.90
2014  18.06
2015  16.99
2016  21.72
2017  19.29
2018  21.38
2019  19.50
2020  20.00
2021  18.18
2022  22.57
2023  23.94

Evolution of the number of documents cited by public policy documents according to the Overton database.

Year  Documents Cited by Policy (Overton)
1999  1
2000  1
2001  0
2002  0
2003  0
2004  0
2005  3
2006  1
2007  0
2008  2
2009  1
2010  1
2011  0
2012  0
2013  2
2014  3
2015  1
2016  0
2017  2
2018  2
2019  4
2020  0
2021  0
2022  0
2023  0

Evolution of the number of documents related to the Sustainable Development Goals defined by the United Nations. Available from 2018 onwards.

Year  SDG-Related Documents
2018  30
2019  36
2020  26
2021  16
2022  20
2023  12

Source: Scimago Journal & Country Rank. Data source: Scopus®.


Journal of Parallel and Distributed Computing


  • ISSN: 0743-7315
  • Editor-in-Chief: V.K. Prasanna
  • 5-year impact factor: 3.4
  • Impact factor: 3.4


Researchers interested in submitting a special issue proposal should adhere to the submission guidelines.

This international journal is directed to researchers, engineers, educators, managers, programmers, and users of computers who have particular interests in parallel processing and/or distributed computing.

The Journal of Parallel and Distributed Computing publishes original research papers and timely review articles on the theory, design, evaluation, and use of parallel and/or distributed computing systems. The journal also features special issues on these topics, again covering the full range from the design to the use of the targeted systems.

Research Areas Include:

  • Theory of parallel and distributed computing
  • Parallel algorithms and their implementation
  • Innovative computer architectures
  • Parallel programming
  • Applications, algorithms and platforms for accelerators
  • Cloud, edge and fog computing
  • Data-intensive platforms and applications
  • Parallel processing of graph and irregular applications
  • Parallel and distributed programming models
  • Software tools and environments for distributed systems
  • Algorithms and systems for Internet of Things
  • Performance analysis of parallel applications
  • Architecture for emerging technologies, e.g., novel memory technologies, quantum computing
  • Application-specific architectures, e.g., accelerator-based and reconfigurable architecture
  • Interconnection network, router and network interface architecture

Benefits to authors: we provide many author benefits, such as free PDFs, a liberal copyright policy, special discounts on Elsevier publications and much more. See the Elsevier author services pages for more information.

Please see our Guide for Authors for information on article submission. If you require any further information or help, please visit our Support Center

Fast, Accurate and Distributed Simulation of Novel HPC Systems Incorporating ARM and RISC-V CPUs


Index Terms

  • Computing methodologies
  • Modeling and simulation
  • Simulation support systems
  • Simulation environments
  • Simulation tools
  • Simulation types and techniques
  • Massively parallel and high-performance simulations
  • Electronic design automation
  • Modeling and parameter extraction
  • Emerging technologies
  • Analysis and design of emerging devices and systems
  • Emerging simulation
  • Network performance evaluation
  • Network simulations
  • Software and its engineering
  • Software organization and properties
  • Software system structures
  • Software architectures
  • Simulator / interpreter

Recommendations

Performance Evaluation of Various RISC Processor Systems: A Case Study on ARM, MIPS and RISC-V

RISC-V is a new instruction set architecture (ISA) that has emerged in recent years. Compared with previous computer instruction architectures, RISC-V has outstanding features such as simple instructions, a modular instruction set, and supporting ...

Low overhead dynamic binary translation on ARM

The ARMv8 architecture introduced AArch64, a 64-bit execution mode with a new instruction set, while retaining binary compatibility with previous versions of the ARM architecture through AArch32, a 32-bit execution mode. Most hardware implementations ...

A SMT-ARM simulator and performance evaluation

Exponential growth in the number of on-chip transistors, each smaller in size, makes each generation of embedded microprocessors capable of supplying more processing ability. In this paper a microarchitecture approach is proposed to make a simultaneous ...

Published in

ACM Conference Proceedings

  • Patrizio Dazzi
  • Gabriele Mencagli
  • Program Chair: David Lowenthal
  • Program Co-chair: Rosa M Badia
  • SIGARCH: ACM Special Interest Group on Computer Architecture

In-Cooperation

  • SIGHPC: ACM Special Interest Group on High Performance Computing

Publisher: Association for Computing Machinery, New York, NY, United States

Author Tags

  • hpc simulator
  • distributed systems simulator
  • Short-paper

Funding Sources

  • RED-SEA EuroHPC
  • Vitamin-V Horizon Europe




CS 87: Parallel and Distributed Computing — Fall 2021


Project Demo signup sheet is available. See the Part 3 assignment page.

Course Project Part 3 assignments: project presentation, project report, project demo, and project code

Class announcements will be posted here and/or sent by email. You are expected to check your email frequently for class announcements. Questions and answers about assignments should be posted on the EdStem page. All assignments are posted to the Class Schedule. Here is a shortcut link to the current week’s assignments.

Professor: Tia Newhall

Office hours: Wednesdays 2-4, and by appointment, Sci 249

Class Meetings:

Paper Discussion: Mondays, Section A: 1:15-2:45, Section B: 3:00-4:30, Sci 104

Lecture: Tuesdays, 1:15-2:30, Sci 104

Lab: Thursdays, 1:15-2:30, Sci 256

EdStem: Q&A Forum

GitHub: CS87 Swarthmore GitHub Org

Course Description

This course covers a broad range of topics related to parallel and distributed computing, including parallel and distributed architectures and systems, parallel and distributed programming paradigms, parallel algorithms, and scientific and other applications of parallel and distributed computing. In lecture/discussion sections, students examine both classic results as well as recent research in the field. The lab portion of the course includes programming projects using different programming paradigms, and students will have the opportunity to examine one course topic in depth through an open-ended project of their own choosing. Course topics may include: multi-core, SMP, MPP, client-server, clusters, clouds, grids, peer-to-peer systems, GPU computing, scheduling, scalability, resource discovery and allocation, fault tolerance, security, parallel I/O, sockets, threads, message passing, MPI, RPC, distributed shared memory, data parallel languages, MapReduce, parallel debugging, and applications of parallel and distributed computing.
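To give a flavor of the message-passing paradigm listed above, here is a minimal sketch using Python's mpi4py bindings; the course labs may well use C and MPI directly, so treat this only as an illustration of the model:

```python
# Minimal message-passing sketch via mpi4py; run with
#   mpirun -n 4 python hello_mpi.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's id, 0 .. size-1
size = comm.Get_size()   # total number of processes

# Each process computes a partial value; rank 0 gathers the sum.
partial = rank * rank
total = comm.reduce(partial, op=MPI.SUM, root=0)
if rank == 0:
    print(f"sum of squares over {size} ranks = {total}")
```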

Class will be run as a combination of lecture and seminar-style discussion. During the discussion based classes, students will read research papers prior to the class meeting that we will discuss in class. The first half of the course will focus on different parallel and distributed programming paradigms. During the second half, students will propose and carry out a semester-long research project related to parallel and/or distributed computing.

Prereqs: CS31 and CS35 required; prior upper-level CS course experience required. Designated: NSE, W (Writing Course), CS Group 2 Course

Course Goals. By the end of this course, students should be able to:

  • Analyze and critically discuss research papers both in writing and in class
  • Formulate and evaluate a hypothesis by proposing, implementing and testing a project
  • Relate one’s project to prior research via a review of related literature
  • Write a coherent, complete paper describing and evaluating a project
  • Orally present a clear and accessible summary of a research work
  • Understand the fundamental questions in parallel and distributed computing and analyze different solutions to these questions
  • Understand different parallel and distributed programming paradigms and algorithms, and gain practice in implementing and testing solutions using them

CS87 is a seminar-style course. Its structure is designed as a bridge from lecture-based learning to the inquiry-based cooperative learning that is the norm in post-Swarthmore experiences, be it graduate studies or work in industry. Although there will be some lecture, all work in this class is cooperative. This includes working in small groups to solve problems, to prepare for class discussion, to produce solutions to written and lab assignments, to deliver presentations, and to carry out all parts of the course project. The result of this type of course structure is that you are directly responsible for a large part of the success or failure of this class.

This is a very tentative schedule. It will be updated as we go.

  • Introduction to Parallel and Distributed Computing

Weekly Reading :

  • Chapt 5.9 (CPUs today)
  • Chapt 15 intro (parallel systems)
  • Parallel Computing Sections A-C (overview-arch)
  • Intro to Distributed Computing, sections 1-2

Lab 0 : resources review for CS87

Paper 1 : parallel system

Lab 1 : pthreads scalability

  • Parallel and Distributed Systems
  • Chapt 14.4 Performance Measures
  • Read Reading Group and Reaction Notes Guide
  • Reading Groups and Papers
  • Paper 2 : End-to-End, NW protocols
  • Thurs : experiment tools

Drop/add ends

  • Distributed Systems
  • Intro to Distributed Computing, read sections 1-2, skim 3
  • Chapt 15.2-15.2.2 (distributed memory)

Paper 3 : Parallel Languages

Lab 2 : client/server

  • Parallel Languages
  • Section D: programming models
  • skim Chapt 15.2.3 (MPI)
  • skim Chapt 14.7 (openMP)

Paper 4 : Heterogeneous Computing

Thurs : part 1 demo

  • Parallel Languages and Algorithms
  • GPGPU Computing
  • skim Section E.i-E.ix: parallel design
  • Chapt 15.1 (gpus and cuda)

Paper 5 : Map Reduce

in lab : cuda examples

Lab 3 : cuda

  • Parallel Languages/Algorithms
  • skim Chapt 15.3 (cloud, MapReduce)
  • Chapt 15.2 (MPI examples)

Paper 6 : Quick Read

XSEDE : get an XSEDE account (do Step 1)

Lab 4 : mpi

Course Project : general info

  • Parallel Algorithms
  • Peer-to-Peer Systems
  • review Chapt 11.1 (memory hierarchy)

Paper 7 : Distributed Shared Memory

Sign-up : Lab 3 Demo

XSEDE : Do Steps 1-3 before Thurs

Course Project : proposal assignment

Lab 4.B : experiments

  • Distributed Shared Memory
  • review Chapt 14.7 (openMP)

Paper 8 : GFS

  • Distributed File Systems
  • Paper 9 : Cloud Computing
  • Cloud Computing
  • Green Computing
  • Paper 10 : Failure
  • Course Project : about project work week
  • Project Work Week
  • Course Project : Project Work Week
  • Course Project : Midway Report

Thanksgiving break (Nov 25)

  • Paper 11 : Edge Computing

Course Project : Midway presentation

  • Course Project : Project Part 3
  • Edge Computing

Paper 12 : Security

Final Presentations, 2-5pm, 7-10pm, Sci Cntr 104

About Course Work/Policies

All work in CS87 will be done with a partner or in a small group. I will assign you to a reading group and I will assign partners for the lab assignments in the first half of the semester. You may pick your own project group for the course project. Project groups typically are two or three students in size. No solo course projects are allowed .

Most lab solutions you will submit electronically with git: you must do a git add, commit, and push before the due date to submit your solution on time. You may push your assignment multiple times, and a history of previous submissions will be saved. You are encouraged to push your work regularly.

Some assignments have additional submission requirements.

There is no required textbook for this course. Instead, there will be required readings from online resources posted to the class schedule. In addition, we will also read and discuss one or two research papers most weeks. Research paper assignments will be posted to the class schedule and also listed on the paper reading schedule (available week 2). The paper assignments will be updated over the course of the semester, so check this page weekly.

You will be assigned to a reading group for the semester. Your reading group will meet weekly to:

discuss the weekly assigned paper(s) before the in-class discussion.

write Reaction Notes to the assigned papers (submit before 9am on Monday discussion day, and bring with you to the class discussion).

There may also be some on-line background readings off the Class Schedule related to the weekly assigned paper(s).

I encourage you to work in one of the CS labs when possible (vs. remotely logging in), particularly when working with a partner. The CS labs (rooms 240, 256, 238 and the Clothier basement) are open 24 hours a day, 7 days a week for you to use for CS course work. With the exception of times when a class, lab, or ninja session is scheduled in one of these rooms, you may work anytime in a CS lab on CS course work. The overflow lab (238) is always available.

CS lab machines are for CS course work only. There are other computer lab/locations on campus that are for general-purpose computer use. Please review the CS Lab Rules about appropriate use of CS labs.

Info about CS lab machines

A list of CS lab machines specs is available here: CS lab machine specs

To see a listing of which machines are in which lab, look at the hosts files.

To find the lab in which a machine is located, run whereis.

Accessing the CS labs after hours

You can use your ID to gain access to the computer labs at nights and on the weekends. Just wave your ID over the onecard reader next to the lab doors. When the green light goes on, just push on the door handle to get in (the door knob will not turn). If you have issues with the door locks, send an email to [email protected] . If the building is locked, you can use your ID to enter the door between Martin and Cornell library. For this class, your ID will give you access to the labs in rooms SCI 238, 240, 256, and the Clothier basement.

For partnered lab assignments, you should follow these guidelines:

The expectation is that you and your partner are working together side by side in the lab for most, if not all, of the time you work on partnered lab assignments.

You and your partner should work on all aspects of the project together: initial top-down design, incremental testing and debugging, and final testing and code review.

If you are pair programming, where one of you types and one of you watches and assists, then you should swap roles periodically, taking turns doing each part.

At the end of a joint editing session, or as you change roles within a session, make sure that the "driver" does a git add, git commit and git push to push your changes to your shared repo, and your partner does a git pull to grab them so that both you and your partner have the latest version of your joint work in your local copies of the repo.

There may be short periods of time where you each go off and implement some small part independently. However, you should frequently come back together, talk through your changes, push and pull each other’s code from the git repository, and test your merged code together.

You should not delete or significantly alter code written by your partner when they are not present. If there is a problem in the code, then meet together to resolve it.

You and your partner are both equally responsible for initiating scheduling times when you can meet to work together, and for making time available in your schedule for working together.

If there are any issues with your partnership that you are unable to resolve, please come see me.

Taking time to design a plan for your solution together and to do incremental implementation and testing together may seem like a waste of time, but in the long run it will save you a lot of time by making it less likely that you have design or logic errors in your solution, and by giving you a partner to help track down bugs and come up with solutions to problems.

Partnerships where partners work mostly independently rarely work out well and rarely result in complete, correct and robust solutions. Partnerships where partners work side-by-side for all or most of the time tend to work out very well.

All students start the course with 2 "late lab" days to be used on lab assignments only, at your discretion, with no questions asked. You may not use late days on any other course work; all other course work must be submitted by its due date.

To use a late day on a lab assignment, you must email your professor after you have completed the lab and pushed to your repository. You do not need to inform anyone ahead of time. When you use late time, you should still expect to work on the newly-released lab during the following lab section meeting. The professor will always prioritize answering questions related to the current lab assignment.

Your late days will be counted at the granularity of full days and will be tracked on a per-student (NOT per-partnership) basis . That is, if you turn in an assignment five minutes after the deadline, it counts as using one day. For partnered labs, using a late day counts towards the late days for each partner. In the rare cases in which only one partner has unused late days, that partner’s late days may be used, barring a consistent pattern of abuse.

If you feel that you need an extension on an assignment, or that you are unable to attend class for two or more meetings due to a medical condition (e.g., extended illness, concussion, hospitalization) or other emergency, you must contact the dean’s office and your instructors. Faculty will coordinate with the deans to determine and provide the appropriate accommodations. Note that for illnesses, the College’s medical excuse policy states that you must be seen and diagnosed by the Worth Health Center if you would like them to contact your class dean with corroborating medical information.

Late days cannot be used for any written assignments or oral presentations in class. Reaction notes and all project components must be submitted on time.

If you believe you need accommodations for a disability or a chronic medical condition, please contact Student Disability Services via email at [email protected] to arrange an appointment to discuss your needs. As appropriate, the office will issue students with documented disabilities or medical conditions a formal Accommodations Letter. Since accommodations require early planning and are not retroactive, please contact Student Disability Services as soon as possible. For details about the accommodations process, visit the Student Disability Service Website .

You are also welcome to contact me privately to discuss your academic needs. However, all disability-related accommodations must be arranged, in advance, through Student Disability Services. To receive an accommodation for a course activity you must have an official Accommodations Letter and you need to meet with me to work out the details of your accommodation at least two weeks prior to any activity requiring accommodations .

The CS Department Academic Integrity Policy:

Academic honesty is required in all your work. Under no circumstances may you hand in work done with or by someone else under your own name. Discussing ideas and approaches to problems with others on a general level is encouraged , but you should never share your solutions with anyone else nor allow others to share solutions with you. You may not examine solutions belonging to someone else, nor may you let anyone else look at or make a copy of your solutions. This includes, but is not limited to, obtaining solutions from students who previously took the course or solutions that can be found online. You may not share information about your solution in such a manner that a student could reconstruct your solution in a meaningful way (such as by dictation, providing a detailed outline, or discussing specific aspects of the solution). You may not share your solutions even after the due date of the assignment.

In your solutions, you are permitted to include material which was distributed in class, material which is found in the course textbook, and material developed by or with an assigned partner. In these cases, you should always include detailed comments indicating on which parts of the assignment you received help and what your sources were.

When working on tests, exams, or similar assessments, you are not permitted to communicate with anyone about the exam during the entire examination period (even if you have already submitted your work). You are not permitted to use any resources to complete the exam other than those explicitly permitted by course policy. (For instance, you may not look at the course website during the exam unless explicitly permitted by the instructor when the exam is distributed.)

Failure to abide by these rules constitutes academic dishonesty and will lead to a hearing of the College Judiciary Committee. According to the Faculty Handbook:

Because plagiarism is considered to be so serious a transgression, it is the opinion of the faculty that for the first offense, failure in the course and, as appropriate, suspension for a semester or deprivation of the degree in that year is suitable; for a second offense, the penalty should normally be expulsion.

This policy applies to all course work, including but not limited to code, written solutions (e.g. proofs, analyses, reports, etc.), exams, and so on. This is not meant to be an enumeration of all possible violations; students are responsible for seeking clarification if there is any doubt about the level of permissible communication.

The general ethos of this policy is that actions which shortcut the learning process are forbidden while actions which promote learning are encouraged. Studying lecture materials together, for example, provides an additional avenue for learning and is encouraged. Using a classmate’s solution, however, is prohibited because it avoids the learning process entirely. If you have any questions about what is or is not permissible, please contact your instructor.

In CS87 all of your work is collaborative, and you may share anything with your collaborators. If you use tools, solutions, ideas, or resources from other sources in your project, you must credit those sources. Reaction notes should be your interpretation of the work in the papers. Cutting and pasting content from the papers as your reactions violates the academic integrity policy.

Please contact me if you have any questions about what is permissible in this course.

Grades will be weighted as follows:

approx 35%: Class Participation, Paper Discussion, Reaction Notes. Each week we will discuss papers in a seminar-style format. This course has some lecture content with in-class activities too, but much of the course content is generated by the class as a whole. You need to be present to contribute. Doing a close reading of the assigned papers and writing responses and questions prior to each class is essential to preparing for a lively and informed discussion. In this style of class, you are responsible for contributing to its content; what you get out of this class depends on what you put into paper discussions, presentations and other class participation.

approx 25%: Labs. I will assign labs in the first part of the semester. You will do lab work with different assigned partners for each lab.

40%: Course Project. You will design a project related to parallel and distributed computing that you will carry out over the second half of the semester. Projects must be done in pairs or small groups; no solo projects are allowed.

Reaction Notes Information

Assigned papers, reaction notes questions, reading groups

CS87 EdStem Q&A Forum

CS87 GitHub Org

Help using git for lab assignments

more verbose git setup for CS labs (written for CS31 students)

Introduction to Parallel Computing from Livermore Computing

Links to Parallel and Network Programming Resources MPI, OpenMP, posix threads, socket programming, CUDA…​

Some Cluster and Distributed Systems Papers

IEEE Distributed Systems Online

The Top 500 List

The Green 500 List

ParaScope IEEE Listing of Parallel Computing Sites

Research and Writing Guide the guide for doing CS course projects and written project reports

Tips for preparing a research presentation

How to read a CS research paper

Other reading, writing, presentation links

Swarthmore Writing Center and Writing Associates

My CS and Unix Help Pages and Links (make, tar, git, debugging tools, editors, programming guides, linux and parallel computing links, …​)

CS Department’s Help Pages

Dive into Systems Textbook (C programming, debugging, computer systems, intro to parallel programming, …​)

CS Project Etiquette tools and guidelines for using shared CS resources to run long-running, intensive applications.

Tools for running large Experiments (screen, nice, profiling tools, script, …​)


Fuzzy Modelling Algorithms and Parallel Distributed Compensation for Coupled Electromechanical Systems


1. Introduction

1.1. Related Work

1.2. Contributions

2. Simulation Platform

3. T–S Fuzzy Modelling

3.1. Fuzzy Identification of a Second-Order System

Algorithm 1: Fuzzy system identification and m parameter optimization.

  Load open_loop_experimental_data.mat
  c ← number of rules (clusters); N ← total number of data points
  fuzzy c-means(…)                          ▹ cluster the open-loop data
  for i ← 1 to c do                         ▹ obtain consequents by least squares, Equation (…)
  PSO algorithm                             ▹ open-loop system
  for m ← 1.1 to 3 do                       ▹ fuzzy parameter m, objective function via Equation (…)
  return optimal m
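For readers unfamiliar with the clustering step, the following is a self-contained numpy sketch of the standard fuzzy c-means updates (with fuzziness parameter m); it illustrates the algorithm family used here, not the authors' exact implementation:

```python
import numpy as np

# Standard fuzzy c-means updates (Bezdek); illustrative only.
def fuzzy_c_means(X, c, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((c, X.shape[0]))
    U /= U.sum(axis=0)                                  # memberships sum to 1 per point
    for _ in range(iters):
        Um = U ** m
        V = (Um @ X) / Um.sum(axis=1, keepdims=True)    # cluster centroids
        d = np.linalg.norm(X[None, :, :] - V[:, None, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))                  # membership update
        U /= U.sum(axis=0)
    return U, V

X = np.concatenate([np.random.randn(50, 2), np.random.randn(50, 2) + 5])
U, V = fuzzy_c_means(X, c=2, m=2.0)
print(V)   # two centroids, near (0, 0) and (5, 5)
```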

3.2. Optimization of the Fuzzy Parameter m

Algorithm 2: Pole placement optimization and PDC control.

  PSO algorithm                             ▹ closed-loop system
  for each candidate pole set do            ▹ poles via Equation (…), objective function via Equation (…)
  compute gains for each rule by pole assignment
  PDC control(…)
  for i ← 1 to N do …
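Both algorithms rely on a particle swarm optimizer. A generic PSO loop in Python, with a stand-in objective function rather than the paper's cost functions:

```python
import numpy as np

# Generic particle swarm optimization of the kind used in both algorithms
# above (for the fuzziness parameter m and for pole placement); the
# objective below is a stand-in, not the paper's cost function.
def pso(f, lo, hi, n_particles=30, inertia=0.6, c1=1.5, c2=1.5, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))   # positions
    v = np.zeros_like(x)                                   # velocities
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()                   # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = inertia * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([f(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

best, val = pso(lambda p: np.sum((p - 1.7) ** 2), lo=np.array([1.1]), hi=np.array([3.0]))
print(best, val)   # converges near 1.7
```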

3.3. Closed-Loop System Poles Optimization

4. PDC Fuzzy Control

Comparison of Implementing a PDC vs. PD Controller

5. Conclusions

Author Contributions

Data Availability Statement

Acknowledgments

Conflicts of Interest

Abbreviations

ICE: Internal Combustion Engine
MSE: Mean Square Error
MSO: Mean Square Output
PDC: Parallel Distributed Controller
PGS: Power Generation System
PSO: Particle Swarm Optimization
QoF: Quality of Fit
T–S: Takagi–Sugeno

[Tables from the article: fuzzy c-means cluster centroids; identified model matrices; PSO settings used to tune the fuzziness parameter m (optimal m = 2.97, 300 particles, 100 iterations) and the closed-loop poles (300 particles, 100 iterations); ISE, IAE, and ITAE performance indices with and without PSO; and controller gains.]

Share and Cite

Reyes, C.; Ramos-Fernández, J.C.; Espinoza, E.S.; Lozano, R. Fuzzy Modelling Algorithms and Parallel Distributed Compensation for Coupled Electromechanical Systems. Algorithms 2024 , 17 , 391. https://doi.org/10.3390/a17090391



Computer Science > Distributed, Parallel, and Cluster Computing

Title: Rapid GPU-Based Pangenome Graph Layout

Abstract: Computational Pangenomics is an emerging field that studies genetic variation using a graph structure encompassing multiple genomes. Visualizing pangenome graphs is vital for understanding genome diversity. Yet, handling large graphs can be challenging due to the high computational demands of the graph layout process. In this work, we conduct a thorough performance characterization of a state-of-the-art pangenome graph layout algorithm, revealing significant data-level parallelism, which makes GPUs a promising option for compute acceleration. However, irregular data access and the algorithm's memory-bound nature present significant hurdles. To overcome these challenges, we develop a solution implementing three key optimizations: a cache-friendly data layout, coalesced random states, and warp merging. Additionally, we propose a quantitative metric for scalable evaluation of pangenome layout quality. Evaluated on 24 human whole-chromosome pangenomes, our GPU-based solution achieves a 57.3x speedup over the state-of-the-art multithreaded CPU baseline without layout quality loss, reducing execution time from hours to minutes.
Comments: SC 2024
Subjects: Distributed, Parallel, and Cluster Computing (cs.DC); Computational Engineering, Finance, and Science (cs.CE); Data Structures and Algorithms (cs.DS)
Cite as: arXiv:2409.00876 [cs.DC]
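
The abstract names three optimizations without spelling them out. As a generic, hedged illustration of the first one (a cache-friendly data layout), the C++ sketch below contrasts an array-of-structs node layout with a struct-of-arrays layout, where a sweep over x coordinates reads one contiguous stream; the types and field names are illustrative assumptions, not the authors' code.

    #include <iostream>
    #include <vector>

    // Array-of-structs: each node's fields sit together, so a pass that only
    // touches x coordinates still drags y and meta through the cache.
    struct NodeAoS { float x, y; int meta; };

    // Struct-of-arrays: each field is contiguous, so a coordinate sweep (as in
    // an iterative layout-refinement step) reads one dense, prefetch-friendly stream.
    struct NodesSoA {
        std::vector<float> x, y;
        std::vector<int> meta;
    };

    float sumXAoS(const std::vector<NodeAoS>& nodes) {
        float s = 0.0f;
        for (const auto& n : nodes) s += n.x;  // strided field access
        return s;
    }

    float sumXSoA(const NodesSoA& nodes) {
        float s = 0.0f;
        for (float v : nodes.x) s += v;        // unit-stride access
        return s;
    }

    int main() {
        std::vector<NodeAoS> aos(1000, {1.0f, 2.0f, 0});
        NodesSoA soa{std::vector<float>(1000, 1.0f),
                     std::vector<float>(1000, 2.0f),
                     std::vector<int>(1000, 0)};
        std::cout << sumXAoS(aos) << " " << sumXSoA(soa) << "\n";
    }

On a GPU the same idea yields coalesced global-memory accesses, since consecutive threads then touch consecutive addresses.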


Top 10 Research Topics in Parallel and Distributed Computing

The rapid growth of the internet, together with the increasing availability of big data and ever larger numbers of concurrent users, creates pressure to carry out computing tasks in parallel. Parallel and distributed computing touches several research areas, such as networks, software engineering, computer science, computer architecture, operating systems, and algorithms. At present, our research experts provide complete research support and guidance for all research topics in parallel and distributed computing. Essential concepts underlying parallel and distributed systems include shared memory models, mutual exclusion, concurrency, message passing, and memory manipulation.

  Parallel computing is deployed where high-speed processing power is required; supercomputers are the best-known example. Distributed computing, by contrast, is used when the cooperating computers are in different geographical locations.

  • Software-defined fog node in blockchain architecture & cloud computing
  • Multi clustering approach in mobile edge computing
  • Distributed computing & smart city services
  • Geo distributed fog computing
  • Service attacks in software-defined network with cloud computing
  • Distributed trust protocol for IaaS cloud computing
  • Large scale convolutional neural networks
  • Parallel vertex-centric algorithms
  • Partitioning algorithms in mobile environments
  • Configuration tuning for hierarchical cloud schedulers
  • Distributed computing with delay tolerant network

We support this research work by implementing the algorithms and methodologies that shape each project, with proper execution and appropriate code.

Top 5 Research Topics in Parallel and Distributed Computing

Parallel Computing

Parallel computing runs operations simultaneously and is used to save both money and time. In general, memory in a parallel system can be organized in one of two ways: distributed or shared. The processors in a parallel computer perform numerous tasks that are assigned to them concurrently.
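
As a minimal, hedged sketch of the shared-memory flavor (illustrative only, not tied to any particular paper above), several C++ threads sum disjoint slices of one array held in memory shared by all of them:

    #include <algorithm>
    #include <iostream>
    #include <numeric>
    #include <thread>
    #include <vector>

    int main() {
        std::vector<int> data(1'000'000, 1);  // shared memory: one array, many threads
        const unsigned nThreads = std::max(1u, std::thread::hardware_concurrency());
        std::vector<long long> partial(nThreads, 0);
        std::vector<std::thread> workers;

        // Each thread sums its own slice; writes are disjoint, so no locks are needed.
        for (unsigned t = 0; t < nThreads; ++t) {
            workers.emplace_back([&, t] {
                const std::size_t lo = data.size() * t / nThreads;
                const std::size_t hi = data.size() * (t + 1) / nThreads;
                partial[t] = std::accumulate(data.begin() + lo, data.begin() + hi, 0LL);
            });
        }
        for (auto& w : workers) w.join();

        std::cout << "sum = " << std::accumulate(partial.begin(), partial.end(), 0LL) << "\n";
    }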

Distributed Computing

Distributed computing differs from parallel computing in that a task is divided among several computers that communicate by passing messages; shared memory is not used. To the user, the several autonomous computers appear as a single system.
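
A minimal, hedged sketch of the message-passing flavor, using MPI (an assumption of convenience; the text above does not prescribe a library): two processes with separate memories, where rank 0 sends a value to rank 1 explicitly.

    #include <mpi.h>
    #include <iostream>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);

        int rank = 0;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            int payload = 42;
            // Explicit message passing: ranks share no memory.
            MPI_Send(&payload, 1, MPI_INT, /*dest=*/1, /*tag=*/0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            int payload = 0;
            MPI_Recv(&payload, 1, MPI_INT, /*source=*/0, /*tag=*/0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            std::cout << "rank 1 received " << payload << "\n";
        }

        MPI_Finalize();
    }

Launched as, e.g., mpirun -np 2 ./a.out, the two ranks may even run on different machines, which is exactly the distributed case.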

What are the Characteristics of Parallel and Distributed Computing?

  • Communication between nodes, with transparency about what a node must access on other nodes
  • Detection of failures and recovery of the system as quickly as possible
  • Accumulation of computing and processing across several machines
  • The same task running on several machines at the same time
  • Openness of the software structure and ease of its enhancement
  • Distribution of hardware, software, and data

Parallel Computing Versus Distributed Computing

  • Parallel and distributed computing differ from each other. In distributed computing, several computers appear to the user as a single system and complete a single task by exchanging messages. In parallel computing, a single task is split into subtasks that are allocated to multiple processors
  • Distributed computing is preferred where high scalability is required; parallel computing is preferred where high speed is required
  • Parallel computing synchronizes on a single master clock, whereas distributed computing relies on synchronization algorithms
  • In distributed computing, each computer has its own memory and processors; in parallel computing, one memory is shared by all processors
  • Parallel computing has limited scalability, while distributed systems can grow without such limits by adding machines to the network
  • Tasks in distributed computing are largely independent, whereas in parallel computing they are tightly coupled: the output of one stage is the input of the next
  • Parallel computing takes place within a single system with multiple processors; distributed computing involves many systems

Distributed Parallel Computing

A distributed parallel computing system coordinates several computers in a single network, each with its own allocated task. Many everyday applications are built on distributed and parallel computing, such as

  • Grid Computing
  • Cloud computing
  • Distributed supercomputers
  • Travel reservation
  • Electronic banking
  • Cloud storage system
  • Internet, intranet & email system
  • Peer to peer network

Below, our research experts have listed pioneering research topics in parallel and distributed computing, a significant field whose systems span computers in various geographical locations. The current research fields in parallel and distributed computing are as follows.

Recent Research Areas of Parallel and Distributed Computing

  • Heterogeneous computing
  • Biological & molecular computing
  • Supercomputing
  • Computational intelligence
  • Quality development using HPC
  • Distributed data storage & cloud architecture
  • Federated ML & shared memory
  • Fault tolerance software system
  • ML & AI
  • Distributed grid computing
  • Web technologies
  • Distribution & management system in multimedia
  • Mobile crowdsensing
  • IoT & multi-tier computing

At present, parallel and distributed computing faces issues from many different sources, and our research experts provide solutions for the research challenges mentioned below. Let us now discuss the significant research challenges in parallel and distributed computing.

Latest Research Issues of Parallel and Distributed Computing

  • Cross-cutting functions such as logging, intelligence, load balancing, and monitoring must be added; all of these functions serve visibility
  • Incorrect message communication (messages delivered to the wrong nodes) can break communication down
  • Coordinating the sequence of changes to data is a complex issue in distributed computing and can cause nodes to fail, stop, and restart
  • Designing general-purpose stream processing
  • Failures that degrade the functioning of nodes
  • Structuring designs so that a linear process runs with a reasonable quantity of resources
  • Functioning as a data warehouse for large corporations

By solving such research challenges in parallel and distributed computing, our technical experts have identified the significant requirements of the field, which helps research scholars become familiar with the most substantial real-time requirements in current research topics.

Future Research Directions of Parallel and Distributed Computing Projects

  • Distributed memory parallel computing
  • Distinctive purpose & hybrid structural design
  • Accelerators & multicore functions
  • Cloud Computing
  • High performance & shared memory computing
  • Developing domain applications
  • Structure of supercomputing & applications

Research scholars can get the best guidance on parallel and distributed computing tools from our research and development experts. In this regard, some of the most important distributed computing tools are listed below.

Development Tools for Parallel and Distributed Computing Projects

  • DAGH & CUDA
  • ARCH & MPICH
  • PPGP & PADE
  • Zabbix & Nimrod
  • SPRNG & Apache Hadoop
  • Paralib & simGrid
  • Alchemi & distributed folding GUI

To this end, we believe you now have a top-to-bottom path for selecting research topics in parallel and distributed computing, and that the information above will help you proceed with your research. To become an expert, however, you need a good tutor. We have several research experts available for scholars' research assistance, and we are ready to help and clear up your difficulties at any stage, so you can enrich your skills through our keen support.

Technology Ph.D MS M.Tech
NS2 75 117 95
NS3 98 119 206
OMNET++ 103 95 87
OPNET 36 64 89
QULANET 30 76 60
MININET 71 62 74
MATLAB 96 185 180
LTESIM 38 32 16
COOJA SIMULATOR 35 67 28
CONTIKI OS 42 36 29
GNS3 35 89 14
NETSIM 35 11 21
EVE-NG 4 8 9
TRANS 9 5 4
PEERSIM 8 8 12
GLOMOSIM 6 10 6
RTOOL 13 15 8
KATHARA SHADOW 9 8 9
VNX and VNUML 8 7 8
WISTAR 9 9 8
CNET 6 8 4
ESCAPE 8 7 9
NETMIRAGE 7 11 7
BOSON NETSIM 6 8 9
VIRL 9 9 8
CISCO PACKET TRACER 7 7 10
SWAN 9 19 5
JAVASIM 40 68 69
SSFNET 7 9 8
TOSSIM 5 7 4
PSIM 7 8 6
PETRI NET 4 6 4
ONESIM 5 10 5
OPTISYSTEM 32 64 24
DIVERT 4 9 8
TINY OS 19 27 17
TRANS 7 8 6
OPENPANA 8 9 9
SECURE CRT 7 8 7
EXTENDSIM 6 7 5
CONSELF 7 19 6
ARENA 5 12 9
VENSIM 8 10 7
MARIONNET 5 7 9
NETKIT 6 8 7
GEOIP 9 17 8
REAL 7 5 5
NEST 5 10 9
PTOLEMY 7 8 4


Game Programming Meets Parallel Computing: Teaching Parallel Computing Concepts to Game Programmers

  • Conference paper
  • First Online: 03 September 2024

Neil Patrick Del Gallego

Part of the book series: Lecture Notes in Electrical Engineering (LNEE, volume 1199)

Included in the following conference series:

  • International Conference on Advances in Computational Science and Engineering

With the rapid advancement of the gaming industry and the demand to create high-performance games, game programmers must be well-versed in concurrent programming and parallel computing concepts. This paper presents a set of project-based learning activities for teaching parallel computing concepts to game programmers. We propose four project-based activities: (1) asynchronous image loader, (2) multi-threaded ray tracer, (3) interactive loading screen, and (4) interactive 3D model viewer. Our projects require computer graphics and game programming concepts, differentiating them from other parallel computing course assignments. Such projects may be of interest to undergraduate students who wish to pursue game programming in the future. Similarly, the projects may be taught to game programmers who have yet to learn about parallel computing. A pilot course delivery was conducted at De La Salle University, where 39 students enrolled. Students generally achieved high scores (>80%), implying the effectiveness of the learning content in developing the projects.
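
As a hedged sketch of the first project idea only (not the paper's actual course material), the C++ snippet below uses std::async so a load does not block a render loop; loadRawBytes is a stand-in assumed here in place of a real image decoder.

    #include <chrono>
    #include <fstream>
    #include <future>
    #include <iostream>
    #include <iterator>
    #include <string>
    #include <vector>

    // Stand-in for a real decoder: reads a file's raw bytes. A real loader
    // would decode PNG/JPEG into pixel data here.
    std::vector<char> loadRawBytes(const std::string& path) {
        std::ifstream file(path, std::ios::binary);
        return std::vector<char>(std::istreambuf_iterator<char>(file), {});
    }

    int main() {
        // Launch the load on a worker thread so the "game loop" stays responsive.
        auto pending = std::async(std::launch::async, loadRawBytes, "texture.png");

        // Simulated render loop: poll the future instead of blocking on it.
        while (pending.wait_for(std::chrono::milliseconds(16)) != std::future_status::ready) {
            std::cout << "rendering a frame while the image loads...\n";
        }

        std::vector<char> image = pending.get();
        std::cout << "loaded " << image.size() << " bytes\n";
    }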

The authors would like to thank the DLSU students for giving valuable feedback about the course, and to acknowledge De La Salle University and the DLSU Science Foundation for funding this research.



Author information

Authors and affiliations

De La Salle University, 2401 Taft Ave, Malate, 1004, Manila, Metro Manila, Philippines

Neil Patrick Del Gallego

Graphics, Animation, Multimedia, and Entertainment (GAME) Lab, De La Salle University, 2401 Taft Ave, Malate, 1004, Manila, Metro Manila, Philippines

Corresponding author

Correspondence to Neil Patrick Del Gallego .

Editor information

Editors and affiliations

Technology Park Malaysia, Asia Pacific University of Technology and Innovation, Kuala Lumpur, Malaysia

Vinesh Thiruchelvam

Faculty of Computing and Informatics, Creative Advanced Machine Intelligence Research Centre, Universiti Malaysia Sabah, Kota Kinabalu, Malaysia

Rayner Alfred

Higher Colleges of Technology, Abu Dhabi, Abu Dhabi, United Arab Emirates

Zamhar Iswandono Bin Awang Ismail

Department of Informatics, Mulawarman University, Samarinda, Indonesia

Haviluddin Haviluddin

School of Engineering and Technology, Sunway University, Petaling Jaya, Selangor, Malaysia

Aslina Baharum

Rights and permissions

Reprints and permissions

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Del Gallego, N.P. (2024). Game Programming Meets Parallel Computing: Teaching Parallel Computing Concepts to Game Programmers. In: Thiruchelvam, V., Alfred, R., Ismail, Z.I.B.A., Haviluddin, H., Baharum, A. (eds) Proceedings of the 4th International Conference on Advances in Computational Science and Engineering. ICACSE 2023. Lecture Notes in Electrical Engineering, vol 1199. Springer, Singapore. https://doi.org/10.1007/978-981-97-2977-7_2

Download citation

DOI : https://doi.org/10.1007/978-981-97-2977-7_2

Published : 03 September 2024

Publisher Name : Springer, Singapore

Print ISBN : 978-981-97-2976-0

Online ISBN : 978-981-97-2977-7

eBook Packages : Intelligent Technologies and Robotics (R0)


6th Workshop on Education for High Performance Computing (EduHiPC 2024)

18 December 2024, Bengaluru, India

In conjunction with the 31st IEEE International Conference on High-Performance Computing, Data, & Analytics ( HiPC 2024 )

Call for Paper Submission

High Performance Computing (HPC) and, in general, Parallel and Distributed Computing (PDC) are ubiquitous. Every computing device, from a smartphone to a supercomputer, relies on parallel processing. Compute clusters of multicore and manycore processors (CPUs and GPUs) are routinely used in many subdomains of computer science (CS) such as computer vision, data science, parallel machine learning, and high performance computing. It is therefore important for every programmer, software professional, and CS researcher to understand how parallelism and distributed computing affect problem solving, and it is essential for educators to impart a range of PDC and HPC skills and knowledge at multiple levels within the curriculum of Computer Science (CS), Computer Engineering (CE), and related disciplines such as computational data science. The software industry and research laboratories require people with these skills now more than ever, and thus engage in extensive on-the-job training. Additionally, rapid changes in hardware platforms, languages, and programming environments increasingly challenge educators to decide what to teach and how to teach it, in order to prepare students for careers that involve PDC and HPC. EduHiPC aims to provide a forum that brings together academia, industry, government, and non-profit organizations – especially from India, its vicinity, and Asia – for exploring and exchanging experiences and ideas about the inclusion of high-performance, parallel, and distributed computing in the undergraduate and graduate curricula of Computer Science, Computer Engineering, Computational Science, Computational Engineering, and computational courses for STEM, business, and other non-STEM disciplines.

The 6th EduHiPC (EduHiPC 2024) workshop invites unpublished manuscripts from individuals or teams in academia, industry, and other educational and research institutes worldwide on the teaching of PDC topics in the Computer Science and Computer Engineering curriculum as well as in domain-specific computational and data science and engineering curricula. EduHiPC invites researchers, scholars, and practitioners to submit their work for consideration in either of two paper tracks or for the posters or peachy assignments sessions. Additionally, we encourage manuscripts that validate their innovative approaches through the systematic collection and analysis of information to evaluate their performance and impact. The workshop is particularly dedicated to bringing together stakeholders from industry (hardware vendors and research and development organizations), government labs, and academia in the context of HiPC 2024. The goal of the workshop is to hear the challenges faced by educators and professionals, to learn about various approaches to addressing these challenges, and to provide opportunities to exchange ideas and solutions. We also encourage submissions related to the challenges of imparting education during the recent global pandemic and to online evaluation mechanisms for PDC/HPC. This effort is in coordination with the Center for Parallel and Distributed Computing Curriculum Development and Educational Resources (CDER).

Topics of interest include, but are not limited to:

  • Pedagogical issues in incorporating PDC and HPC in undergraduate and graduate education, especially in core courses.
  • Novel ways of teaching PDC and HPC topics.
  • Issues and experiences with remote synchronous and asynchronous teaching of PDC/HPC during the recent pandemic, and their relevance in the current context.
  • Data science and big data aspects of teaching HPC/PDC, including early experience with data science degree programs.
  • Evidence-based educational practices for teaching HPC/PDC topics that provide evidence about what works best under what circumstances.
  • Experience with incorporating PDC and HPC topics into core CS/CE courses and in domains.
  • Experience and challenges with HPC education in developing countries, especially in India and her neighboring Asian countries.
  • Computational Science and Engineering courses.
  • Pedagogical tools, programming environments, infrastructures, languages, and projects for PDC and HPC.
  • Employers’ experiences with new hires and expectation of the level of PDC and HPC proficiency among new graduates.
  • Education resources based on high-level programming languages and environments such as Python, CUDA, OpenCL, OpenACC, SYCL, oneAPI, Hadoop, and Spark.
  • Parallel and distributed models of programming and computation suitable for teaching, learning, and workforce development.
  • Issues and experiences addressing the gender gap in computing and broadening participation of underrepresented groups.
  • Challenges in remote teaching and evaluations, including those related to meaningful engagement of students and fair assessments.
  • Experience of teaching large-scale online courses in HPC and PDC across multiple geographies and student backgrounds.

SUBMISSION GUIDELINES

Authors should submit papers in PDF format through the submission site ( https://easychair.org/conferences/?conf=eduhipc2024 )

We are accepting submissions for Track 1 Full Papers (6-8 pages), Track 2 Short Papers (3-4 pages), Posters (2-page abstracts), and Peachy Parallel Assignments (2-page abstracts). Please see the details below for each category of submission. All entries must be submitted via the submission site ( https://easychair.org/conferences/?conf=eduhipc2024 ). Ensure that submissions adhere to the IEEE format ( https://www.ieee.org/conferences/publishing/templates.html ), featuring single-spaced, double-column pages with proper inclusion of figures, tables, and references.

Accepted regular and short papers will be published in the workshop proceedings and included in the IEEE Xplore digital library, and authors will present their work in a technical workshop session. Authors of accepted Posters and Peachy Assignments will present their work during the workshop poster sessions. Summary papers of all accepted posters and all accepted Peachy Assignments will also be published in the workshop proceedings.  Proceedings of the workshops are distributed at the conference and will be included in the IEEE Xplore Digital Library after the conference.  Summary papers will be written by the Poster and Peachy Assignment chairs and will include, as co-authors, all Poster and Peachy Assignment authors.  In addition, all individual abstracts, posters, and preprints of papers will be published on the CDER website. 

Papers: Authors are asked to submit 6-8 page papers in pdf format for Track 1 and 3-4 page papers in pdf format for Track 2. Submissions will be reviewed based on the novelty of contributions, impact on the broader undergraduate curriculum, particularly on the core curriculum, relevance to the workshop’s goals, and, for experience papers, the results of their evaluation and the evaluation methodology. 

Posters: High-quality poster presentations are an integral part of EduHiPC. We seek posters (2-page abstracts) describing recent or ongoing research in PDC Education.  

Peachy Parallel Assignments: Course assignments are integral to student learning and also play an important role in student perceptions of the field. EduHiPC will include a session showcasing “Peachy Parallel Assignments” – high-quality assignments, previously tested in class, that are readily adoptable by other educators teaching topics in parallel and distributed computing.  Assignments may be previously published, but the author must have the right to publish a description of it and share all supporting materials. We are seeking assignments that are:

  • Tested – All submitted assignments should have been used successfully in a class.
  • Adoptable – Preference will be given to widely applicable and easy-to-adopt assignments.  Traits of such assignments include coverage of widely taught concepts, using common parallel languages and widely available hardware, having few prerequisites, and (with variations) being appropriate for different levels of students.
  • Cool and inspirational – We want assignments that excite students and encourage them to spend time with the material.  Ideally, they would be things that students want to show off to their roommates.

Assignments can cover any topics in Parallel and Distributed Computing. Preference will be given to assignments aimed at students in the early courses. Submissions (2-page abstracts) should describe the assignment and its contextual usage and include a link to a web page containing the complete set of files given to students (assignment description, supporting code, etc.). The document should cover the following items: What is the main idea of the assignment? What concepts are covered?  Who are its targeted students?  In what context have you used it?  What prerequisite material does it assume they have seen?  What are its strengths and weaknesses?  Are there any variations that may be of interest?  Authors of papers accepted as poster papers will be invited to revise their papers in a 2-page format. Authors of all accepted full and short papers must be present at the workshop.

IMPORTANT DATES - EduHiPC2024 Workshop

Submission site open:  September 1, 2024

Abstract Submission Deadline: September 16, 2024 (Encouraged)

Full Paper submissions Deadline:  September 23, 2024

Author notifications Deadline:  October 20, 2024

Camera-ready Deadline:  November 6, 2024

All deadlines are at 11:59 PM AoE (UTC-12).

Registration Fees for Accepted Papers

At least one author of an accepted paper must register and present the paper as per HiPC2024 Registration Rates.

ORGANIZATION COMMITTEE

Sushil Prasad, University of Texas, San Antonio, USA

Sheikh Ghafoor, Tennessee Tech University, USA

Alan Sussman, National Science Foundation & University of Maryland, USA

Ramachandran Vaidyanathan, Louisiana State University, USA

Charles Weems, University of Massachusetts, USA

Ashish Kuvelkar, C-DAC, India

Sharad Sinha, IIT Goa, India

Neelima Bayyapu, MIT, Manipal, India

Workshop Co-Chairs

Sushil K. Prasad, University of Texas San Antonio, USA, [email protected]

Ashish Kuvelkar, C-DAC, India, [email protected]

Program Co-Chairs

Sharad Sinha, IIT Goa, India, [email protected]

Neelima Bayyapu, MIT, Manipal, India, [email protected]

IMAGES

  1. Parallel vs. Distributed Computing: An Overview

  2. Parallel and Distributed Computing, Applications and Technologies

  3. Journal of Parallel and Distributed Computing template

  4. Parallel & Distributed Computing Report

  5. (PDF) Parallel Computing

  6. (PDF) THEORY OF DISTRIBUTED COMPUTING AND PARALLEL PROCESSING WITH ITS

VIDEO

  1. PDC Lecture 3: Parallel Computation Speedup and Amdahl's Law

  2. Parallel & Distributed Databases : Overview

  3. Distributed Computing Lecture 8: Architectures P2

  4. Parallel and Distributed Computing Lecture 1

  5. Parallel and Distributed Computing: Test Exam

  6. Parallel and Distributed Computing Lecture 7b

COMMENTS

  1. Journal of Parallel and Distributed Computing

    The journal publishes original research papers and timely review articles on the theory, design, evaluation, and use of parallel and/or distributed computing systems. The journal also features special issues on these topics; again covering the full range from the design to the use of our targeted systems. Research Areas Include:

  2. PDF Future Directions for Parallel and Distributed Computing

    Parallel and Distributed Computing is at the center of this progress in that it aggregates multiple computational resources, such as CPU cores and machines, ... This section briefly summarizes top-level recommendations for research in parallel and distributed computing. Subsequent sections of the report provide more detail on specific research ...

  3. Advances in parallel and distributed computing and its applications

    The selected papers of this special issue cover a variety of interesting topics reflecting some recent developments in theoretical and practical research in both core and interdisciplinary areas of parallel and distributed computing, applications, and technologies.

  4. Distributed Systems and Parallel Computing

    From our company's beginning, Google has had to deal with both issues in our pursuit of organizing the world's information and making it universally accessible and useful. We continue to face many exciting distributed systems and parallel computing challenges in areas such as concurrency control, fault tolerance, algorithmic efficiency, and ...

  5. PDF Journal of Parallel and Distributed Computing

    Y. Song, T. Wo, R. Yang et al. Journal of Parallel and Distributed Computing 157 (2021) 168-178. Fig. 1. Illustration of the unreliable network environment consisting of remote servers and edge nodes. We devise two distinct caching solutions to balance the computation complexity and caching effectiveness and satisfy the diverse

  6. Journal of Parallel and Distributed Computing

    This special issue invites research manuscripts extended from those presented at the IEEE International Conference on High Performance Computing, Data, & Analytics (HiPC2020), which was held virtually, December 16-18, 2020. The accepted papers will cover traditional areas of the high-performance computing, data science and analytics domains as well as emerging topics in these domains.

  7. Journal of Parallel and Distributed Computing

    The Journal of Parallel and Distributed Computing publishes original research papers and timely review articles on the theory, design, evaluation, and use of parallel and/or distributed computing systems. The journal also features special issues on these topics; again covering the full range from the design to the use of our targeted systems ...

  8. Parallel and Distributed Computing: Algorithms and Applications

    Feature papers represent the most advanced research with significant potential for high impact in the field. ... It is an undeniable fact that parallel and distributed computing is ubiquitous now in nearly all computational scenarios ranging from mainstream computing to high-performance and/or distributed architectures such as cloud ...

  9. Journal of Parallel and Distributed Computing

    The Journal of Parallel and Distributed Computing publishes original research papers and timely review articles on the theory, design, evaluation, and use of parallel and/or distributed computing systems. The journal also features special issues on these topics; again covering the full range from the design to the use of our targeted systems ...

  10. Call for papers

    All manuscript submission and review will be handled by the Elsevier Editorial System submission site for the Journal of Parallel and Distributed Computing. All papers should be prepared according to the Guide for Authors - Journal of Parallel and Distributed Computing. Manuscripts should be no longer than 40 double-spaced pages, not including the title ...

  11. Parallel and Distributed Computing

    Distributed systems and calculations being carried out in parallel | Explore the latest full-text research PDFs, articles, conference papers, preprints and more on PARALLEL AND DISTRIBUTED COMPUTING.

  12. PDF Journal of Parallel and Distributed Computing

    X. Xu, F. Wang, H. Jiang et al. Journal of Parallel and Distributed Computing 172 (2023) 51-68 Note that this paper is based on our prior work presented at the 2019 IEEE/ACM International Symposium on Quality of Service (IWQoS'19) [34]. We briefly provide the new contents beyond the prior conference version as follows.

  13. Introduction—Parallel and Distributed Computing

    THE research domains of parallel and distributed computing have a significant overlap. With the advent of general-purpose multiprocessors, this overlap is bound to increase. This Special Issue attempts to draw together several papers from both of these separate research domains to illustrate commonalty and to encourage greater interaction among researchers in the two communities.

  14. Towards a Scalable and Efficient PGAS-based Distributed OpenMP

    MPI+X has been the de facto standard for distributed memory parallel programming. It is widely used primarily as an explicit two-sided communication model, which often leads to complex and error-prone code. Alternatively, PGAS model utilizes efficient one-sided communication and more intuitive communication primitives. In this paper, we present a novel approach that integrates PGAS concepts ...

  15. PDF Efficient Parallel Computing for Machine Learning at Scale

    Accuracy is lower than 80% when we scale the batch size to 8K. After a comprehensive tuning of the learning rate and warmup, we observe that the Sync method's accuracy is slightly lower than the EA-wild asynchronous method for batch size = 8K (Figure 2.4). The Sync method uses eight machines.

  16. (PDF) Distributed Computing: An Overview

    The differences in calculation distribution and parallel computing, along with terminology, assignment of tasks, performance parameters, benefits and range of distributed computing, and parallel ...

  17. Quantum Algorithms and Simulation for Parallel and Distributed Quantum

    A viable approach for building large-scale quantum computers is to interlink small-scale quantum computers with a quantum network to create a larger distributed quantum computer. When designing quantum algorithms for such a distributed quantum computer, one can make use of the added parallelization and distribution abilities inherent in the system. An added difficulty to then overcome for ...

  18. (PDF) PARALLEL AND DISTRIBUTED COMPUTING

    In this paper, we present a general survey on parallel computing. The main contents include parallel computer system which is the hardware platform of parallel computing, parallel algorithm which ...

  19. Guide for authors

    The Journal of Parallel and Distributed Computing publishes original research papers and timely review articles on the theory, design, evaluation, and use of parallel and/or distributed computing systems. The journal also features special issues on these topics; again covering the full range from the design to the use of our targeted systems ...

  20. Fast, Accurate and Distributed Simulation of novel HPC systems

    Fast, Accurate and Distributed Simulation of novel HPC systems incorporating ARM and RISC-V CPUs ... HPDC '24: Proceedings of the 33rd International Symposium on High-Performance Parallel and Distributed Computing. June 2024. 436 pages. ISBN: 9798400704130. DOI: 10.1145/3625549. Chair: Patrizio Dazzi, Co ...

  21. PDF Journal of Parallel and Distributed Computing

    We present and analyze Graph_Disperse_BFS (Algorithm 4), a BFS-based algorithm that solves Dispersion of k ≤ n robots on an arbitrary n-node graph using O(log D + log k) bits of memory at each robot in the global communication model; the algorithm has a lower run-time than the previously known bound.

  22. Research on Parallel Computing Teaching: state of the art and future

    This research full paper identifies how the teaching of parallel computing has been developing over the years. The learning of parallel and distributed computing is fundamental for computing professionals, due to the popularization of parallel architectures. Teaching parallel computing involves theoretical concepts and the development of practical skills. Its content is dense and comprises ...

  23. Parallel and Distributed Computing

    Instead, there will be required readings from on-line resources posted to the class schedule. In addition, we will also read and discuss one or two research papers most weeks. Research paper assignments will be posted to the class schedule and also be listed on the paper reading schedule (available week 2). The paper assignments will be updated ...

  24. Fuzzy Modelling Algorithms and Parallel Distributed Compensation for

    A Feature Paper should be a substantial original Article that involves several techniques or approaches, provides an outlook for future research directions and describes possible research applications. Feature papers are submitted upon individual invitation or recommendation by the scientific editors and must receive positive feedback from the ...

  25. Journal of Parallel and Distributed Computing

    Journal of Parallel and Distributed Computing. Supports open access. 10.3 CiteScore. 3.4 Impact Factor. ... Research article, full text access: TERMS: Task management policies to achieve high performance for mixed workloads using surplus resources. Jinyu Yu ...

  26. [2409.00876] Rapid GPU-Based Pangenome Graph Layout

    Computer Science > Distributed, Parallel, and Cluster Computing. arXiv:2409.00876 (cs) [Submitted on 2 Sep 2024] ... Pjotr Prins, Erik Garrison, Zhiru Zhang. View a PDF of the paper titled Rapid GPU-Based Pangenome Graph Layout, by Jiajie Li and 8 other authors.

  27. Top 10 Research Topics in Parallel and Distributed Computing

    The specific pressure in locations of the internet with concurrent enhancement in the availability of big data with several users has to accurate the computing tasks in parallel. Parallel and distributed computing will take place in several research areas such as networks, software engineering, computer science, computer architecture, operating systems, algorithms, etc.

  28. Game Programming Meets Parallel Computing: Teaching Parallel Computing

    With the rapid advancement of the gaming industry and the demand to create high-performance games, game programmers must be well-versed in concurrent programming and parallel computing concepts. This paper presents a set of project-based learning activities for teaching parallel computing concepts to game programmers.

  29. CFP

    In conjunction with the 31st IEEE International Conference on High-Performance Computing, Data, & Analytics . Call for Paper Submission. High Performance Computing (HPC) and, in general, Parallel and Distributed Computing (PDC) is ubiquitous. Every computing device, from a smartphone to a supercomputer, relies on parallel processing.