An introduction to parallel programming
Record Type: Language materials, printed : monographic
Author: Pacheco, Peter S.
Place of Publication: Amsterdam
Published: Morgan Kaufmann
Year of Publication: c2011
Description: xix, 370 p. : ill. ; 25 cm.
Subject: Parallel programming (Computer science)
ISBN: 978-0-12-374260-5 (bound)
ISBN: 0-12-374260-9 (bound)
Content Note: Machine generated contents note:
1 Why Parallel Computing: 1.1 Why We Need Ever-Increasing Performance -- 1.2 Why We're Building Parallel Systems -- 1.3 Why We Need to Write Parallel Programs -- 1.4 How Do We Write Parallel Programs? -- 1.5 What We'll Be Doing -- 1.6 Concurrent, Parallel, Distributed -- 1.7 The Rest of the Book -- 1.8 A Word of Warning -- 1.9 Typographical Conventions -- 1.10 Summary -- 1.11 Exercises
2 Parallel Hardware and Parallel Software: 2.1 Some Background -- 2.2 Modifications to the von Neumann Model -- 2.3 Parallel Hardware -- 2.4 Parallel Software -- 2.5 Input and Output -- 2.6 Performance -- 2.7 Parallel Program Design -- 2.8 Writing and Running Parallel Programs -- 2.9 Assumptions -- 2.10 Summary -- 2.11 Exercises
3 Distributed Memory Programming with MPI: 3.1 Getting Started -- 3.2 The Trapezoidal Rule in MPI -- 3.3 Dealing with I/O -- 3.4 Collective Communication -- 3.5 MPI Derived Datatypes -- 3.7 A Parallel Sorting Algorithm -- 3.8 Summary -- 3.9 Exercises -- 3.10 Programming Assignments
4 Shared Memory Programming with Pthreads: 4.1 Processes, Threads and Pthreads -- 4.2 Hello, World -- 4.3 Matrix-Vector Multiplication -- 4.4 Critical Sections -- 4.5 Busy-Waiting -- 4.6 Mutexes -- 4.7 Producer-Consumer Synchronization and Semaphores -- 4.8 Barriers and Condition Variables -- 4.9 Read-Write Locks -- 4.10 Caches, Cache-Coherence, and False Sharing -- 4.11 Thread-Safety -- 4.12 Summary -- 4.13 Exercises -- 4.14 Programming Assignments
5 Shared Memory Programming with OpenMP: 5.1 Getting Started -- 5.2 The Trapezoidal Rule -- 5.3 Scope of Variables -- 5.4 The Reduction Clause -- 5.5 The Parallel For Directive -- 5.6 More About Loops in OpenMP: Sorting -- 5.7 Scheduling Loops -- 5.8 Producers and Consumers -- 5.9 Caches, Cache-Coherence, and False Sharing -- 5.10 Thread-Safety -- 5.11 Summary -- 5.12 Exercises -- 5.13 Programming Assignments
6 Parallel Program Development: 6.1 Two N-Body Solvers -- 6.2 Tree Search -- 6.3 A Word of Caution -- 6.4 Which API? -- 6.5 Summary -- 6.6 Exercises -- 6.7 Programming Assignments
7 Where to Go from Here
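The contents note names the book's running worked example: the trapezoidal rule, treated with MPI in 3.2 and again with OpenMP in 5.2. The sketch below shows the general shape of such an MPI program; it is not the book's own code, and the integrand f(x) = x*x, the interval [0, 3], and the subinterval count n = 1024 are arbitrary choices made here for illustration. Each process integrates its own slice of the interval, and a collective reduction (cf. 3.4 Collective Communication) sums the partial results on process 0.

#include <mpi.h>
#include <stdio.h>

static double f(double x) { return x * x; }   /* arbitrary illustrative integrand */

/* Composite trapezoidal rule on [a, b] with n subintervals of width h. */
static double trap(double a, double b, int n, double h) {
    double sum = (f(a) + f(b)) / 2.0;
    for (int i = 1; i < n; i++)
        sum += f(a + i * h);
    return sum * h;
}

int main(int argc, char *argv[]) {
    int rank, size;
    const double a = 0.0, b = 3.0;   /* global interval, arbitrary */
    const int n = 1024;              /* global subinterval count, arbitrary */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double h = (b - a) / n;          /* step size, the same on every process */
    int local_n = n / size;          /* assumes size divides n evenly */
    double local_a = a + rank * local_n * h;
    double local_b = local_a + local_n * h;
    double local_int = trap(local_a, local_b, local_n, h);

    /* Sum the partial integrals onto process 0. */
    double total = 0.0;
    MPI_Reduce(&local_int, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("With n = %d trapezoids, integral from %g to %g ~= %.6f\n",
               n, a, b, total);

    MPI_Finalize();
    return 0;
}

Compiled with mpicc and launched with, e.g., mpiexec -n 4 ./trap, each of the four processes integrates a quarter of the interval; the sketch assumes the process count divides n.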
ISBD:
Pacheco, Peter S.
An introduction to parallel programming / Peter S. Pacheco. - Amsterdam : Morgan Kaufmann, c2011. - xix, 370 p. : ill. ; 25 cm.
Includes bibliographical references (p. 357-359) and index.
ISBN 978-0-12-374260-5. ISBN 0-12-374260-9
Parallel programming (Computer science)
MARC:
LDR     02505nam0 2200193 450
001     259979
005     20110411121039.0
010 1   $a 978-0-12-374260-5 $b bound $d NT$2105
010 1   $a 0-12-374260-9 $b bound $d NT$2105
100     $a 20120105d2011 m y0engy01 b
101 0   $a eng
102     $a nl
105     $a a a 001zy
200 1   $a An introduction to parallel programming $f Peter S. Pacheco
210     $a Amsterdam $d c2011 $c Morgan Kaufmann
215 1   $a xix, 370 p. $c ill. $d 25 cm.
320     $a Includes bibliographical references (p. 357-359) and index.
327 0   $a Machine generated contents note: 1 Why Parallel Computing 1.1 Why We Need Ever-Increasing Performance 1.2 Why We're Building Parallel Systems 1.3 Why We Need to Write Parallel Programs 1.4 How Do We Write Parallel Programs? 1.5 What We'll Be Doing 1.6 Concurrent, Parallel, Distributed 1.7 The Rest of the Book 1.8 A Word of Warning 1.9 Typographical Conventions 1.10 Summary 1.11 Exercises 2 Parallel Hardware and Parallel Software 2.1 Some Background 2.2 Modifications to the von Neumann Model 2.3 Parallel Hardware 2.4 Parallel Software 2.5 Input and Output 2.6 Performance 2.7 Parallel Program Design 2.8 Writing and Running Parallel Programs 2.9 Assumptions 2.10 Summary 2.11 Exercises 3 Distributed Memory Programming with MPI 3.1 Getting Started 3.2 The Trapezoidal Rule in MPI 3.3 Dealing with I/O 3.4 Collective Communication 3.5 MPI Derived Datatypes 3.7 A Parallel Sorting Algorithm 3.8 Summary 3.9 Exercises 3.10 Programming Assignments 4 Shared Memory Programming with Pthreads 4.1 Processes, Threads and Pthreads 4.2 Hello, World 4.3 Matrix-Vector Multiplication 4.4 Critical Sections 4.5 Busy-Waiting 4.6 Mutexes 4.7 Producer-Consumer Synchronization and Semaphores 4.8 Barriers and Condition Variables 4.9 Read-Write Locks 4.10 Caches, Cache-Coherence, and False Sharing 4.11 Thread-Safety 4.12 Summary 4.13 Exercises 4.14 Programming Assignments 5 Shared Memory Programming with OpenMP 5.1 Getting Started 5.2 The Trapezoidal Rule 5.3 Scope of Variables 5.4 The Reduction Clause 5.5 The Parallel For Directive 5.6 More About Loops in OpenMP: Sorting 5.7 Scheduling Loops 5.8 Producers and Consumers 5.9 Caches, Cache-Coherence, and False Sharing 5.10 Thread-Safety 5.11 Summary 5.12 Exercises 5.13 Programming Assignments 6 Parallel Program Development 6.1 Two N-Body Solvers 6.2 Tree Search 6.3 A Word of Caution 6.4 Which API? 6.5 Summary 6.6 Exercises 6.7 Programming Assignments 7 Where to Go from Here
606 #   $a Parallel programming (Computer science) $3 151253
676     $a 005.275
700 1   $a Pacheco $b Peter S. $3 271747
801 0   $a cw $b 嶺東科技大學圖書館 $g CCR
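The LDR line is the record leader, a fixed-width field whose layout comes from ISO 2709, so reading it is a small exercise in offset arithmetic. Below is a minimal sketch, assuming the standard ISO 2709/UNIMARC leader positions (0-4: record length; 5-7: status, record type, bibliographic level; 12-16: base address of data) and using the leader string displayed above, whose trailing entry-map characters the display appears to truncate:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    /* Leader as shown in the record above (full ISO 2709 leaders are
       24 characters; this display is truncated). */
    const char *ldr = "02505nam0 2200193 450";

    char buf[6];

    memcpy(buf, ldr, 5);        /* positions 0-4: record length in bytes */
    buf[5] = '\0';
    printf("record length: %ld bytes\n", strtol(buf, NULL, 10));

    /* Position 5: record status ('n' = new record).
       Position 6: type of record ('a' = language materials, printed).
       Position 7: bibliographic level ('m' = monograph). */
    printf("status %c, type %c, level %c\n", ldr[5], ldr[6], ldr[7]);

    memcpy(buf, ldr + 12, 5);   /* positions 12-16: base address of data */
    buf[5] = '\0';
    printf("base address of data: %ld\n", strtol(buf, NULL, 10));

    return 0;
}

For this record it reports a length of 2505 bytes and a base address of 193; the type 'a' and level 'm' codes are what the labeled view above renders as "Language materials, printed : monographic".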
Items
1 record • Page 1

Inventory Number: 315708
Location Name: 總館A區6F (Main Library, Area A, 6F)
Item Class: 一般流通 (general circulation)
Material Type: 一般圖書 (general books)
Call Number: 005.275 P116
Usage Class: 一般使用 (normal use)
Loan Status: On shelf
No. of Reservations: 0