Commit MetaInfo

Revision: 1ef275296147c4435582ce90ab4e20f942f5c135 (tree)
Time: 2022-09-10 21:46:26
Author: Albert Mietus < albert AT mietus DOT nl >
Committer: Albert Mietus < albert AT mietus DOT nl >

Log Message

AsIs

Change Summary

Diff

diff -r 1d3db89e375b -r 1ef275296147 CCastle/2.Analyse/8.ConcurrentComputingConcepts.rst
--- a/CCastle/2.Analyse/8.ConcurrentComputingConcepts.rst Sun Sep 04 14:15:48 2022 +0200
+++ b/CCastle/2.Analyse/8.ConcurrentComputingConcepts.rst Sat Sep 10 14:46:26 2022 +0200
@@ -8,7 +8,7 @@
 
 .. post::
    :category: Castle DesignStudy
-   :tags: Castle, Concurrency, DRAFT
+   :tags: Castle, Concurrency, DRAFT§
 
 Sooner than we may realize, even embedded systems will have many, many cores; as I described in
 “:ref:`BusyCores`”. Castle should make it easy to write code for all of them: not to keep them busy, but to maximize
@@ -23,7 +23,7 @@
 efficiently. The exact syntax will come later.
 
 Basic terminology
-*****************
+=================
 
 There are many theories available, and some more practical expertise, but they hardly share a common vocabulary.
 For that reason, let’s describe some basic terms that will be used in these blogs. As always, we use Wikipedia as common
@@ -35,7 +35,7 @@
 .. include:: CCC-sidebar-concurrency.irst
 
 Concurrent
-==========
+----------
 
 Concurrency_ is the **ability** to “compute” multiple *tasks* at the same time.
 |BR|
@@ -56,7 +56,7 @@
 
 
 Parallelism
-===========
+-----------
 
 Parallelism_ is about executing multiple tasks (seemingly) at the same time. We will focus on running many
 concurrent tasks (of the same program) on *“as many cores as possible”*. When we assume a thousand cores, we need a
@@ -71,16 +71,17 @@
 
 
 Distributed
------------
+~~~~~~~~~~~
 
 A special form of parallelism is Distributed-Computing_: computing on many computers. Many experts consider this
 an independent field of expertise. Still --as Multi-Core_ is basically “many computers on a chip”-- it’s an
 available, adjacent [#DistributedDiff]_ theory, and we should use it to design our “best ever language”.
 
+
 .. include:: CCC-sidebar-CS.irst
 
-Efficient Communication
-***********************
+Communicating Efficiently
+=========================
 
 When multiple tasks run concurrently, they have to communicate to pass data and control progress. Unlike in a
 sequential program --where the control is trivial, as is sharing data-- this needs a bit of extra effort.
@@ -94,7 +95,7 @@
 
 
 Shared Memory
-=============
+-------------
 
 In this model, all tasks (usually threads or processes) have some shared/common memory; typically “variables”. As the access
 is asynchronous, there is a risk that the data is updated “at the same time” by two or more tasks. This can lead to invalid
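That risk is easy to demonstrate. Below is a minimal, illustrative Python sketch (not Castle code): two threads update one shared counter, and a `threading.Lock` plays the role of the low-level guard around the critical section.

```python
import threading

counter = 0                         # the shared "variable"
lock = threading.Lock()             # guards the critical section

def add(n):
    global counter
    for _ in range(n):
        with lock:                  # without this, the read-modify-write of two
            counter += 1            # tasks can interleave, and updates get lost

threads = [threading.Thread(target=add, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                      # → 200000 (remove the lock, and it may be less)
```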
@@ -110,7 +111,7 @@
 
 
 Messages
-========
+--------
 
 A more modern approach is Message-Passing_: a task sends some information to another; this can be a message, some data,
 or an event. In all cases, there is a distinct sender and receiver --and apparently no common/shared memory-- so no
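As an illustration (in Python, not Castle), a `queue.Queue` can act as such a channel between two threads: only messages travel between sender and receiver, and no variables are shared by both sides.

```python
import queue
import threading

mailbox = queue.Queue()          # the "channel": only messages, no shared variables
received = []                    # touched by the receiving task only

def worker():
    while True:
        msg = mailbox.get()      # the distinct receiver; blocks until a message arrives
        if msg is None:          # conventional "stop" message
            break
        received.append(msg)

t = threading.Thread(target=worker)
t.start()
mailbox.put("start")             # the distinct sender; no locks in user code
mailbox.put(42)
mailbox.put(None)
t.join()
print(received)                  # → ['start', 42]
```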
@@ -133,16 +134,19 @@
 |BR|
 Notice: as the compiler will insert the (low-level) Semaphores_, the risk that a developer forgets one is gone!
 
+.. _MPA:
 
 Messaging Aspects
------------------
+=================
 
 There are many variants of messaging, mostly combinations of a few fundamental aspects. Let’s mention some basic ones.
+|BR| In :ref:`MPA-examples` some existing message-passing systems are classified in those terms, for those who
+prefer a more practical characterisation.
 
 .. include:: CCC-sidebar-async.irst
 
 (A)Synchronous
-~~~~~~~~~~~~~~
+--------------
 
 **Synchronous** messages resemble normal function-calls. Typically a “question” is sent, the caller awaits the
 answer-message, and that answer is returned. This can be seen as a layer on top of the more fundamental send/receive
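That layering can be sketched in a few lines of Python (illustrative only): a synchronous “call” is built from two asynchronous send/receive queues, one for questions and one for answers.

```python
import queue
import threading

requests, replies = queue.Queue(), queue.Queue()

def server():
    while True:
        question = requests.get()
        if question is None:          # conventional "stop" message
            break
        replies.put(question * 2)     # the "answer-message"

def sync_call(question):
    """A synchronous call layered on top of asynchronous send/receive."""
    requests.put(question)            # send the question ...
    return replies.get()              # ... and await the answer

t = threading.Thread(target=server)
t.start()
answer = sync_call(21)
requests.put(None)
t.join()
print(answer)                         # → 42
```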
@@ -159,7 +163,7 @@
 
 
 (Un)Buffered
-~~~~~~~~~~~~
+------------
 
 Despite this not truly being a characteristic of the message itself, messages can be *buffered*, or not. It is about
 piping, transporting the message: can this “connection” (see below) *contain/save/store* messages? When there is no
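In Python, `queue.Queue(maxsize=...)` gives such a buffered connection; a sketch (note: Python has no true unbuffered/rendezvous queue, so this only illustrates the buffered case):

```python
import queue

channel = queue.Queue(maxsize=2)      # a buffered channel: stores up to 2 messages

channel.put("a")                      # the sender can run ahead of the reader
channel.put("b")
try:
    channel.put_nowait("c")           # buffer full: a blocking put would pause here
except queue.Full:
    print("buffer full, sender must wait")

print(channel.get())                  # prints "a": reading frees a slot again
```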
@@ -171,7 +175,7 @@
 Note: this is always asymmetric; messages need to be sent before they can be read.
 
 Connected Channels (or not)
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+---------------------------
 
 Messages can be sent over (pre-) *connected channels* or to freely addressable end-points. Some people use the term “connection
 oriented” for those connected-channels; others use the term “channel” more generically, for any medium that is
@@ -190,9 +194,8 @@
 number of channels).
 
 
-
 (Non-) Blocking
-~~~~~~~~~~~~~~~
+---------------
 
 Both the writer and the reader can be *blocking* (or not), which is a facet of the function-call. A blocking reader
 will always return when a message is available -- and will pause until then.
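The difference is easy to show with Python's `queue` module (illustrative, not Castle syntax): a non-blocking read returns immediately, with or without a message, while a blocking read pauses until one arrives.

```python
import queue

inbox = queue.Queue()

# Non-blocking read: returns immediately, whether a message is there or not.
try:
    first = inbox.get_nowait()
except queue.Empty:
    first = None                 # the caller must handle "nothing there yet"

inbox.put("hello")
second = inbox.get()             # blocking read: pauses until a message is available
print(first, second)             # → None hello
```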
@@ -205,16 +208,15 @@
 
 as well.
 
 
-
 Uni/Bi-Directional, Broadcast
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+-----------------------------
 
 Messages --or actually the channel [#channelDir]_ that transports them-- can be *unidirectional*: from sender to receiver only;
 *bidirectional*: both sides can send and receive; or *broadcast*: one message is sent to many receivers [#anycast]_.
 
 
 Reliability & Order
-~~~~~~~~~~~~~~~~~~~
+-------------------
 
 Especially when studying “network messages”, we have to consider Reliability_ too. Many developers assume that a sent
 message is always received, and that when multiple messages are sent, they are received in the same order. In most
@@ -271,37 +273,13 @@
 Then, a *faster* conversation with a bit of noise is commonly preferred.
 
 
-Some examples
--------------
-
-In the section below, we mention a few everyday message-passing systems, to shed light on the theoretical features.
+------------------------
 
-Pipes
-~~~~~
-
-The famous *Unix Pipes* are unidirectional, reliable, blocking, asynchronous, buffered, non-networking **data-only**
-messages. The (“stdout”) output of one process is fed as input to (one) other process. It’s data only, in one direction
--- but the control can go in two directions: when the second (receiving) process can’t process the data (and the buffers
-become full), the first process can be slowed down (although this is not a well-known feature).
-
-It’s also an example of a quite implicit channel: the programmers (of both programs) have nothing (or little) to do
-extra, to make it possible.
+.. todo:: All below is draft and needs work!!!!
 
 
-
-------------------------
-
-.. todo::
-
-
- * Pipe : kind of data messages
-
-
- .. todo:: All below is draft and needs work!!!!
-
-
-Models
-******
+Process calculus
+================
 
 Probably the oldest model to describe concurrency is the
 (all tokens move at the same timeslot) -- which is hard to implement (efficiently) on Multi-Core_.
@@ -379,3 +357,4 @@
 .. _RPC: https://en.wikipedia.org/wiki/Remote_procedure_call
 .. _Broadcasting: https://en.wikipedia.org/wiki/Broadcasting_(networking)
 .. _Reliability: https://en.wikipedia.org/wiki/Reliability_(computer_networking)
+.. _Process-Calculus: https://en.wikipedia.org/wiki/Process_calculus
diff -r 1d3db89e375b -r 1ef275296147 CCastle/2.Analyse/8b.short_MPA_examples.rst
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/CCastle/2.Analyse/8b.short_MPA_examples.rst Sat Sep 10 14:46:26 2022 +0200
@@ -0,0 +1,27 @@
+.. _MPA-examples:
+
+Everyday Message Passing examples (ToDo)
+========================================
+
+In :ref:`ConcurrentComputingConcepts` we catalogued some :ref:`MPA` quite briefly. As a kind of addendum, we show a few well-known message-passing systems, to shed some light on those theoretical features.
+
+Pipes
+=====
+
+The famous *Unix Pipes* are unidirectional, reliable, blocking, asynchronous, buffered, non-networking **data-only**
+messages. The (“stdout”) output of one process is fed as input to (one) other process. It’s data only, in one direction
+-- but the control can go in two directions: when the second (receiving) process can’t process the data (and the buffers
+become full), the first process can be slowed down (although this is not a well-known feature).
+
+It’s also an example of a quite implicit channel: the programmers (of both programs) have nothing (or little) to do
+extra, to make it possible.
+
+DDS
+===
+
+E-Mail
+======
+
+(BSD) Sockets
+=============
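The Pipes description above (unidirectional, buffered, blocking, data-only) can be sketched in a few lines of Python with an anonymous `os.pipe`; a shell pipeline connects two processes through exactly such a channel.

```python
import os

r, w = os.pipe()                  # one unidirectional, buffered channel
os.write(w, b"data only\n")       # the sending side: bytes, nothing else
os.close(w)                       # closing the write end signals EOF downstream
data = os.read(r, 1024)           # a blocking read on the receiving side
os.close(r)
print(data)                       # → b'data only\n'
```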