
Commit MetaInfo

Revision: 3b76cc108f89a12dfd0b71a44e7c7f3407c600a9 (tree)
Date: 2022-10-02 21:31:29
Author: Albert Mietus < albert AT mietus DOT nl >
Committer: Albert Mietus < albert AT mietus DOT nl >

Log Message

Fix typos (by Grammarly)

Change Summary

Diff

diff -r 6b80b90e76fb -r 3b76cc108f89 CCastle/2.Analyse/8.ConcurrentComputingConcepts.rst
--- a/CCastle/2.Analyse/8.ConcurrentComputingConcepts.rst Sat Oct 01 13:10:18 2022 +0200
+++ b/CCastle/2.Analyse/8.ConcurrentComputingConcepts.rst Sun Oct 02 14:31:29 2022 +0200
@@ -2,13 +2,13 @@
22
33 .. _ConcurrentComputingConcepts:
44
5-====================================
6-Concurrent Computing Concepts (BUSY)
7-====================================
5+=============================
6+Concurrent Computing Concepts
7+=============================
88
9-.. post::
9+.. post:: 2022/09/30
1010 :category: Castle DesignStudy
11- :tags: Castle, Concurrency, DRAFT
11+ :tags: Castle, Concurrency
1212
1313 Sooner as we realize, even embedded systems will have piles & heaps of cores, as I described in
1414 “:ref:`BusyCores`”. Castle should make it easy to write code for all of them: not to keep them busy, but to maximize
@@ -25,8 +25,8 @@
2525 Basic terminology
2626 =================
2727
28-Many theories are available, as are some more practical expertises, regrettably hardly non of them share a common
29-vocabulary. For that reason,I first describe some basic terms, and how they are used in these blogs. As always, we use Wikipedia
28+Many theories are available, as is some more practical expertise; regrettably, hardly any of them share a common
29+vocabulary. For that reason, I first describe some basic terms, and how they are used in these blogs. As always, we use Wikipedia
3030 as common ground and add links for a deep dive.
3131 |BR|
3232 Again, we use ‘task’ as the most generic term for work-to-be-executed; that can be (in) a process, (on) a thread, (by) a
@@ -109,7 +109,7 @@
109109 bigger part.
110110 |BR|
111111 The big disadvantage of this model is that is hazardous: The programmer needs to insert Critical_Sections into his code
112-at all places that *variable* is used. Even a single acces to a shared variable, that is not protected by a
112+at all places that *variable* is used. Even a single access to a shared variable, that is not protected by a
113113 Critical-Section_, can (will) break the whole system [#OOCS]_.
114114
115115
@@ -200,7 +200,7 @@
200200 ---------------
201201
202202 Both the writer and the reader can be *blocking* (or not); which is a facet of the function-call. A blocking reader it
203-will always return when a message is available -- and will pauze until then. Equally, the write-call can block: pauze
203+will always return when a message is available -- and will pause until then. Equally, the write-call can block: pause
204204 until the message can be sent -- e.g. the reader is available (rendezvous) or a message buffer is free.
205205
206206 When the call is non-blocking, the call will return without waiting and yield a flag whether it was successful or not.
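The blocking/non-blocking distinction in the hunk above can be sketched with Python's standard `queue` module (an illustrative stand-in, not the blog's own code; the channel name is made up):

```python
import queue

ch = queue.Queue(maxsize=1)        # a tiny channel with a one-message buffer
ch.put("hello")                    # write succeeds: the buffer slot was free

msg = ch.get(block=True)           # blocking read: pauses until a message is available
print(msg)                         # hello

try:
    ch.get(block=False)            # non-blocking read: returns immediately ...
except queue.Empty:
    print("no message")            # ... signalling "not successful" instead of waiting
```

With `block=False` the call never pauses; success or failure is reported at once, which matches the "yield a flag" behaviour described above.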
@@ -210,31 +210,31 @@
210210 Futures (or promises)
211211 ~~~~~~~~~~~~~~~~~~~~~
212212
213-A modern variant of non-blocking makes uses of “Futures_”. The call will always return this opaque data-structure
213+A modern variant of non-blocking makes use of “Futures_”. The call will always return this opaque data structure
214214 immediately. It may be a blank -- but the procedure can continue. Eventually, that data will be filled in “by the
215215 background”. It also contains a flag (like ``done``), so the programmer can check (using an if) [#future-CS]_ whether
216-the data is processes.
216+the data is processed.
217217
218218
219219 Uni/Bi-Directional, Many/Broad-cast
220220 -----------------------------------
221221
222-Message can be sent to one receiver, to many, or even to everybody. Usually this is modeled as an characteristic of the
223-channel. And at the same time, that channel can be used to send message in oneway, or in two-ways.
222+Messages can be sent to one receiver, to many, or even to everybody. Usually, this is modeled as a characteristic of the
223+channel. At the same time, that channel can be used to send messages in one or two directions.
224224
225-It depends on the context on the exact intent. By example in (TCP/IP) `networking, ‘Broadcasting’
225+The exact intent depends on the context. For example, in (TCP/IP) `networking, ‘Broadcasting’
226226 <https://en.wikipedia.org/wiki/Broadcasting_(networking)>`__ (all not point-to-point variants) focus on reducing the
227227 amount of data on the network itself. In `distributed computing ‘Broadcasting’
228-<https://en.wikipedia.org/wiki/Broadcast_(parallel_pattern)>`__ is a parallel Design pattern. Whereas the `’Broadcast’
229-flag <https://en.wikipedia.org/wiki/Broadcast_flag>`_ in TV steaming is a complete other idea: is it allowed to save
230-(record) a TV broadcast...
228+<https://en.wikipedia.org/wiki/Broadcast_(parallel_pattern)>`__ is a parallel design pattern. Whereas the `‘Broadcast’
229+flag <https://en.wikipedia.org/wiki/Broadcast_flag>`_ in TV streaming is completely different: is it allowed to save
230+(record) a broadcast...
231231
232-We use those teams on the functional aim. We consider the above mentioned RCP connection as **Unidirectional** -- even
233-the channel can carry the answer. When both endpoints can take the initiative to sent messages, we call it
232+We use those terms with a functional aim. We consider the above-mentioned RPC connection as **Unidirectional** -- even
233+though the channel can carry the answer. When both endpoints can take the initiative to send messages, we call it
234234 **Bidirectional**.
235235 |BR|
236236 With only 2 endpoints, we call the connection **Point-to-Point** (*p2p*). When more endpoints are concerned, it’s
237-**Broadcast** when a message is send to all other (on that channel), and **Manycast** when the user (the programmer) can
237+**Broadcast** when a message is sent to all others (on that channel), and **Manycast** when the user (the programmer) can
238238 (somehow) select a subset.
239239
240240
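The "Futures" paragraph in the hunk above can be illustrated with Python's `concurrent.futures` (a sketch under the assumption that a thread pool plays the "background"; `slow_add` is a hypothetical workload):

```python
from concurrent.futures import ThreadPoolExecutor
import time

def slow_add(a, b):
    time.sleep(0.1)          # stand-in for work done "by the background"
    return a + b

with ThreadPoolExecutor() as pool:
    fut = pool.submit(slow_add, 2, 3)   # returns the opaque Future immediately
    print(fut.done())                   # typically False: data not yet processed
    print(fut.result())                 # 5 -- blocks until the value is filled in
```

The `done()` flag is the "check (using an if)" mentioned in the text; `result()` falls back to blocking when the data is not ready yet.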
@@ -393,7 +393,7 @@
393393 There is no global state, no central synchronisation, no “shared memory”, and no (overall) orchestration. Everything is
394394 decentral.
395395
396-One can model many well-known software systems as an Actor-Model_: like email, SOAP, and other web services. Also
396+One can model many well-known software systems as an Actor-Model_: like email, SOAP, and other web services. Also,
397397 interrupt-handling can be modeled with actors: An extern message triggers the “*interrupt-handler* actor” --async of the
398398 main code; another *actor*-- which has to send data (aka a message) to the main actor.
399399
@@ -413,49 +413,49 @@
413413
414414 .. [#DistributedDiff]
415415 There a two (main) differences between Distributed-Computing_ and Multi-Core_. Firstly, all “CPUs” in
416- Distributed-Computing_ are active, independent and asynchronous. There is no option to share a “core” (as
416+ Distributed-Computing_ are active, independent, and asynchronous. There is no option to share a “core” (as
417417 commonly/occasionally done in Multi-process/Threaded programming); nor is there “shared memory” (one can only send
418418 messages over a network).
419419 |BR|
420- Secondly, collaboration with (network based) messages is a few orders slower then (shared) memory communication. This
421- makes it harder to speed-up; the delay of messaging shouldn't be bigger as the acceleration do doing thing in
420+ Secondly, collaboration with (network-based) messages is a few orders slower than (shared) memory communication. This
421+ makes it harder to speed up; the delay of messaging shouldn't be bigger than the acceleration when doing things in
422422 parallel.
423423 |BR|
424424 But that condition does apply to Multi-Core_ too. Although the (timing) numbers do differ.
425425
426426 .. [#wall-time]
427- As reminder: We speak about *CPU-time* when we count the cycles that a core us busy; so when a core is waiting, no
428- CPU-time is used. And we use *wall-time* when we time according the “the clock on the wall”.
427+ As a reminder: We speak about *CPU-time* when we count the cycles that make a core busy; so when a core is waiting, no
428+ CPU-time is used. And we use *wall-time* when we time according to “the clock on the wall”.
429429
430430 .. [#OOCS]
431431 The brittleness of Critical-Sections_ can be reduced by embedding (the) (shared-) variable in an OO abstraction. By
432- using *getters and *setters*, that controll the access, the biggest risk is (mostly) gone. That does not, however,
433- prevent deadlocks_ nor livelocks_.
432+ using *getters* and *setters* that control the access, the biggest risk is (mostly) gone. That does not, however,
433+ prevent deadlocks_ or livelocks_.
434434 |BR|
435- And still, all developers has be disciplined to use that abstraction ... always.
435+ And still, all developers have to be disciplined to use that abstraction ... *always*.
436436
437437 .. [#MPCS]
438- This is not completely correct; Message-Passing_ can be implemented on top of shared-memory. Then, the implementation
438+ This is not completely correct; Message-Passing_ can be implemented on top of shared memory. Then, the implementation
439439 of this (usually) OO-abstraction contains the Critical-Sections_; a bit as described in the footnote above.
440440
441441 .. [#timesCPU]
442442 And the overhead will grow when we add more cores. Firstly while more “others” have to wait (or spin), and secondly
443443 that the number of communications will grow with the number of cores too. As described in the :ref:`sidebar
444- <Threads-in-CPython>` in :ref:`BusyCores`, solving this can give more overhead then the speed we are aiming for.
444+ <Threads-in-CPython>` within :ref:`BusyCores`, solving this can give more overhead than the speed we are aiming for.
445445
446446 .. [#future-CS]
447447 Remember: to be able to “fill in” that Future-object “by the background” some other thread or so is needed. And so, a
448448 Critical-Section_ is needed. For the SW-developer the interface is simple: read a flag (e.g. ``.done()``. But using
449- that to often can result is in a slow system.
449+ that too often can result in a slow system.
450450
451451 .. [#anycast]
452- Broadcasting_ is primarily know from “network messages”; where is has many variants -- mostly related to the
453- physical network abilities, and the need to save bandwith. As an abstraction, they can be used in “software messages”
452+ Broadcasting_ is primarily known from “network messages”, where it has many variants -- mostly related to the
453+ physical network abilities, and the need to save bandwidth. As an abstraction, they can be used in “software messages”
454454 (aka message passing) too.
455455
456456 .. [#bool-algebra]
457457 Those ‘rules’ resembles the boolean algebra, that most developers know: `NOT(x OR y) == NOT(x) AND NOT(y)`. See
458- wikipedia for examples on ACP_.
458+ Wikipedia for examples of ACP_.
459459
460460 .. _ACP: https://en.wikipedia.org/wiki/Algebra_of_communicating_processes
461461 .. _Actor-Model-Theory: https://en.wikipedia.org/wiki/Actor_model_theory
diff -r 6b80b90e76fb -r 3b76cc108f89 CCastle/2.Analyse/CCC-sidebar-CS.irst
--- a/CCastle/2.Analyse/CCC-sidebar-CS.irst Sat Oct 01 13:10:18 2022 +0200
+++ b/CCastle/2.Analyse/CCC-sidebar-CS.irst Sun Oct 02 14:31:29 2022 +0200
@@ -15,16 +15,16 @@
1515
1616 .. rubric:: Solve it by marking sections *‘exclusive’*.
1717
18- In essence, we have to tell the “computer” that a line (or a few lines) is *atomic*. To enforce the access exclusive,
19- the compiler will add some extra fundamental instructions (specific for that type of CPU) to assure this. A check is
20- inserted just before the section is entered, and the thread will be suspended when another task is using it. When
21- access is granted, a bit of bookkeeping is done -- so that the “check” in other threads will halt). That bookkeeping
22- is updated when leaving. Along with more bookkeeping to un-pause the suspended threads.
18+ Essentially, we need to tell the “computer” that a line (or a few lines) is *atomic*. To enforce that the access is
19+ exclusive, the compiler will add some fundamental instructions (specific for that type of CPU) to assure this. A
20+ check is inserted just before the section, which can suspend the thread when another task is in the CS. When access
21+ is granted, a bit of bookkeeping is needed -- so that the “check” in other threads will halt. That bookkeeping is
22+ updated when leaving, along with more bookkeeping to un-pause the suspended threads.
2323
2424 .. rubric:: Complication: overhead!
2525
26- As you can imagen, this “bookkeeping” is extra complicated on a Multi-Core_ system; some global data structure is
27- needed; which is a Critical-Sections in itself.
26+ As you can imagine, this “bookkeeping” is extra complicated on a Multicore system; some global data structure is
27+ needed, which is a Critical-Section_ in itself.
2828 |BR|
2929 There are many algorithms to solve this. All with the same disadvantage: it takes a bit of time -- possible by
3030 “Spinlocking_” all other cores (for a few nanoseconds). As Critical-Sections a usually short (e.g. one assignment, or
diff -r 6b80b90e76fb -r 3b76cc108f89 CCastle/2.Analyse/CCC-sidebar-calc-demo.irst
--- a/CCastle/2.Analyse/CCC-sidebar-calc-demo.irst Sat Oct 01 13:10:18 2022 +0200
+++ b/CCastle/2.Analyse/CCC-sidebar-calc-demo.irst Sun Oct 02 14:31:29 2022 +0200
@@ -1,7 +1,7 @@
11 .. -*-rst-*-
22 included in `8.BusyCores-concepts.rst`
33
4-.. sidebar:: Some examples/Demo's
4+.. sidebar:: Some examples/Demos
55
66 .. tabs::
77