Concurrency
Disadvantages of the parallel workers model (see the sketch after this list):
Shared State Can Get Complex
Stateless Workers
Job Ordering is Nondeterministic
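For contrast, here is a minimal sketch of the parallel workers model itself, assuming a hypothetical Job type and processFully method: a shared thread pool hands each incoming job to a worker, and that worker carries the whole job through to completion.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch of the parallel workers model (Job and processFully are hypothetical).
public class ParallelWorkersSketch {

    record Job(String payload) {}

    // Each worker executes a complete job on its own; any state shared
    // between workers would need synchronization (hence the complexity above).
    static void processFully(Job job) {
        System.out.println(Thread.currentThread().getName() + " processed " + job.payload());
    }

    public static void main(String[] args) {
        ExecutorService workers = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 10; i++) {
            Job job = new Job("job-" + i);
            // Completion order is nondeterministic, matching the disadvantage above.
            workers.submit(() -> processFully(job));
        }
        workers.shutdown();
    }
}
```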
This is also sometimes referred to as a shared-nothing concurrency model.
This is how jobs flowing through an assembly line system might look in reality.
Jobs may even be forwarded to more than one worker for concurrent processing. For instance, a job may be forwarded to both a job executor and a job logger. This diagram illustrates how all three assembly lines finish off by forwarding their jobs to the same worker (the last worker in the middle assembly line):
The assembly lines can get even more complex than this.
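A minimal sketch of a two-worker assembly line, under the assumption that each worker is a thread with its own input queue (the class and queue names are made up): jobs are forwarded from worker to worker, and no state is shared.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch of a two-stage assembly line: jobs flow executor -> logger via queues.
// The workers run until the program is interrupted; this is only a sketch.
public class AssemblyLineSketch {

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> toExecutor = new ArrayBlockingQueue<>(100);
        BlockingQueue<String> toLogger = new ArrayBlockingQueue<>(100);

        // Worker 1: "executes" the job using only its own local state,
        // then forwards the result to the next worker.
        Thread executor = new Thread(() -> {
            try {
                while (true) {
                    String job = toExecutor.take();
                    toLogger.put(job + " executed");
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        // Worker 2: logs the result; it never touches worker 1's state.
        Thread logger = new Thread(() -> {
            try {
                while (true) {
                    System.out.println(toLogger.take());
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        executor.start();
        logger.start();

        for (int i = 0; i < 5; i++) {
            toExecutor.put("job-" + i);
        }
    }
}
```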
Assembly line advantages:
No Shared State
Stateful Workers
Better Hardware Conformity
Job Ordering is Possible
The main disadvantage of the assembly line concurrency model is that the execution of a job is often spread out over multiple workers, and thus over multiple classes in your project. This makes it harder to see exactly what code is being executed for a given job, and it may also be harder to write the code.
An application can be concurrent but not parallel. This means that it processes more than one task at the same time, but the tasks are not broken down into subtasks.
An application can also be parallel but not concurrent. This means that the application only works on one task at a time, and this task is broken down into subtasks which can be processed in parallel.
Additionally, an application can be neither concurrent nor parallel. This means that it works on only one task at a time, and the task is never broken down into subtasks for parallel execution.
Finally, an application can also be both concurrent and parallel, in that it both works on multiple tasks at the same time and also breaks each task down into subtasks for parallel execution. However, some of the benefits of concurrency and parallelism may be lost in this scenario, as the CPUs in the computer are already kept reasonably busy with either concurrency or parallelism alone. Combining them may lead to only a small performance gain or even a performance loss. Make sure you analyze and measure before you adopt a concurrent, parallel model blindly.
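A small illustration of the two dimensions, assuming a hypothetical task of summing a list (the class name is made up): a thread pool gives concurrency across independent tasks, while a parallel stream splits a single task into subtasks.

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Illustration of concurrency (several independent tasks) versus
// parallelism (one task split into subtasks).
public class ConcurrencyVsParallelism {

    public static void main(String[] args) {
        // Concurrency: the application works on several independent tasks at once.
        // Whether they also run in parallel depends on the number of available CPUs.
        ExecutorService pool = Executors.newFixedThreadPool(2);
        for (int i = 0; i < 4; i++) {
            int taskId = i;
            pool.submit(() -> System.out.println("task " + taskId + " done"));
        }
        pool.shutdown();

        // Parallelism: one task (summing a list) is broken into subtasks that the
        // runtime may execute on several cores at the same time.
        long sum = List.of(1, 2, 3, 4, 5, 6, 7, 8).parallelStream()
                .mapToLong(Integer::longValue)
                .sum();
        System.out.println("sum = " + sum);
    }
}
```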
A critical section is a section of code that is executed by multiple threads and where the sequence of execution for the threads makes a difference in the result of the concurrent execution of the critical section.
The situation where two threads compete for the same resource, and the sequence in which the resource is accessed is significant, is called a race condition. A code section that leads to race conditions is called a critical section.
Race conditions only occur when multiple threads update shared resources.
They can be avoided by proper thread synchronization in critical sections.
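A minimal counter sketch (the class is illustrative, not from the text above) showing a critical section and how synchronization removes the race condition:

```java
// The read-modify-write in increment() is a critical section: two threads
// interleaving here can lose updates, which is a race condition.
public class Counter {
    private long count = 0;

    // Not thread safe: count = count + 1 is really read, modify, write.
    public void incrementUnsafe() {
        count = count + 1;
    }

    // Thread safe: synchronized lets only one thread at a time execute
    // the critical section, so the race condition cannot occur.
    public synchronized void increment() {
        count = count + 1;
    }

    public synchronized long get() {
        return count;
    }
}
```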
The ImmutableValue class is thread safe, but the use of it is not. Synchronization can be used to handle that.
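A sketch of how this might look; the exact shape of ImmutableValue and the surrounding Calculator class are assumptions for illustration. The immutable object is safe on its own, but the shared reference pointing to it is mutable state, so the code using it synchronizes.

```java
// The value is set once in the constructor and never changed,
// so an ImmutableValue instance is thread safe by itself.
public class ImmutableValue {
    private final int value;

    public ImmutableValue(int value) {
        this.value = value;
    }

    public int getValue() {
        return value;
    }

    // "Mutation" returns a new instance instead of changing this one.
    public ImmutableValue add(int valueToAdd) {
        return new ImmutableValue(this.value + valueToAdd);
    }
}

// The *use* of the immutable object can still race: replacing the shared
// reference is a read-then-write, so it is guarded with synchronized.
class Calculator {
    private ImmutableValue currentValue = new ImmutableValue(0);

    public synchronized ImmutableValue getValue() {
        return currentValue;
    }

    public synchronized void add(int newValue) {
        this.currentValue = this.currentValue.add(newValue);
    }
}
```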
A local variable may be of a primitive type, in which case it is totally kept on the thread stack.
A local variable may also be a reference to an object. In that case the reference (the local variable) is stored on the thread stack, but the object itself is stored on the heap.
An object may contain methods and these methods may contain local variables. These local variables are also stored on the thread stack, even if the object the method belongs to is stored on the heap.
An object's member variables are stored on the heap along with the object itself. That is true both when the member variable is of a primitive type, and if it is a reference to an object.
Static class variables are also stored on the heap along with the class definition.
Only local variables are stored on the thread stack.
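A small sketch that maps these rules onto code; the class and variable names are made up for illustration.

```java
// Comments indicate where each variable lives according to the rules above.
public class MemoryLayoutExample {

    private int memberPrimitive = 1;               // heap, inside the object
    private Object memberReference = new Object(); // reference and object both on the heap
    private static String sharedStatic = "x";      // heap, along with the class definition

    public void method() {
        int localPrimitive = 42;                   // entirely on the thread stack
        Object localReference = new Object();      // reference on the stack, object on the heap
        System.out.println(localPrimitive + " " + localReference + " "
                + memberPrimitive + " " + memberReference + " " + sharedStatic);
    }
}
```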
Visibility of thread updates (writes) to shared variables (handled with the volatile keyword).
Race conditions when reading, checking, and writing shared variables (handled with a synchronized block).
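A sketch of both problems and their fixes, using a hypothetical class:

```java
// volatile handles visibility; synchronized handles read-check-write races.
public class SharedState {

    // Visibility: volatile guarantees that a write by one thread is
    // immediately visible to other threads reading the flag.
    private volatile boolean running = true;

    public void stop() {
        running = false;
    }

    public boolean isRunning() {
        return running;
    }

    // Read-check-write: volatile alone is not enough here, because another
    // thread can interleave between the read and the write. A synchronized
    // block makes the whole sequence atomic.
    private int count = 0;

    public void incrementIfBelow(int limit) {
        synchronized (this) {
            if (count < limit) {    // read and check
                count = count + 1;  // write
            }
        }
    }
}
```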
The synchronized keyword can be used to mark four different types of blocks:
Instance methods
Static methods
Code blocks inside instance methods
Code blocks inside static methods
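A sketch showing all four kinds of synchronized blocks (the class and method names are illustrative):

```java
public class SynchronizedExamples {

    // 1. Synchronized instance method: locks on 'this'.
    public synchronized void instanceMethod() {
        // one thread at a time per instance
    }

    // 2. Synchronized static method: locks on SynchronizedExamples.class.
    public static synchronized void staticMethod() {
        // one thread at a time per class
    }

    // 3. Synchronized block inside an instance method: locks on 'this'
    //    (or any other monitor object) and covers only part of the method.
    public void blockInInstanceMethod() {
        synchronized (this) {
            // critical section
        }
    }

    // 4. Synchronized block inside a static method: locks on the class object.
    public static void blockInStaticMethod() {
        synchronized (SynchronizedExamples.class) {
            // critical section
        }
    }
}
```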