Charles Weems

Lecture 21

Monday, April 13, 2020 10:55 AM

Continuing our exploration of shared memory, today we saw why it’s not as simple as is often claimed. The idea of a programming abstraction in which all memory is accessible in uniform time simply doesn’t scale, for reasons of basic physics, so we end up using algorithmic problem-solving techniques that take that variability in access time into account, along with the growth in coherence-management overhead that occurs unless we minimize our use of sharing. We also saw that parallel processing opens up a Pandora’s box of different algorithmic models, which makes it difficult to create a reusable code base.
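To make the coherence-overhead point a little more concrete, here is a small C sketch (my own illustration, not from lecture) of one common way sharing costs creep in: two threads update independent counters that happen to sit in the same cache line, so every increment forces the line to bounce between caches. Padding each counter out to its own line (the 64-byte line size is an assumption) removes the needless traffic.

```c
/* Illustrative sketch (not from lecture): false sharing.
 * Two threads increment independent counters; if the counters share a
 * cache line, every write forces the line to ping-pong between cores.
 * Padding each counter to its own (assumed 64-byte) line avoids it. */
#include <pthread.h>
#include <stdio.h>

#define LINE  64                     /* assumed cache-line size */
#define ITERS 100000000L

struct padded { long count; char pad[LINE - sizeof(long)]; };

struct padded counters[2];           /* one line per counter: no sharing  */
/* long counters[2];                    adjacent longs: false sharing     */

static void *worker(void *arg) {
    long id = (long)arg;
    for (long i = 0; i < ITERS; i++)
        counters[id].count++;        /* touches only this thread's line */
    return NULL;
}

int main(void) {
    pthread_t t[2];
    for (long id = 0; id < 2; id++)
        pthread_create(&t[id], NULL, worker, (void *)id);
    for (int id = 0; id < 2; id++)
        pthread_join(t[id], NULL);
    printf("%ld %ld\n", counters[0].count, counters[1].count);
    return 0;
}
```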

We had a discussion exercise to explore the pros and cons of full map, partial map, and chained map directory-based coherence systems. As we saw, the full map is simple but limited with respect to scaling; the partial map scales more easily, but only if the degree of sharing is low; and the chained map gives the most flexibility but is also the most complex, with the greatest overhead (again, it works better the less you use it).
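As a rough aid to picturing the three schemes, here is a small C sketch (my own, not from the slides) of how a per-block directory entry might be laid out in each case; the node count, pointer widths, and field names are illustrative assumptions, not a real machine's format.

```c
/* Rough sketch (not from the slides) of per-block directory entries
 * for the three schemes discussed; all sizes are assumptions. */
#include <stdint.h>

#define NODES 64              /* assumed number of nodes in the machine */

/* Full map: one presence bit per node -- simple, but the entry grows
 * linearly with machine size, which is what limits scaling. */
struct full_map_entry {
    uint64_t presence;        /* bit i set => node i holds a copy */
    uint8_t  dirty;           /* at most one writer when dirty */
};

/* Partial (limited-pointer) map: room for only a few sharers -- scales
 * better, but overflows (and must broadcast or invalidate) if a block
 * is widely shared, so it works best when the degree of sharing is low. */
#define PTRS 4
struct partial_map_entry {
    uint16_t sharer[PTRS];    /* node IDs of up to PTRS sharers */
    uint8_t  count;           /* number of valid sharer pointers */
    uint8_t  overflow;        /* set when more than PTRS nodes share */
};

/* Chained map: the directory keeps only the head of a list threaded
 * through the sharing caches themselves -- the most flexible, but every
 * change walks or splices the chain, so its overhead is the highest. */
struct chained_map_entry {
    uint16_t head;            /* first node on the sharing chain */
    uint8_t  valid;           /* chain is non-empty */
};
/* each sharing cache's line would also hold a 'next node' pointer */
```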

Please remember to email me your group’s discussion notes.

Slides are here

CmpSci 535