Jackey's Reflections

Do Research

Monthly Archives: October 2008

>[Repost] [Part 2] Chrome's Inter-Process Communication

>

Reposted from http://www.cnblogs.com/duguguiyu/archive/2008/10/04/1303695.html

1. The basic model of Chrome's inter-process communication

Inter-process communication is known as IPC (Inter-Process Communication), and among Chrome's sparse documentation there is one page devoted to it. Chrome has three main kinds of processes. One is the Browser process, the boss we have been respectfully calling the big brother; another is the set of Render processes, mentioned earlier; the third, not discussed yet, is the Plugin processes: every plugin in Chrome runs as its own process, a topic we will return to when we get to plugins. Render processes and Plugin processes each keep an IPC channel to the boss, and Render and Plugin processes also have channels to each other; only between multiple Render processes, or between multiple Plugin processes, is there no direct route. Everything there goes through the boss...
Communication between processes has to lean on whatever the operating system provides, and there are not many tricks available. What Chrome uses is the named pipe (Named Pipe), wrapped behind an IPC::Channel class that hides the implementation details. A Channel can work in one of two modes, Client or Server. The Server and Client sides live in two different processes and share a common pipe name; the Server creates the pipe, the Client tries to connect to it, and then both sides read and write data through their respective pipe buffers (in Chrome the data is a binary stream and the I/O is asynchronous...) to communicate...
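To make the Server/Client split concrete, here is a minimal Win32 sketch of the kind of calls an IPC::Channel-style wrapper sits on top of. This is not Chrome's actual Channel code: the pipe name below is made up, error handling is omitted, and only the byte-stream, overlapped (asynchronous) mode described above is shown.

#include <windows.h>

const wchar_t kPipeName[] = L"\\\\.\\pipe\\chrome.example";  // hypothetical name

// Server side (the process that creates the pipe).
HANDLE CreateServerPipe() {
  return CreateNamedPipeW(kPipeName,
                          PIPE_ACCESS_DUPLEX | FILE_FLAG_OVERLAPPED,
                          PIPE_TYPE_BYTE | PIPE_READMODE_BYTE,
                          1,            // one client instance
                          4096, 4096,   // output/input buffer sizes
                          5000,         // default timeout, ms
                          NULL);
}

// Client side (the process that connects to an existing pipe).
HANDLE ConnectClientPipe() {
  return CreateFileW(kPipeName, GENERIC_READ | GENERIC_WRITE,
                     0, NULL, OPEN_EXISTING, FILE_FLAG_OVERLAPPED, NULL);
}

The server would then issue ConnectNamedPipe with an OVERLAPPED structure and let the connect event be picked up by the Watcher mechanism described below, rather than blocking on it.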
Negotiating the pipe name
With sockets, the two sides agree on a port in advance; knock on the wrong door and you are beaten out of it. Similarly, for a named pipe to stretch between two processes, both processes need a shared password to get in the door, and that password is the pipe's name. In Chrome (on Windows...), named-pipe names all take the form \\.\pipe\chrome.ID, where ID must be unique, for example: process ID.instance address.random number. Typically this ID is generated by one process (usually the Browser process) and passed to the other process on its command line when that process is created, which completes the name negotiation...
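A rough illustration of that negotiation might look like the following. GenerateChannelId and the --ipc-channel-id flag are invented names for this sketch, not Chrome's real ones; the point is only the shape of the handshake.

#include <windows.h>
#include <cstdlib>
#include <sstream>
#include <string>

// Hypothetical helper: build a unique channel ID from the pieces the
// article mentions (process id, an instance address, a random number).
std::wstring GenerateChannelId(const void* instance) {
  std::wostringstream id;
  id << GetCurrentProcessId() << L"." << instance << L"." << rand();
  return id.str();  // used as \\.\pipe\chrome.<id>
}

// The Browser process would append something like
//   --ipc-channel-id=<id>        (flag name is illustrative, not Chrome's)
// to the child's command line before calling CreateProcess; the child parses
// it back out and connects to \\.\pipe\chrome.<id> as the client.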
If you are not familiar with named pipes and events on Windows and would like to be, go read the professional books, say the bible-grade Windows via C/C++ and Windows Internals, or consult the SDK. The APIs you need to know probably include: CreateNamedPipe, CreateFile, ConnectNamedPipe, WaitForMultipleObjects, WaitForSingleObject, SetEvent, and so on...
A Channel involves three key roles: Message::Sender, Channel::Listener, and MessageLoopForIO::Watcher. Channel itself derives from both Sender and Watcher, playing two parts at once, while Listener is an abstract class implemented by whoever uses the Channel. As the names suggest, Sender is the interface for sending messages and Listener holds the concrete logic for handling received messages. But what is this Watcher? If Watcher looks familiar to you, I will be moved to tears: back in the section on message loops (part one, section one...), the table showed that the IO thread's loop (remember, in Chrome, IO means network IO) also services registered Watchers. A Watcher is really simple: think of it as a pair made of an event and an object with an OnObjectSignaled method. When the message loop sees the event become signaled, it calls the corresponding OnObjectSignaled...
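In simplified form (approximate signatures, not the exact Chrome declarations), the three roles look like this:

#include <windows.h>

class Message;  // the IPC packet, described in section 3

// Message::Sender: anything you can hand a message to.
class Sender {
 public:
  virtual ~Sender() {}
  virtual bool Send(Message* msg) = 0;  // takes ownership of msg
};

// Channel::Listener: whoever wants to be told about incoming messages.
class Listener {
 public:
  virtual ~Listener() {}
  virtual void OnMessageReceived(const Message& msg) = 0;
};

// MessageLoopForIO::Watcher: an object paired with an event handle; the IO
// thread's loop calls this when the event becomes signaled.
class Watcher {
 public:
  virtual ~Watcher() {}
  virtual void OnObjectSignaled(HANDLE object) = 0;
};

Channel inherits from both Sender and Watcher: Send queues outgoing packets, OnObjectSignaled drains incoming ones, and whatever comes out is forwarded to the Listener.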

Figure 5: Chrome's IPC processing flow

A picture is worth a thousand words: the figure above captures the core of Chrome's IPC flow, with error handling and other details stripped out; for the unabridged version, read the Channel class yourself. When a message is Sent to the sending process's Channel, the Channel puts it in its outgoing message queue. If an earlier message is still in flight (the sender is blocked...), the Channel first checks whether that blockage has cleared (via a zero-timeout wait on the event...), and then serializes the queued messages and writes them into the pipe. The operating system maintains the events behind the asynchronous pipe: once the message has been copied from the sender's buffer into the receiver's buffer, the event on the receiving end is signaled. When the receiving process's message loop gets around to checking its Watchers and finds a signaled event, it calls that Watcher's OnObjectSignaled, telling the receiving process's Channel that a message has arrived! The Channel then pulls bytes off the pipe, reassembles messages, and hands them to the Listener to interpret...
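The send half of that flow, reduced to a sketch: this is a paraphrase of the logic just described, not the real Channel implementation, and it ignores error paths and partial writes.

#include <windows.h>
#include <queue>
#include <string>

// Sketch of the queued, non-blocking send path.
class ChannelSketch {
 public:
  // Called by Send(): queue the serialized bytes, then try to flush.
  void QueueAndFlush(const std::string& bytes) {
    output_queue_.push(bytes);
    ProcessOutgoingMessages();
  }

 private:
  void ProcessOutgoingMessages() {
    // If an earlier overlapped write is still in flight, poll its event with
    // a zero-timeout wait -- the "has the blockage cleared?" check above.
    if (write_pending_) {
      if (WaitForSingleObject(write_overlapped_.hEvent, 0) != WAIT_OBJECT_0)
        return;                     // still busy; try again on the next Send
      write_pending_ = false;
      output_queue_.pop();          // the in-flight message is now done
    }
    while (!output_queue_.empty()) {
      const std::string& msg = output_queue_.front();
      DWORD written = 0;
      if (!WriteFile(pipe_, msg.data(), static_cast<DWORD>(msg.size()),
                     &written, &write_overlapped_)) {
        if (GetLastError() == ERROR_IO_PENDING)
          write_pending_ = true;    // the kernel took it; completes asynchronously
        return;
      }
      output_queue_.pop();          // completed synchronously
    }
  }

  HANDLE pipe_ = INVALID_HANDLE_VALUE;   // the named pipe from the earlier sketch
  OVERLAPPED write_overlapped_ = {};     // hEvent would come from CreateEvent
  bool write_pending_ = false;
  std::queue<std::string> output_queue_;
};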
From the description above it is not hard to see the defining trait of Chrome's IPC: the message loop polls the events, rather than letting the pipe block on them. That ties IPC tightly into the multi-threading model and solves the problem with one uniform pattern. And because the checking is centralized in the message loop, threads are never casually blocked and can get on with other work. In theory this trades extra CPU time for a better experience, which has a rather capitalist flavor to it...

A gentler message loop
In fact, many of Chrome's message loops are not that overbearing; they do block on certain events or in certain situations. After all, the client machine is not Chrome's own server, and the CPU cannot all be claimed in its name...
The IO thread, for instance, blocks when no message has arrived and no event is signaled; for the details see MessagePumpForIO's WaitForWork...
But that blocking is centralized and its policy can be changed at any time, so compared with a Channel blocking directly on an event, the downtime is shorter...

2. Cross-thread and synchronous communication between processes

In Chrome, none of the low-level data structures are thread-safe, and Channel, being no deity (nor Chinese soccer...), is no exception. In each process, only one thread is allowed to operate the Channel, and that thread is called the IO thread (a sadly misleading name...). Any other thread that tries to meddle will cause serious trouble...
But sometimes (most of the time, really...) we need to talk to another process from a thread that is not the IO thread. What then? If you have read my earlier piece on the threading model you can already guess: wrap the Channel operation in a Task, post that Task to the IO thread's queue, and let the IO thread do it. Since this happens constantly, and doing it by hand every time is tedious, there is a proxy class, ChannelProxy, that does it all for you...
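The heart of what ChannelProxy does can be sketched as follows. It assumes the Sender interface from the earlier sketch, a Channel class, the Task interface (a class with a virtual Run method) from the threading-model article, and a MessageLoop-like object with a PostTask method; the real class also deals with message filters, the already-on-the-IO-thread case, and more.

class ChannelProxySketch : public Sender {
 public:
  ChannelProxySketch(MessageLoop* io_loop, Channel* channel)
      : io_loop_(io_loop), channel_(channel) {}

  // Callable from any thread: wrap the real Send in a Task and hand it to
  // the IO thread, which is the only thread allowed to touch the Channel.
  virtual bool Send(Message* msg) {
    io_loop_->PostTask(new SendTask(channel_, msg));
    return true;
  }

 private:
  class SendTask : public Task {
   public:
    SendTask(Channel* channel, Message* msg) : channel_(channel), msg_(msg) {}
    virtual void Run() { channel_->Send(msg_); }  // runs on the IO thread
   private:
    Channel* channel_;
    Message* msg_;
  };

  MessageLoop* io_loop_;
  Channel* channel_;
};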
Interface-wise, ChannelProxy looks much like Channel (otherwise it would hardly be a Proxy...): you can Send your messages through a ChannelProxy just as through a Channel, and ChannelProxy diligently does the remaining Task-wrapping work for you. More than that, ChannelProxy outdoes its parent and adds extra capabilities at this level, such as sending synchronous messages...
Strictly speaking, the class that can send synchronous messages is not ChannelProxy but its subclass, SyncChannel. At the Channel level all messages are asynchronous (what Windows calls Overlapped...); Channel itself has no notion of synchrony. To get it, SyncChannel does not reinvent the wheel; it simply adds a wait on top of Channel. After ChannelProxy's Send returns, SyncChannel blocks itself on a set of events, waiting for the reply until it arrives or the call times out. From the outside, synchronous and asynchronous look the same, but be careful in practice: sending a synchronous message on the UI thread is the kind of thing people get lynched for...
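Stripped to its essence, the synchronous send is just the asynchronous send plus a wait. A hypothetical sketch (the real SyncChannel manages several events, nested synchronous calls, and shutdown):

#include <windows.h>

// Send asynchronously, then block the calling thread until the reply event
// is signaled or the timeout expires. reply_ready_event would be signaled by
// the IO thread when the matching reply message arrives.
bool SendSynchronously(Sender* proxy, Message* msg,
                       HANDLE reply_ready_event, DWORD timeout_ms) {
  if (!proxy->Send(msg))   // queued onto the IO thread as usual
    return false;
  return WaitForSingleObject(reply_ready_event, timeout_ms) == WAIT_OBJECT_0;
}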

3. Chrome's IPC message format

After all this talk, one big piece is still missing: the message packet itself. If, under a multi-threaded model, the cost of data access comes from locks, then under a multi-process model most of the extra cost comes from packing and ferrying messages between processes. Whatever the model, once two processes are involved, packing, serializing, deserializing, and reassembling messages is unavoidable work...
In Chrome, every IPC message derives from the IPC::Message class. A message must support serialization and deserialization, and Message's base class, Pickle, is what does that job. Pickle offers a set of interfaces that accept ints, chars, and other kinds of data, but internally it makes no distinction: everything becomes one binary stream. That stream is 32-bit aligned, so even a lone bool occupies at least 32 bits. The stream also grows on demand (Pickle allocates a buffer and doubles it whenever it fills up...), so it can extend without limit. Pickle itself keeps no logical information about the stream; that is left to the layer above (more on this later...). It does, however, prepend a header that records the stream's length, and Message, in deriving from Pickle, extends that header. The complete message format is shown below:
Figure 6: Chrome's IPC message format

In the figure, the yellow part is the header, a fixed 96 bits; the green part is the body, a binary stream whose length is given by payload_size. As packets go this is pretty lean; only the routing field is wasted when the message is not a routed one. The message travels through the named pipe as a byte stream (a named pipe can carry either of two stream types, byte streams and message streams...), so payload_size + 96 bits is enough to decide whether a complete packet has been received...
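Putting the last two paragraphs together, a toy version of the header and of the 32-bit-aligned, self-growing payload could look like this; the field names and widths follow the description here rather than the exact declarations in ipc_message.h.

#include <stdint.h>
#include <string.h>
#include <vector>

// 96-bit header as described above: 32 + 32 + 16 + 16 bits.
struct MessageHeader {
  uint32_t payload_size;  // length of the byte-stream body
  int32_t  routing;       // routing id; wasted for control messages
  uint16_t type;          // the unique message ID
  uint16_t flags;
};

// A toy Pickle: everything becomes a 32-bit-aligned binary stream.
class PickleSketch {
 public:
  void WriteInt(int value)   { WriteBytes(&value, sizeof(value)); }
  void WriteBool(bool value) { WriteInt(value ? 1 : 0); }  // still 32 bits

  const std::vector<uint8_t>& payload() const { return payload_; }

 private:
  void WriteBytes(const void* data, size_t size) {
    size_t aligned = (size + 3) & ~size_t(3);  // round up to 32 bits
    size_t offset = payload_.size();
    payload_.resize(offset + aligned, 0);      // the vector grows as needed
    memcpy(&payload_[offset], data, size);
  }
  std::vector<uint8_t> payload_;
};

A complete packet on the wire is then sizeof(MessageHeader) + header.payload_size bytes, which is exactly the "96 bits + payload_size" test mentioned above.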
Logically, IPC messages come in two kinds: routed messages and control messages. A routed message is private and addressed; the system delivers it to its destination according to the routing information and nobody else gets to peek. A control message is a broadcast: anyone who cares to listen can hear it...

Message serialization
Not long ago I read the source of Google Protocol Buffers, which is used on the server side as a standard, a code generator, and a framework for the protocols machines use to talk to each other internally. Its main idea is to bake key/value information into the binary, which yields more flexible and robust binary protocols...
Chrome does not use it; it uses a plain binary stream as its serialization format. I suspect the difference in scenario explains it. On the server we care more about protocol stability and extensibility, and the number of protocols involved is large. Inside a single Chrome, though, the message format is uniform and there is no call for that kind of extensibility or flexibility, and for serialization what Chrome needs is not flexibility but leanness. So rather than reuse the wheel Protocol Buffers had already built, Chrome went to considerable trouble to roll its own purely binary message machinery...

4. Defining IPC messages

If you have written MFC programs and still flinch at that pile of macros, then bad news: Chrome's IPC message definitions will make you suffer a little more, perhaps a lot more. And if you have ever tasted the assorted miseries of template specialization and partial specialization for traits, templates for function overloading, or compile-time Tuples for variadic arguments, then, equally regrettably, you get to relive all of it in Chrome...
But first, forget macros and templates and look at what defining a message by hand requires. A standard IPC message definition looks roughly like this:

class SomeMessage : public IPC::Message
{
 public:
  enum { ID = ... };
  SomeMessage(SomeType& data)
      : IPC::Message(MSG_ROUTING_CONTROL, ID, ToString(data))
  {...}
};
The gist is this: you derive a subclass from Message (or one of its subclasses), the subclass has a unique ID value, and it takes one parameter that you must serialize. The two pain points are plain to see: how do you generate a unique ID, and how do you serialize any parameter conveniently and automatically?...
Chrome's answer to both problems is macros + templates. Chrome gives every message an ID in a fixed 16-bit format: the high 4 bits identify a Channel, the low 12 bits identify a message sub-id within it. In other words, at most 16 kinds of Channel can exist across the different processes, and each kind can define 4K messages. Chrome has already used up 8 Channel kinds (if processes A and B need to talk in both directions, Chrome treats those as two different Channel kinds with separately defined messages, so one bidirectional relationship burns two Channel kinds...), and they already feel the 16-bit ID format is too tight; some day it will presumably be widened to 32 bits. Back to the point: Chrome defines its message IDs with an enum laid out like this:

enum SomeChannel_MsgType
{
  SomeChannelStart = 5 << 12,
  SomeChannelPreStart = (5 << 12) - 1,
  Msg1,
  Msg2,
  Msg3,
  MsgN,
  SomeChannelEnd
};

This declares the message IDs of a Channel of type 5. Because the first two values are given explicitly, the remaining enumerators take consecutive values from there, so as long as each Channel type stays unique, every message ID stays unique (provided, of course, you do not blow past the message limit...). But defining the ID is not enough; you also have to define a Message subclass that uses it. That is not just tedious; worse, it violates the DRY principle: to add one message you have to write code in two places, which is simply intolerable. So Google dropped the atomic bomb of macros. Defining messages looks like this:
IPC_BEGIN_MESSAGES(PluginProcess, 3)
  IPC_MESSAGE_CONTROL2(PluginProcessMsg_CreateChannel,
                       int /* process_id */,
                       HANDLE /* renderer handle */)
  IPC_MESSAGE_CONTROL1(PluginProcessMsg_ShutdownResponse,
                       bool /* ok to shutdown */)
  IPC_MESSAGE_CONTROL1(PluginProcessMsg_PluginMessage,
                       std::vector<uint8> /* opaque data */)
  IPC_MESSAGE_CONTROL0(PluginProcessMsg_BrowserShutdown)
IPC_END_MESSAGES(PluginProcess)
These are the macros that define the PluginProcess messages in Chrome; I dug them out and pasted them here. To add a message you only add one more line like the IPC_MESSAGE_CONTROL0 entry, which says it is a control message with zero parameters. You can roughly think of it this way: IPC_BEGIN_MESSAGES opens an enum declaration, and every line in between both adds an ID to the enum and declares a message subclass. This one-macro-two-servings trick, worthy of Peking duck served two ways, lives in ipc_message_macros.h; or see the example of expanding one macro twice below...

The trick of expanding a macro multiple times
This is a trick used in Chrome: define a macro once, expand it into several pieces of code. I had never seen it before. A similar example goes like this:
First, define a macros.h that holds the macro definition:

#undef SUPER_MACRO

#if defined(FIRST_TIME)
#undef FIRST_TIME
#define SUPER_MACRO(label, type) \
  enum IDs { \
    label##__ID = 10 \
  };

#elif defined(SECOND_TIME)
#undef SECOND_TIME
#define SUPER_MACRO(label, type) \
  class TestClass \
  { \
   public: \
    enum { ID = label##__ID }; \
    TestClass(type value) : _value(value) {} \
    type _value; \
  };

#endif

As you can see, this header is re-enterable: each time it first undefs the previous definition and then decides which new definition to make. Next, you create a use_macro.h that uses the macro to define the concrete content:

#include "macros.h"

SUPER_MACRO(Test, int)

The macro-using part of this header is deliberately not wrapped in the usual #ifndef...#define include guard, precisely so that it stays re-enterable. In the main file you can then define + include several times to get multiple expansions:

#define FIRST_TIME
#include "use_macro.h"

#define SECOND_TIME
#include "use_macro.h"

#include <iostream>
#include <tchar.h>

int _tmain(int argc, _TCHAR* argv[])
{
  TestClass t(5);
  std::cout << TestClass::ID << std::endl;
  std::cout << t._value << std::endl;
  return 0;
}

And with that, you have successfully defined once and generated several pieces of code...

Besides that, once a message arrives you still have to handle it. The receiving function is OnMessageReceived on your IPC::Channel::Listener subclass, and inside it sits another pile of macros, a set that cannot help but remind you of MFC's Message Map machinery:

IPC_BEGIN_MESSAGE_MAP_EX(RenderProcessHost, msg, msg_is_ok)
  IPC_MESSAGE_HANDLER(ViewHostMsg_PageContents, OnPageContents)
  IPC_MESSAGE_HANDLER(ViewHostMsg_UpdatedCacheStats,
                      OnUpdatedCacheStats)
  IPC_MESSAGE_UNHANDLED_ERROR()
IPC_END_MESSAGE_MAP_EX()
This one is simple: expanded, it is essentially a switch over the message ID that forwards each message to the corresponding handler function. Compared with MFC's Message Map it does far less...
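You can picture the expansion as roughly the following hand-written equivalent. This is illustrative only: the real macros in ipc_message_macros.h also unpack the message parameters before calling each handler and feed errors back through msg_is_ok.

// Illustrative expansion of the message map above (not the literal output).
void DispatchViewHostMessage(RenderProcessHost* host, const IPC::Message& msg,
                             bool* msg_is_ok) {
  *msg_is_ok = true;
  switch (msg.type()) {
    case ViewHostMsg_PageContents::ID:
      host->OnPageContents(msg);        // real code first deserializes params
      break;
    case ViewHostMsg_UpdatedCacheStats::ID:
      host->OnUpdatedCacheStats(msg);
      break;
    default:
      *msg_is_ok = false;               // IPC_MESSAGE_UNHANDLED_ERROR()
      break;
  }
}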

Macros take care of declaring message classes and dispatching messages, but they still do not give you automatic serialization (automatic meaning: whatever the types and however many the parameters, they can be serialized directly, without writing extra code...). In C++, anything "automatic" -- automatic serialization, automatic type identification, automatic whatever -- is usually done with templates. All that automation really amounts to a great deal of manual labor up front plus template recursion: if the automatic serialization in .NET or Java is a mountain railway, this is a sedan chair carried by porters; either way your own legs never move and you reach the summit, but the effort expended underneath differs by a world. For the implementation techniques, see The Annotated STL Sources, or Modern C++ Design, or Chrome's ipc_message_utils.h; it is not something one or two sentences can explain...
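To give a flavor of that template machinery, here is a toy modeled on the ParamTraits idea in ipc_message_utils.h, reusing the PickleSketch from section 3's sketch; this is not the real code. One hand-written specialization per basic type, with containers recursing on their element type.

#include <vector>

// Primary template: intentionally left undefined, so unsupported types fail
// to compile instead of silently mis-serializing.
template <class T> struct ParamTraitsSketch;

template <> struct ParamTraitsSketch<int> {
  static void Write(PickleSketch* p, int v) { p->WriteInt(v); }
};

template <> struct ParamTraitsSketch<bool> {
  static void Write(PickleSketch* p, bool v) { p->WriteBool(v); }
};

// Containers recurse on their element type.
template <class E> struct ParamTraitsSketch<std::vector<E> > {
  static void Write(PickleSketch* p, const std::vector<E>& v) {
    p->WriteInt(static_cast<int>(v.size()));
    for (size_t i = 0; i < v.size(); ++i)
      ParamTraitsSketch<E>::Write(p, v[i]);
  }
};

// The generic entry point the message code calls for any parameter type.
template <class T> void WriteParam(PickleSketch* p, const T& value) {
  ParamTraitsSketch<T>::Write(p, value);
}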
In short, with macros and templates you can declare a message very simply, that message can take all sorts of parameters (a touch of hyperbole: any template-based automation has limits, and in Chrome's implementation you should keep to at most 5 parameters of basic types, STL containers and the like, which is enough for non-perverse uses...), you can call Send on a Channel, ChannelProxy, or SyncChannel to deliver it to another process, and you implement a Listener whose Message Map dispatches messages to their handlers. With that, the whole IPC edifice is in place...

Macros and templates as manual labor
Macro or template, making this machinery work means writing piles of near-identical code. To support Control messages with 0 to N parameters you write N+1 near-identical macros; to support serialization of the various basic data structures you write a dozen or so near-identical Write functions and Traits...
All this drudgery exists so that the people using it have things as simple and convenient as possible, in keeping with the DRY principle. Tying back to the designer's responsibilities discussed earlier, this is the classic responsible act of one person suffering so that thousands may be happy. Code like this is everywhere in Chrome; the Tuple kung fu alone I have already seen deployed no fewer than three times (I once built such a thing myself and nearly coughed up blood...). Such diligence is the stuff of song and tears...

>[Repost] Dissecting the Chrome Source: [Preface] && [Part 1]

>

Reposted from http://www.cnblogs.com/duguguiyu/archive/2008/10/02/1303095.html

[Preface]

Open source is a fine thing. It gives this industry, awash in industrial garbage code and toy textbook code, a little artistry and a little potential for beauty. It lets everyone, whether you come from New York, USA or Tieling, China, stand on the shoulders of giants, or failing that, at least hug a thigh...

And hugging a thigh is exactly what I am here to do. The thigh in question belongs to Chrome (the open-source project is actually called Chromium; Chrome was already an obscure enough name, and somehow its real name manages to top it...), Google's ambitious browser. Anything born with a silver spoon in its mouth gets both admired and cursed, and Chrome is no exception. Its pros and cons have been debated to death; the sugar cane has been chewed to pulp and nobody wants another bite. As the saying goes, insiders watch the craft and outsiders watch the show. Most so-called outsiders judge by how it actually feels to use, which is without doubt the best way. But there are always self-styled insiders who talk like insiders and act like outsiders: the moment they see Chrome use multiple processes they declare it a worthless piece of garbage. Come on, we are all engineers here. You know the drawbacks of multiple processes, and so does Google. They are not politicians with nothing to offer but stunts; they have faces to lose too -- would they really enjoy publishing a pile of excrement as open source for the world to mock? The merit of a technique can never be judged in the abstract: the same technique in different settings, different environments, and different implementations gives different results. Since Chrome uses so many techniques that look less than beautiful, should we not first understand why it uses them and how it uses them before opening our mouths? (No invitation needed; please seat yourself...)...
As they say, mule or horse, take it out for a walk. Google has led the Chrome donkey out in front of the world, and everyone is free to walk it. We like to call ourselves people of science precisely to keep some distance from the so-called artists: a TV talent-show judge can spout whatever nonsense he pleases and crown whomever he likes, and you can only say his artistic taste is unique. In science that does not fly: say something wrong and, at best, you are ignorant, at worst you are an academic fraud, and it ends badly. So, now that the code is all there, watch your words: look first, then speak...
I have already started walking the Chrome donkey, or rather this sturdy, well-fed one: the project weighs in at close to 2 GB. To survey a behemoth like this pore by pore from head to toe, you would cough up blood even if you did not expire, and since we are not doing a code review there is no need to work ourselves to death. Every good open-source project is like a beautiful woman; no woman is perfect in every way, and no open-source project excels at everything. Each has one or two features that thrill or intrigue you most, and that is where you focus your attention and savor the details. Reading open source is the same. For me, the attractions of Chrome are (in order...):

1. How it uses multiple processes (together with multiple threads) for concurrency, and how it solves the attendant problems, such as inter-process communication and the cost of processes;
2. As a latecomer, how extensible it is, how it balances compatibility with existing plugins, and what kind of plugin model it offers;
3. What its overall architecture looks like, and whether there are any brilliant architectural ideas in it;
4. How it implements a cross-platform UI widget system;
5. Why the legendary V8 is so fast.

Chrome is a cross-platform browser, but its Linux and Mac versions are still under development, so I am looking only at the Windows version, and all the code analysis here is based on it. Truth be told, I am a newcomer to browsers, an ignoramus about the Win API, and a Martian when it comes to concurrency; I have thrown myself into this donkey-walking business purely out of curiosity, so there will inevitably be places where I have not learned enough or have misread. Please be merciful, do point out mistakes, and if you really cannot bear to watch, take the donkey home and walk it yourself...
Rambling is hard physical labor, so from here on I will ramble less and talk about the actual problems more...
For downloading the Chrome source and setting up the build environment, see the official instructions (Windows version). I will stress only one point: follow them to the letter, especially the VS2005 patches and the Windows SDK installation, or it simply will not build...
Finally, the only non-filler content in this part: remember the following figure. It is the most essential distillation of Chrome, and if you have time, do go and read the official documentation, above all that one key article...
Figure 1: Chrome's thread and process model

[Part 1] Chrome's Multi-threading Model

0. Chrome's concurrency model

If you studied the figure above, you should now have a basic picture of Chrome's thread and process architecture. Chrome has one main process, called the Browser process: it is the boss and manages most of Chrome's day-to-day affairs. Then there are many Renderer processes, each holding its own fief and managing the display of and communication with its own group of sites (Chrome's marketing keeps saying one tab equals one process, which is not really accurate...); they ignore one another and speak only to the boss, who arbitrates among them. The channel through which they speak to the boss is called IPC (Inter-Process Communication), a mechanism Google built for talking across processes; its basic implementation is dissected later...

Chrome's process models
Google's publicity always says Chrome is one-tab-one-process. In truth that is said because it makes for easy marketing; it is basically advertising, and for the actual effect you have to look at the code. Chrome supports far more process models than the slogan suggests; see the documentation. Briefly, Chrome supports the following models:

  1. Process-per-site-instance: the default. A site you open, plus the chain of sites you open from it, all share one process.
  2. Process-per-site: sites in the same domain share a process. For example, www.google.com and www.google.com/bookmarks count as one domain (Google has its own rules for deciding this), and they land in the same process whether or not one opened the other. Enabled with the command-line flag --process-per-site.
  3. Process-per-tab: the simple one, one tab one process, regardless of how the tabs' sites relate, exactly as advertised. Enabled with --process-per-tab.
  4. Single process: the familiar one, the traditional browser model, multi-threaded but single-process. Enabled with --single-process.

Each model has its pros and cons; the official docs have their account and you will form your own. At the very least it shows that Google did not adopt multiple processes out of idiocy; the choice came out of experiments...
You can press Shift+Esc to watch the processes under each model. At least my own attempt failed (every mode looked just like the default...); the cause remains to be tracked down...

Neither the Browser process nor a Renderer process is a lone commander; each has a retinue of threads handling its business. A Renderer process typically has two: a Main thread, which keeps in touch with the boss and has a whiff of the backstage fixer about it, and a Render thread, which does the page rendering and interaction and is obviously the public face of the gang. The Browser process, being the boss, naturally keeps more underlings: besides the brain-like Main thread and the IO thread that talks to the various Renderer gangs, there are also a file thread for files, a db thread for databases, and so on (for a fuller list, see the documentation). Each minds its own duty and they pull together for the boss. Their relationships are unlike those between Renderer processes: threads within one process need a great deal of coordination, and managing the concurrency among this pile of threads is one of the places where Chrome shines most...

An aside on concurrency
Single-process, single-threaded programming is the most comfortable kind: what you see is what you get, and you only have to think in one dimension. But a programmer's world is never that kind; on many occasions we need multiple threads, multiple processes, or multiple machines to join hands and finish some job together, collectively called concurrency (unofficial definition...). As I see it, the occasions that call for concurrency fall mainly into two categories:

  1. For a better user experience. Some work is slow, such as database access, remote communication, or heavy computation; done on the one thread in the one process it hurts how the product feels, so you spin off another thread or process to handle it in the background. This works thanks to time-slicing on a single CPU or to multiple CPUs working together. On a single CPU, the total time to finish two tasks split across two workers is greater than finishing them one after the other, but because they interleave, it feels more natural to a person.
  2. To finish a job faster. The famous Map/Reduce does exactly this: it splits one big task into many small ones, hands them to many processes, and gathers the results when they are done, arriving at the final answer sooner. This only pays off with multiple CPUs; on a single CPU (a single-core, single machine...) it cannot help.

In the second case we naturally focus on partitioning the data, which lets us exploit multiple CPUs well; in the first case we are used to the single-CPU mindset and often neglect the mapping between data and behavior, so that on multi-CPU machines performance drops instead of rising...

1. Chrome's threading model

Recall how we usually use threads. In my own sufficiently barren multi-threading career it goes like this: start a thread, hand it a particular entry function, check whether that function has side effects, and if it does and multiple threads touch the data, comb through it carefully and post a lock at every suspicious spot...
Chrome's threading model takes another road: it goes out of its way to avoid locks. More precisely, Chrome's threading model confines locking to a tiny scope (only while a Task is being pushed onto a message queue...) and lets the layers above ignore locks completely (provided, of course, you follow its programming model: wrap the work in a Task and send it to the right thread to run...), which greatly simplifies development...
That said, there is nothing mysterious about the implementation (as with beauties, dressed leaves more to the imagination than undressed...): it is built on message loops. Every Chrome thread has roughly the same entry function: start a message loop (see the MessagePump classes) and wait for and execute tasks. The only difference is which message loop gets started, according to the kind of work the thread handles. A thread that handles inter-process communication (note: in Chrome such threads are all called IO threads; presumably somebody slapped the wrong side of their forehead when naming them...) runs a MessagePumpForIO, a thread that handles UI runs a MessagePumpForUI, and an ordinary thread runs a MessagePumpDefault (Windows only, Windows, Windows...). The loop classes differ in two main ways: what kinds of messages and tasks the loop must process, and how the loop itself runs (for example, spinning versus blocking on some event...). The figure below shows the full-dress version of Chrome's message loop: it handles Windows messages, handles the various Tasks (what a Task is will be revealed shortly, stay tuned...), services the registered Watchers, and then blocks on an event waiting to be woken...
Figure 2: Chrome's message loop

Of course, not every message-loop class needs to run that whole lap. Some threads simply never face that many kinds of events, and wasting effort and time on them would be unforgivable. Accordingly, the different MessagePump classes implement different subsets, as the following table shows:

                          MessagePumpDefault   MessagePumpForIO   MessagePumpForUI
Handles system messages   no                   no                 yes
Handles Tasks             yes                  yes                yes
Handles Watchers          no                   yes                no
Blocks waiting for work   yes                  yes                yes
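In code, the "full lap" these pumps share can be sketched as a small skeleton class whose virtual hooks line up with the columns of the table; this is an illustration, not the actual MessagePumpWin hierarchy.

#include <windows.h>

// A highly simplified pump skeleton (illustrative, not MessagePumpWin).
class PumpSketch {
 public:
  virtual ~PumpSketch() {}
  void Run() {
    keep_running_ = true;
    while (keep_running_) {
      bool more_work = PumpSystemMessages();  // only the UI pump does this
      more_work |= DoWork();                  // every pump runs queued Tasks
      more_work |= PumpWatchers();            // only the IO pump does this
      if (!more_work && keep_running_)
        WaitForSingleObject(wake_event_, INFINITE);  // block until woken
    }
  }
  void Quit() { keep_running_ = false; }
  void ScheduleWork() { SetEvent(wake_event_); }  // wake the blocked loop

 protected:
  // Defaults: no system messages, no watchers; subclasses override.
  virtual bool PumpSystemMessages() { return false; }
  virtual bool PumpWatchers() { return false; }
  virtual bool DoWork() { return false; }  // the MessageLoop runs Tasks here

  HANDLE wake_event_ = CreateEventW(NULL, FALSE, FALSE, NULL);
  bool keep_running_ = false;
};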

2. Tasks in Chrome

As the table shows, whatever the loop, the one thing it must process is Tasks (let us forget system messages and Watchers for now; we will mourn them later...). Strip away everything else and keep only Tasks, and you can put it this way: at the implementation level Chrome's threads are all identical; they differ only in their responsibilities, and threads with different responsibilities process different Tasks. Finally, before the tomatoes start flying, let me say what a Task is...
Viewed simply, a Task is a class with one abstract void Run() method (see the Task class...). A real piece of work derives from Task and implements Run. Each MessagePump holds a MessagePump::Delegate object (for an implementation of MessagePump::Delegate, see MessageLoop...), and that object maintains several queues of Tasks. When you want some logic of yours to run on a particular thread, you derive a Task, wrap the logic in its Run method, instantiate it, and call the target thread's PostTask to drop the object into that thread's Task queue, where it waits to be executed. I know many of you have already picked up your bricks, because this device is utterly commonplace, nothing more than a simple inversion of dependency, used to death in thread pools, Undo/Redo modules, and the like...
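The bare bones of that description, as a toy (thread safety reduced to a single lock around the queue, which is the one place Chrome's model takes a lock):

#include <windows.h>
#include <queue>

// The Task contract: one abstract Run() method.
class Task {
 public:
  virtual ~Task() {}
  virtual void Run() = 0;
};

// A toy per-thread queue: PostTask may be called from any thread;
// RunPendingTasks is called only by the owning thread's message loop.
class TaskQueueSketch {
 public:
  TaskQueueSketch() { InitializeCriticalSection(&lock_); }
  ~TaskQueueSketch() { DeleteCriticalSection(&lock_); }

  void PostTask(Task* task) {
    EnterCriticalSection(&lock_);   // the only lock in the whole scheme
    queue_.push(task);
    LeaveCriticalSection(&lock_);
  }

  void RunPendingTasks() {          // called on the owning thread
    for (;;) {
      EnterCriticalSection(&lock_);
      Task* task = queue_.empty() ? NULL : queue_.front();
      if (task) queue_.pop();
      LeaveCriticalSection(&lock_);
      if (!task) break;
      task->Run();                  // no lock held while the Task runs
      delete task;
    }
  }

 private:
  CRITICAL_SECTION lock_;
  std::queue<Task*> queue_;
};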
But, I would argue, even if every family eats dumplings at New Year, whether the dumplings are any good still depends on the cook. In Chrome the threading model is uniform and unique; it is in effect a standard, and it has to satisfy the dozens or hundreds of kinds of tasks that run on all the threads, so it must score well on both flexibility and ease of use. That is what makes designing a standard hard. To meet those needs, Chrome put real work into the underlying library:
  1. It provides a large family of template wrappers (see task.h) that free Tasks from constraints of inheritance, function names, and signatures (essentially a template-based poor man's function object; to dig deeper, go straight to the source, Modern C++ Design and its Loki library...);
  2. It derives subclasses such as CancelableTask, ReleaseTask, and DeleteTask that provide useful default behaviors;
  3. Within the message loop it splits Tasks by intent into immediate Tasks, delayed Tasks, and idle-time Tasks, covering the different scenarios;
  4. Task derives from tracked_objects::Tracked, which exists to support logging and statistics in a multi-threaded environment, so every Task is born debuggable and measurable.
Only once this whole spread is laid out do you have a complete Task model; these dumplings, clearly, take real work to make...

3. Chrome's multi-threading model

To do a good job, first sharpen your tools. Chrome spent all that effort honing its low-level framework precisely so that, facing the beast of multi-threading, the cutting goes more smoothly. Under Chrome's model, locking happens only when a Task is pushed onto a thread's queue; no other data access needs a lock. There is no free lunch, of course: to route Tasks correctly you must know which thread owns each data object, but compared with the tangled mess of locking, that burden is lighter by more than I can count...
Figure 3: The Task execution model

If you know your design patterns, you will recognize the Command pattern here: the environment that creates an action is separated from the environment that executes it, so a behavior is created on one thread and executed on another. The strength of Command is that it decouples invoking an operation from constructing it, which here sidesteps the locking problem and unifies the multi-threaded and single-threaded programming models. A further strength is that commands compose and extend well, and in Chrome this is what unifies the synchronous and asynchronous handling logic...

The Command pattern
Command is one of those patterns that looks very cool. In traditional object-oriented programming what we encapsulate is usually data; in the Command pattern we encapsulate behavior. In functional programming that is the most natural thing in the world: wrap up a function, pass it around, nothing to see here. In object-oriented programming we need inheritance, templates, function pointers, and the like to pull it off...
When we apply Command, we intend the behavior to be executed in an environment different from the one where it was born; in short, it is a wish to beget without having to raise. When we build Undo/Redo we take Commands created in any environment and put them into one queue for uniform scheduling; in Chrome, likewise, we create a Task on one thread and hand it to another thread to run. This hermit-crab way of life finds uses in a great many places...

In an ordinary multi-threaded model you have to keep straight what is synchronous and what is asynchronous. Synchronous code looks just like single-threaded code but forfeits the advantages of multiple threads (degenerating into threads running serially...). Asynchronous code is far messier to write: you register callbacks and nurse object lifetimes, and the result is nauseating to read. Under Chrome's model the distinction between the two styles simply disappears. Take this scenario: thread A needs thread B to do something and then return to A to continue. In Chrome you do it like this: create a Task and post it to B's queue; at the end of that Task's Run method, create another Task and post it back to A's queue for A to execute. Synchronous or asynchronous, it is all one: Tasks passed back and forth. It would be hard not to know how...
Figure 4: One of Chrome's solutions for asynchronous execution
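Using the toy Task and TaskQueueSketch from the previous section, the A-to-B-and-back pattern in figure 4 comes down to two chained Tasks; the class names here are invented just to show the shape.

// Step 2: posted back to thread A's queue by the work on thread B.
class ReplyOnA : public Task {
 public:
  explicit ReplyOnA(int result) : result_(result) {}
  virtual void Run() { /* continue on thread A using result_ */ }
 private:
  int result_;
};

// Step 1: posted from thread A to thread B's queue.
class WorkOnB : public Task {
 public:
  explicit WorkOnB(TaskQueueSketch* a_queue) : a_queue_(a_queue) {}
  virtual void Run() {
    int result = 42;                           // ... do B-thread work ...
    a_queue_->PostTask(new ReplyOnA(result));  // hop back to thread A
  }
 private:
  TaskQueueSketch* a_queue_;
};

// On thread A:  b_queue->PostTask(new WorkOnB(a_queue));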

4. Strengths and weaknesses of Chrome's multi-threading model

I keep saying Chrome avoids locks, so what exactly is wrong with locks? What heinous crime did they commit to become so universally loathed that everyone wants them dead on sight? In chapter 24 of Beautiful Code, "Beautiful Concurrency", Simon Peyton Jones, one of the designers of Haskell, sums up the difficulties of using locks. Copying my lines, they are:
  1. Taking too few locks, so that two threads end up modifying the same variable at once;
  2. Taking too many locks, which at best hinders concurrency and at worst causes deadlock;
  3. Taking the wrong lock: the connection between a lock and the data it protects exists only in the programmer's head, so this happens all too easily;
  4. Taking locks in the wrong order: maintaining a lock ordering is difficult and error-prone;
  5. Error recovery;
  6. Lost wake-ups and erroneous retries;
  7. And the most fundamental flaw: locks and condition variables do not support modular programming. In a transfer, account A is debited 100 yuan and account B credited 100 yuan; even if each action is individually protected by a lock and correct on its own, you cannot simply string the two together into a correct transfer. You must expose their locks and redesign. Two perfectly good functions that cannot be composed: that is the great sorrow of locks.
With these flaws spelled out, the virtue of Chrome's model becomes clear. It fixes the most fundamental flaw of locks: it supports modular programming. All you maintain is the mapping between objects and threads, a far tidier household than the shambles locks leave behind. For the programmer, the burden drops in an instant from Mount Tai to a feather...
The main difficulty of Chrome's model lies in designing the relationship between threads and data: you must partition responsibilities among threads well. If the data owned by one thread ends up attracting the lion's share of the Tasks, the model degenerates from multi-threaded to effectively single-threaded, and the lock on that thread's Task queue becomes a serious bottleneck...

The designer's responsibility
Whether a low-level framework design succeeds, whether its designer is competent, has, I have always felt, a very simple yardstick. You do not need to count how many impressive techniques the designer used; you only need to ask whether the design creates difficulties for the other developers. A brilliant design concentrates all the difficulty at the bottom and sorts it out there, so that the rest of the team could be replaced with idiots and things would still work. A stupid design is one where the designer fiddles for ages only to hand the other developers a 250-item list of cautions and then grandly announce: develop according to this manual and nothing will go wrong...

Fundamentally, Chrome's threading model addresses the user-experience side of concurrency rather than the cooperative-work side (see my "aside on concurrency" above). Unlike Map/Reduce, its focus is not on partitioning data and execution steps but on the correspondence between threads and data, which matches the environment a browser works in. A design always depends on its surroundings; after all, the client side does not face the server side's massive concurrent workloads, it just needs to improve the user experience as much as it can. Seen from that angle, Chrome's multi-threading model at least looks beautiful...

>Even when fixing his wife's computer, Mark Russinovich does it differently from ordinary people

>

The Case of the Slooooow System

A few weeks ago my wife complained that her Vista desktop was not responding to her typing or mouse clicks. Given the importance of the customer, I immediately sat down at the system to troubleshoot. It wasn’t completely hung, but extremely sluggish. For example, the mouse moved and when I clicked on the start button the start menu opened after about 30 seconds. I suspected that something was hogging the CPU and likely could have resolved the problem simply by logging off or rebooting, but knew that if I didn’t determine the root cause and address it, she’d likely be calling on my technical support services again in the near future. In any case, stooping to that kind of troubleshooting hack is beneath my dignity. I therefore set out to investigate.

My first step was to run Process Explorer to see which process was using the CPU. After a few minutes Process Explorer finally appeared and showed that not one, but two processes were involved, each consuming 50% of the CPU: Iexplore.exe and Dllhost.exe. Iexplore is Internet Explorer (IE) and I suspected that IE itself wasn’t the problem, but that it was a browser helper object (BHO), ActiveX control, or some other plugin loaded into IE. Similarly, Dllhost.exe is the host process for out-of-process COM server DLLs, so it was probably not at fault, but the COM server loaded into it. Both required digging deeper and I decided to tackle IE first.

In order to try and get some CPU headroom in which to operate, I suspended the Dllhost process by selecting it in Process Explorer, right-clicking to open the process context menu, and selecting the Suspend entry:

image

That put the Dllhost process to sleep and, as I expected, that freed up 50% of the CPU. That’s because the computer was a dual-core system and so to consume 100% of the available CPU cycles a process would have to have two threads, each hogging one of the cores. Most bugs I’ve seen that result in the CPU being pegged are caused by a single thread.

Processes don’t execute code, threads do, so I needed to look inside the IE process to see what thread or threads were running. I double-clicked on Iexplore.exe in Process Explorer to open its process properties dialog and switched to the Threads page. Several threads were running, but one was dominating the CPU:

image

From past experience I knew that Ieframe.dll was part of IE, but to be sure I clicked on the modules button on the stack dialog and switched to the Details page of the resulting Shell properties dialog:

image

The description didn’t give me a clue as the thread’s specific purpose, so I moved to the second clue about the thread, its start function. Because I had configured Process Explorer to retrieve symbols for Windows images from the Microsoft symbol server in Options->Configure Symbols, Process Explorer showed the name of the function where each thread began executing. Sometimes the DLL or function where a thread starts executing is enough to identify the thread’s purpose or the software causing a problem. In this case, the thread began in a function named CTablWindow::_TabWindowThreadProc. The function name hints that it’s the one in which the main thread of a tab starts running, but that still wasn’t enough to tell me why the thread was running so much; I needed to dig even deeper and look inside the thread to see where it was executing.

To look at what the thread was up to, I double-clicked on it in the Threads list to open the Thread Stack dialog, which shows the functions on the thread's stack. A stack is essentially an execution history, where each function listed called the one above it on the list, and the function at the top of the list is the one most recently executed by the thread at the time Process Explorer looks at the stack. I scrolled through the list, looking for frames that referenced 3rd-party DLLs or Microsoft IE plugins, since they would be far more likely to have a bug than IE's own code. Sure enough, I found frames pointing at a popular 3rd-party ActiveX control, Adobe Flash:

image

Just to be sure that I hadn’t happened to catch Flash running when a different component was using most of the CPU time, I closed and reopened the stack dialog several times, but all of them pointed at Flash.

The first thing I do when I suspect that some software is causing a problem is to check the vendor’s web site to make sure that I have the latest version. I opened the Process Explorer DLL view and looked at Flash.ocx’s version, went to Adobe’s site and looked at the version of the current Flash download, and they were the same.

I was at a dead end. I couldn't know for sure if Flash had a bug or, more likely, there was a Flash application that had a bug, nor could I be sure that the problem wouldn't recur. I tried to determine which site was hosting the Flash content by closing tabs one by one, but when I had closed them all the thread was still running.

At this point the only options I had were to uninstall Flash and leave my wife with a degraded web experience, or terminate IE to stop the current CPU usage and hope that it wouldn't happen again. I chose the latter and the case remains open. Since investigating this I've seen the same Flash behavior again on my wife's system and on my own, so have been vigilantly watching the Adobe site for a new version just in case it's due to a bug in Flash itself. I was disappointed that there was no actionable result of the investigation, but at least I knew what had caused the CPU usage.

I now turned my attention to the Dllhost problem with the hope that I'd meet with better success. Process Explorer lists in a tooltip the component or components loaded into hosting processes like Svchost.exe (the Windows service host process), Rundll32 (the Control Panel applet hosting process), Taskeng.exe (the scheduled task hosting process on Vista and Server 2008), and Dllhost.exe. I moved the mouse over Dllhost.exe to see what COM server it was running:

image

It was running the Thumbnail Cache COM server, whose job it is to create Explorer thumbnails for image and media files. It is part of Windows, so once again I had to look inside the process for more clues. I resumed the Dllhost process I had suspended earlier and opened the process properties threads page:

image

The thread consuming the most CPU in this case started in Quartz.dll’s ObjectThread function. I looked at its properties and saw that it was another Windows DLL, the DirectShow Runtime, with a generic function name:

image

Next, I double-clicked to look at the thread stack:

image

The first few frames were in User32.dll and Ntdll.dll, core Windows system DLLs, but frames 4-7 were in Sonicmp4demux.ax (".ax" is an extension commonly used for DirectShow filters), a 3rd-party component. The function names for those frames were the same and didn't make sense because the Microsoft symbol server only stores symbols for software included in Windows. Several more stack snapshots confirmed that it was the code causing the CPU usage.

Now that I had my suspect, the next step was to check for a newer version. But first I had to figure out what software the DLL came with, which was harder than it seemed. I opened the DLL view to take a closer look at the version information, but the description didn’t reveal anything:

image

There were no folders in the Start menu or items in the Add/Remove Programs list with Sonic in the name. I Windows-Live-searched (I expect that word to be added to Webster's any day now) for Sonic and found that it's part of Roxio's CD and DVD authoring software suites. I looked in the start menu and sure enough, found a Roxio folder:

image

I ran the Roxio software to check its version number and discovered that the Creator application includes a built-in facility to check for updates. I ran it, but it came up empty:

image

I checked the Roxio web site just to be sure and it turned out there was a newer version that the built-in updater hadn’t offered, perhaps because the update, according to the page, didn’t offer anything new:

image

I downloaded it anyway (all 640MB of it!) and waited the 15 or so minutes for it to install. Then I checked the version information of Sonicmp4demux.ax to see if it was newer, but its version number, 1.4.402.60802, was the same as the one I’d seen in the DLL view and the file was two years old:

image

I could have uninstalled the software, which would ensure that the problem wouldn’t return, but I wanted to keep Roxio for its DVD authoring functionality. I didn’t care if I didn’t get thumbnails for Roxio-specific image formats – I wasn’t even sure there were any I’d ever see in Explorer – so I set out to see if I could disable just the Sonic demultiplexer. I could have searched the Registry for the DLL name, which is surely where it was registered, but that’s a brute-force approach and if there were indirect or multiple references I could easily end up disabling more than just its thumbnail generation and possibly breaking something in Windows.

Process Monitor was the perfect tool for the job. Because I didn't know when the problem might reoccur – it might take days to reproduce – I didn't want to just run it and let it consume all available virtual memory or disk space, so I set the History Depth in the Options menu to have Process Monitor retain only the most recent 1 million events:

image

I also set an Include filter for paths matching C:\Windows\System32\Dllhost.exe, minimized it, and let my wife have the system back.

The next day I came home from work, sat down at the computer and saw from Process Explorer that Dllhost.exe was back at it, consuming 50% of the CPU. I suspect that because it’s a dual-core system, the problem had been showing up regularly, but my wife hadn’t noticed it because the remaining CPU capacity was enough to mask it (another good reason to buy multi-core processors!). I brought Process Monitor to the foreground and noted it had seen 114,000 Dllhost operations, which was obviously way too many to scan through individually. I searched for “sonicmp4” and found a reference in a Registry query near the end of the trace:

image

The query is of a COM object registration for the demultiplexer. Because the COM object is a 3rd-party DLL, I was certain that that COM Class ID (CLSID) isn’t hard-coded into Windows, so I went back to the first entry in the trace and searched for “A7DD215”, the first few characters of the CLSID. The search found a match a few thousand operations earlier:

image

The CLSID was in the name of a Registry key under another COM object registration. I Windows-Live-searched (that just rolls off the tongue, doesn’t it?) for the parent CLSID and found this KB article that explains that the registry key is where DirectShow filters register: http://msdn.microsoft.com/en-us/library/ms787560(VS.85).aspx I took a look at the stack for the particular query to confirm that’s the reason Dllhost was reading from there:

image

I was now confident that I could simply rename the Sonic filter registration key to prevent its use. I never delete registry keys when performing this kind of troubleshooting just in case the change disables important functionality or somehow breaks something else. I had seen from the traces that the thumbnail cache generator had come across an AVI file that caused it to load the Sonic demultiplexer, a format Windows is obviously able to handle on its own, so I was pretty sure things would continue to work. After terminating the Dllhost and making the change, I browsed to the same folder, deleted the thumbnails, and confirmed that there was no reduced functionality as far as I could tell. I then used Roxio to successfully burn a DVD with a number of AVI files. This case was closed.

My wife’s system was now usable again, and though I wasn’t able to close the Flash-related part of the case, at least I knew the cause and could keep an eye out for updates. More importantly, by solving the Dllhost part of the case, even if Flash went crazy again, her system would still be usable and she wouldn’t be filing a critical support incident for it with me – thanks to Process Explorer and Process Monitor.

Published Wednesday, September 24, 2008 11:08 AM by markrussinovich

>PolicyKit

>

Ref[http://hal.freedesktop.org/docs/PolicyKit/index.html]

Introduction

About

PolicyKit is an application-level toolkit for defining and handling the policy that allows unprivileged processes to speak to privileged processes: It is a framework for centralizing the decision making process with respect to granting access to privileged operations for unprivileged applications. PolicyKit is specifically targeting applications in rich desktop environments on multi-user UNIX-like operating systems. It does not imply or rely on any exotic kernel features.

History and Prior Art

Traditionally UNIX-like operating systems have a clear distinction between ordinary unprivileged users and the almighty and powerful super user 'root'. However, in order for a user to access and configure hardware, additional privileges and rights are needed. Hitherto, this has been done in a number of often OS-specific ways. For example, Red Hat based systems usually grant access to devices to a user if, and only if, the user is logged in at a local console. In contrast, Debian-based systems often rely on group membership, e.g. users in the 'cdrom' group can access optical drives, users in the 'plugdev' group can mount removable media and so on.

In addition, access was not only granted to devices; Red Hat-based systems, for example, provide a mechanism to allow a user at a local system to run certain applications (such as the system-config-* family) as the super user provided they could authenticate as the super user (typically by entering the root password using a graphical utility). Other distributions rely on sudo (with various graphical frontends) to provide similar functionality. Neither the pam-console nor the sudo approach requires applications to be modified.

Finally, some classes of software (such as HAL, NetworkManager and gnome-system-tools) utilize IPC mechanisms (typically D-Bus) to provide a very narrow and well-defined subset of privileged operations to unprivileged desktop applications. The mechanism used to decide which users are granted or denied access varies from case to case.

Defining the Problem

There are a couple of problems with the mechanisms described in the section called "History and Prior Art".

  • Mechanisms are coarse-grained: either you're at the console or you're not (pam_console); either you're a member of a group or you're not (Debian). There is no easy way to specify that only a subset of privileged operations should be available for a given user (e.g. it's hard to express "it's fine to mount removable media; it's not fine to mount fixed media; it's not fine to change the timezone" in a coherent way).

  • The way most people use pam-console and sudo is fundamentally broken. Full-fledged GTK+ or Qt applications run as the super user, which means that millions of lines of code (including code such as image loaders that historically have lots of security problems) run privileged. This is in direct violation of the well-known "least privilege" principle. In addition, such applications often look out of place because they now read per-user settings from root's home directory.

  • UNIX group membership has always been problematic; if a user is a member of a group once, he can always become a member of the group again (copy /bin/bash to $HOME, chown it to the group, set the setgid bit, done).

  • It is difficult for upstream projects (such as GNOME or KDE) to implement features that require administrative privileges, because most downstream consumers (e.g. operating systems) have different ways of implementing access control. As a result, most of these features are punted to OS distributors, who each have their own code for doing the same thing, e.g. setting the date/timezone; and there is no way for file sharing applications (such as gnome-user-share, Banshee, Rhythmbox) to punch a hole in the firewall.

  • Without a centralized framework, access control configuration is often scattered throughout the system which makes it hard for system administrators to grasp how to configure the system. There’s literally a bunch of different configuration files all with different formats and semantics.