Skull Canyon NUC Teardown

Preface

With some time to kill, I took apart my Skull Canyon NUC, and it turned out to be surprisingly easy. The screw locations are circled in red in the photos.

Remove the top cover first

Top cover

Then remove the bottom cover

Bottom cover

Remove the screws inside the top cover

Inside the top cover

Remove the screws inside the bottom cover

Inside the bottom cover

Push the internals out. Press down slightly near the port area and the inner shell comes right out.

Internals

Take out the circuit board, being careful with the two antenna wires.

Circuit board

Done

The build quality really does look good. I took the opportunity to clean the dust out of the fan; it runs much quieter now that it is back together.

Flashing Merlin Firmware on the K2P (B1 Revision)

Preface

Since my K2 went home with me, I picked up a K2P for the office, mainly for its gigabit Ethernet ports. The office network is gigabit, so accessing shared storage through the K2P is much faster and more convenient.

I ordered it in the morning and it arrived around noon the next day. The unboxing was the familiar routine. It does look much better than the K2: silver version, all-metal finish, quite slick. There are plenty of photos online, so I won't post mine.

Overview

The first thing to do on arrival was, as always, to flash it, since the stock firmware has the well-known problems. Then I saw that my unit is the B1 revision. Just my luck.

The K2P shipped in two hardware revisions, A1 and B1 (strictly speaking three, but A2 and A1 differ only in capacitors and shielding and share the same CPU, so I won't split them). The A1 uses a MediaTek MT7621A processor, while the B1 uses a Broadcom BCM47189. Opinions online differ on the performance gap, but in my testing gigabit runs at full speed either way, so I stopped worrying. It was free, after all; what more can you ask?

If you have an A1, congratulations: there are plenty of foolproof guides online and the process is very simple. The rough flow is just flash Breed -> flash firmware -> enjoy. There are even several bootloaders to choose from besides Breed, and a wide choice of firmware. I recommend 荒野无灯's padavan; my K2 runs padavan and it has been solid. Tinkerers who prefer LEDE and the like can of course flash those instead.

My B1 is in a much sorrier state (even the vendor never released a firmware build for the B1). Because it is a Broadcom platform, Breed is out of the question. At least one Merlin firmware is available, though, and a small hole in the CFE lets us open telnet for some command-line work, which is barely enough. Here's hoping more firmware and bootloader ports appear.

Flashing (B1 only)

Preparation (enable Telnet)

  1. First enter the stock CFE recovery page and enable telnet, so we can back up the stock firmware (in case Merlin doesn't work out or the stock firmware is ever needed again). The usual trick: hold reset while powering on, and after about 10s the CFE page is reachable at 192.168.2.1. (A side gripe: PHICOMM routers default to the 192.168.2.x range, of all things.)

  2. Download the modified firmware from http://pan.baidu.com/s/1boIHBXH.

  3. Start a tftp server on your computer, unpack the firmware into the tftp root directory, then in the CFE web page request

    http://192.168.2.1/do.htm?cmd=flash+-noheader+<your-PC-IP>:<firmware-name>

The firmware name defaults to k2p_bcm_v10d.bin+flash0.trx, and I set my PC's IP to 192.168.2.2.

  4. A few minutes later the flash is done. You can watch with ping 192.168.2.1: it stops responding during the flash and comes back when it finishes. Then reboot.

Backing up the stock firmware

  1. telnet into the router.

  2. cat /dev/mtd0 /dev/mtd1 /dev/mtd3 /dev/mtd4 /dev/mtd5 /dev/mtd6 /dev/mtd7 > /tmp/all.bin

  3. mount --bind /tmp/all.bin /www/web-static/fonts/icofont.eot

  4. Download the firmware from http://192.168.2.1/web-static/fonts/icofont.eot.
    After downloading, rename icofont.eot to all.bin and confirm the file is 16777216 bytes.
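The size check in the last step is worth scripting rather than eyeballing. A minimal sketch (the file name all.bin and the 16777216-byte size come from the steps above; the function name is my own):

```python
import os

EXPECTED_SIZE = 16777216  # 16 MiB: the full flash, per the backup step above


def backup_looks_complete(path):
    """Return True if the dumped firmware image exists and matches the flash size."""
    return os.path.isfile(path) and os.path.getsize(path) == EXPECTED_SIZE
```

If this returns False, redo the cat/mount/download steps before flashing anything.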

Flashing

  1. Download the firmware from http://pan.baidu.com/s/1boIHBXH.
  2. As before, place the firmware in the tftp server root directory and flash with
    http://192.168.2.1/do.htm?cmd=flash+-noheader+<your-PC-IP>:<firmware-name>
    (here the firmware name is K2P_Merlin_V10d.trx+flash0.trx)
  3. ping until the flash completes, clear the NVRAM with http://192.168.2.1/do.htm?cmd=nvram+erase, then reboot.
  4. Enjoy the Merlin firmware.

Restoring the MAC addresses

Merlin's NVRAM layout differs from the stock firmware's, so the MAC addresses are lost and must be set by hand. After flashing, enable telnet or ssh in the web UI under "Administration" -> "System"; the telnet/ssh login and password are your web login and password.

ssh in and run:

Set the WAN MAC
nvram set wan0_hwaddr=<router-MAC>
Set the LAN MAC
nvram set lan_hwaddr=<router-MAC>
nvram set et0macaddr=<router-MAC>
Set the 2.4 GHz MAC
nvram set w1_hwaddr=<router-MAC>
nvram set wl0_hwaddr=<router-MAC>
nvram set 0:macaddr=<router-MAC>
Set the 5 GHz MAC
nvram set wl1_hwaddr=<router-MAC+1>
nvram set sb/1/macaddr=<router-MAC+1>
Save the settings
nvram commit

And that's it.
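The "+1" above means the router's MAC address incremented by one. If you don't want to do the hex arithmetic by hand, here is a small illustrative helper (my own, not part of any router tooling) that computes it:

```python
def mac_plus_one(mac):
    """Increment a colon-separated MAC address by one (wraps at ff:ff:ff:ff:ff:ff)."""
    value = int(mac.replace(":", ""), 16)
    value = (value + 1) % (1 << 48)  # keep it within 48 bits
    raw = format(value, "012x")
    return ":".join(raw[i:i + 2] for i in range(0, 12, 2))
```

For example, mac_plus_one("8c:ab:8e:00:00:ff") gives "8c:ab:8e:00:01:00", which is the value to use for wl1_hwaddr and sb/1/macaddr.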

Afterword

After using it for a while, everything works: the features I need (you know the ones) are all there, dual WAN is available, and gigabit still runs at full speed. Not bad at all.

Publishing Hexo Posts Automatically with Travis CI

Ever since switching to the Hexo static blog I've grown lazier and lazier: too lazy to configure a server environment, so I published straight to GitHub Pages; then, because GitHub Pages didn't support HTTPS and a CDN for my setup, I wrapped Netlify around it. Lately I can't even be bothered to keep a local Hexo environment, let alone a server. So the plan: a GitHub repo plus Travis CI to build the Hexo static site for me.

The idea is to put the raw post .md files and the theme into one repository and write a .travis.yml. Each push after writing a post triggers a Travis CI build; when the build finishes, the public folder is pushed to the GitHub Pages repository, and that push triggers Netlify to pull and publish.

First, generate a Personal access token in GitHub's profile settings for Travis CI's post-build push, and configure that token in Travis CI's Environment Variables.

Next, write the .travis.yml:

language: node_js
node_js: stable

# S: Build Lifecycle
install:
  - npm install

script:
  - hexo g

after_script:
  - cd ./public
  - git init
  - git config user.name "TMs"
  - git config user.email "tms@live.cn"
  - git add .
  - git commit -m "Update"
  - git push --force --quiet "https://${TOKEN}@github.com/imtms/imtms.github.io.git" master:master
# E: Build Lifecycle

branches:
  only:
    - blog-source

This sets the build environment to Node.js, runs npm install for dependencies, then hexo g to compile the posts; afterwards it changes into the public folder and pushes it to the GitHub Pages repository. Only the blog-source branch triggers a build. TOKEN is picked up automatically from the environment variable holding the token generated in GitHub settings above.

With everything in place, enable automatic builds for blog-source in Travis CI, push once, and watch the result.

I also added the badge of faith so the build status is always visible: Build Status

Now blogging needs no server, no database, and not even a local Node.js environment for Hexo. I can write on any machine that has git; worst case, GitHub's online editor will do. Thanks, Travis CI; thanks, GitHub; thanks, Netlify.

Explaining Alcohol Flush from My Own Genetic Data

After more than a month of waiting, my genetic data finally arrived yesterday. I read through some of the basic analysis reports first, and today I imported the raw data, a full 600,000 SNP sites, into a database. Now let's use the raw data to look at one of my traits.

Alcohol metabolism in the body is a relatively simple pathway. Ethanol is first converted to acetaldehyde by the ethanol-inducible liver cytochrome enzyme and by alcohol dehydrogenase; acetaldehyde is then converted to acetic acid by aldehyde dehydrogenase, and finally broken down into water and carbon dioxide.

Acetaldehyde is the real culprit in alcohol toxicity and is highly damaging to the organs, whereas acetic acid, the main component of vinegar, is essentially harmless to the body.

So the enzymes chiefly involved in alcohol metabolism are the ethanol-inducible liver cytochrome enzyme, alcohol dehydrogenase (ADH), and aldehyde dehydrogenase (ALDH), and the corresponding controlling genes are CYP2E1, ADH1B, ADH1C, and ALDH2.

Consulting SNPedia:

On ADH

A SNP in rs1229984 encodes a form of the alcohol dehydrogenase ADH1B gene that significantly reduces the clearance rate of alcohol from the liver. This SNP is also known as Arg48His, with the (G) allele corresponding to the Arg and the (A) to the His.
Known in the literature as ADH2*2 or sometimes ADH1B*2, the allele with increased activity (meaning more rapid oxidation of ethanol to acetaldehyde) is His48, encoded by rs1229984(A). Individuals with one or especially two ADH2*2 alleles, ie genotypes rs1229984(A;G) or rs1229984(A;A), are more likely to find drinking unpleasant and have a somewhat reduced risk for alcoholism.
A study of over 3,800 cases of "upper aerodigestive" cancers (mouth/throat, voice box, and esophageal cancers) concluded that the rs1229984(A) allele (in dbSNP orientation) has a protective effect. Carriers of this allele had a 0.56x (decreased; p = 10^-10) risk of having one of these cancer types.

In short: the ADH1B SNP rs1229984 (G>A) causes individual differences in how fast ethanol is converted to acetaldehyde. Carriers of rs1229984(A) convert ethanol to acetaldehyde faster, and (A;A) faster still. rs1229984(A) also has a protective effect against upper aerodigestive cancers (esophageal, laryngeal, and so on), lowering the risk of those cancers.

On ALDH

rs671 is a classic SNP, well known in a sense through the phenomena known as the “alcohol flush”, also known as the “Asian Flush” or “Asian blush”, in which certain individuals, often of Asian descent, have their face, neck and sometimes shoulders turn red after drinking alcohol.
The rs671(A) allele of the ALDH2 gene is the culprit, in that it encodes a form of the aldehyde dehydrogenase 2 protein that is defective at metabolizing alcohol. This allele is known as the ALDH*2 form, and individuals possessing either one or two copies of it show alcohol-related sensitivity responses including facial flushing, and severe hangovers (and hence they are usually not regular drinkers). Perhaps not surprisingly they appear to suffer less from alcoholism and alcohol-related liver disease.

In short: the ALDH2 SNP rs671 (G>A) causes individual differences in how fast acetaldehyde is converted to acetic acid. Carriers of rs671(A) convert acetaldehyde to acetic acid more slowly, so acetaldehyde accumulates in the body.

Looking up my own genetic data:

rs1229984 is AA
rs671 is AG

Putting it together: my genotype means high ADH activity, so ethanol is rapidly converted to acetaldehyde, but slow acetaldehyde-to-acetic-acid metabolism, so acetaldehyde accumulates. That shows up as a red face (and body) when drinking and a tendency toward nausea and vomiting, all consequences of acetaldehyde buildup.
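The reasoning above boils down to a small lookup. Here is an illustrative sketch (my own helper, not part of any genotyping toolkit; the rules simply encode the SNPedia interpretations quoted earlier):

```python
def alcohol_flush_profile(rs1229984, rs671):
    """Interpret the two SNPs discussed above.

    rs1229984 (ADH1B): each A allele speeds ethanol -> acetaldehyde.
    rs671 (ALDH2): each A allele slows acetaldehyde -> acetic acid.
    Genotypes are unordered two-letter strings such as "AG".
    """
    fast_adh = rs1229984.count("A") >= 1   # faster acetaldehyde production
    slow_aldh = rs671.count("A") >= 1      # slower acetaldehyde clearance
    if slow_aldh:
        # Acetaldehyde accumulates; worse if it is also produced quickly.
        return "flush likely" if fast_adh else "flush possible"
    return "flush unlikely"
```

With my genotypes (rs1229984 AA, rs671 AG) this reports "flush likely", matching the conclusion above.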

The TCP Delayed ACK Mechanism on Linux

This post started when Master Zhou asked me this morning why a TCP teardown that should take four segments showed only three in a packet capture. With the client initiating the FIN, the capture was missing the server's ACK of the client's FIN; the server sent a FIN+ACK directly instead.

At that point I didn't yet know this was caused by Linux's delayed ACK mechanism, so I started digging into the phenomenon.

First stop: section 3.5 of RFC 793, the document that defines TCP (https://tools.ietf.org/html/rfc793), which shows:

     TCP A                                                TCP B

  1.  ESTABLISHED                                          ESTABLISHED

  2.  (Close)
      FIN-WAIT-1  --> <SEQ=100><ACK=300><CTL=FIN,ACK>  --> CLOSE-WAIT

  3.  FIN-WAIT-2  <-- <SEQ=300><ACK=101><CTL=ACK>      <-- CLOSE-WAIT

  4.                                                       (Close)
      TIME-WAIT   <-- <SEQ=300><ACK=101><CTL=FIN,ACK>  <-- LAST-ACK

  5.  TIME-WAIT   --> <SEQ=101><ACK=301><CTL=ACK>      --> CLOSED

  6.  (2 MSL)
      CLOSED

                         Normal Close Sequence

                               Figure 13.



      TCP A                                                TCP B

  1.  ESTABLISHED                                          ESTABLISHED

  2.  (Close)                                              (Close)
      FIN-WAIT-1  --> <SEQ=100><ACK=300><CTL=FIN,ACK>  ... FIN-WAIT-1
                  <-- <SEQ=300><ACK=100><CTL=FIN,ACK>  <--
                  ... <SEQ=100><ACK=300><CTL=FIN,ACK>  -->

  3.  CLOSING     --> <SEQ=101><ACK=301><CTL=ACK>      ... CLOSING
                  <-- <SEQ=301><ACK=101><CTL=ACK>      <--
                  ... <SEQ=101><ACK=301><CTL=ACK>      -->

  4.  TIME-WAIT                                            TIME-WAIT
      (2 MSL)                                              (2 MSL)
      CLOSED                                               CLOSED

                      Simultaneous Close Sequence

Clearly, whether one side initiates the close or both sides close simultaneously, four segments are exchanged.

So the answer had to come from source code. Since Master Zhou's capture was of HTTP traffic, I started on the server side with the nginx source.
nginx/src/os/unix/ngx_socket.h says:

ioctl(FIONBIO) sets a non-blocking mode with the single syscall
while fcntl(F_SETFL, O_NONBLOCK) needs to learn the current state
using fcntl(F_GETFL).
ioctl() and fcntl() are syscalls at least in FreeBSD 2.x, Linux 2.2
and Solaris 7.
ioctl() in Linux 2.4 and 2.6 uses BKL, however, fcntl(F_SETFL) uses it too.

nginx manages TCP connections through system calls, so this is not nginx's fault. On to the Linux kernel source, then, to see how the system manages TCP.

First stop is linux/net/ipv4/tcp.c, to see how a connection's state switches when the socket is closed.
Next to the tcp_set_state function sits new_state (used by tcp_close_state), Linux's table of TCP state transitions for closing a socket:

static const unsigned char new_state[16] = {
  /* current state:        new state:      action:    */
  [0 /* (Invalid) */]	= TCP_CLOSE,
  [TCP_ESTABLISHED]	= TCP_FIN_WAIT1 | TCP_ACTION_FIN,
  [TCP_SYN_SENT]	= TCP_CLOSE,
  [TCP_SYN_RECV]	= TCP_FIN_WAIT1 | TCP_ACTION_FIN,
  [TCP_FIN_WAIT1]	= TCP_FIN_WAIT1,
  [TCP_FIN_WAIT2]	= TCP_FIN_WAIT2,
  [TCP_TIME_WAIT]	= TCP_CLOSE,
  [TCP_CLOSE]		= TCP_CLOSE,
  [TCP_CLOSE_WAIT]	= TCP_LAST_ACK  | TCP_ACTION_FIN,
  [TCP_LAST_ACK]	= TCP_LAST_ACK,
  [TCP_LISTEN]		= TCP_CLOSE,
  [TCP_CLOSING]		= TCP_CLOSING,
  [TCP_NEW_SYN_RECV]	= TCP_CLOSE,	/* should not happen ! */
};
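The table reads as a map from current state to (next state, action). A sketch in Python of just the transitions shown above (state names abbreviated; ACTION_FIN marks the entries where a FIN must actually be sent):

```python
ACTION_FIN = "send FIN"

# Mirror of the kernel's new_state table: what happens to each TCP state
# when the local side closes the socket.
NEW_STATE = {
    "ESTABLISHED": ("FIN_WAIT1", ACTION_FIN),
    "SYN_SENT":    ("CLOSE", None),
    "SYN_RECV":    ("FIN_WAIT1", ACTION_FIN),
    "FIN_WAIT1":   ("FIN_WAIT1", None),
    "FIN_WAIT2":   ("FIN_WAIT2", None),
    "TIME_WAIT":   ("CLOSE", None),
    "CLOSE":       ("CLOSE", None),
    "CLOSE_WAIT":  ("LAST_ACK", ACTION_FIN),
    "LAST_ACK":    ("LAST_ACK", None),
    "LISTEN":      ("CLOSE", None),
    "CLOSING":     ("CLOSING", None),
}


def close_transition(state):
    """Next state and action when the local side closes, per the table."""
    return NEW_STATE[state]
```

For example, an ESTABLISHED socket that closes moves to FIN_WAIT1 and must send a FIN, while a CLOSE_WAIT socket (the server in our capture, after receiving the client's FIN) moves to LAST_ACK and sends its own FIN.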

The FIN segment itself is handled by the tcp_fin function in linux/net/ipv4/tcp_input.c:

/*
 * 	Process the FIN bit. This now behaves as it is supposed to work
 *	and the FIN takes effect when it is validly part of sequence
 *	space. Not before when we get holes.
 *
 *	If we are ESTABLISHED, a received fin moves us to CLOSE-WAIT
 *	(and thence onto LAST-ACK and finally, CLOSE, we never enter
 *	TIME-WAIT)
 *
 *	If we are in FINWAIT-1, a received FIN indicates simultaneous
 *	close and we go into CLOSING (and later onto TIME-WAIT)
 *
 *	If we are in FINWAIT-2, a received FIN moves us to TIME-WAIT.
 */


void tcp_fin(struct sock *sk)
{
	struct tcp_sock *tp = tcp_sk(sk);

	inet_csk_schedule_ack(sk);

	sk->sk_shutdown |= RCV_SHUTDOWN;
	sock_set_flag(sk, SOCK_DONE);

	switch (sk->sk_state) {
	case TCP_SYN_RECV:
	case TCP_ESTABLISHED:
		/* Move to CLOSE_WAIT */
		tcp_set_state(sk, TCP_CLOSE_WAIT);
		inet_csk(sk)->icsk_ack.pingpong = 1;
		break;

	case TCP_CLOSE_WAIT:
	case TCP_CLOSING:
		/* Received a retransmission of the FIN, do
		 * nothing.
		 */
		break;
	case TCP_LAST_ACK:
		/* RFC793: Remain in the LAST-ACK state. */
		break;

	case TCP_FIN_WAIT1:
		/* This case occurs when a simultaneous close
		 * happens, we must ack the received FIN and
		 * enter the CLOSING state.
		 */
		tcp_send_ack(sk);
		tcp_set_state(sk, TCP_CLOSING);
		break;
	case TCP_FIN_WAIT2:
		/* Received a FIN -- send ACK and enter TIME_WAIT. */
		tcp_send_ack(sk);
		tcp_time_wait(sk, TCP_TIME_WAIT, 0);
		break;
	default:
		/* Only TCP_LISTEN and TCP_CLOSE are left, in these
		 * cases we should never reach this piece of code.
		 */
		pr_err("%s: Impossible, sk->sk_state=%d\n",
		       __func__, sk->sk_state);
		break;
	}

	/* It _is_ possible, that we have something out-of-order _after_ FIN.
	 * Probably, we should reset in this case. For now drop them.
	 */
	skb_rbtree_purge(&tp->out_of_order_queue);
	if (tcp_is_sack(tp))
		tcp_sack_reset(&tp->rx_opt);
	sk_mem_reclaim(sk);

	if (!sock_flag(sk, SOCK_DEAD)) {
		sk->sk_state_change(sk);

		/* Do not send POLL_HUP for half duplex close. */
		if (sk->sk_shutdown == SHUTDOWN_MASK ||
		    sk->sk_state == TCP_CLOSE)
			sk_wake_async(sk, SOCK_WAKE_WAITD, POLL_HUP);
		else
			sk_wake_async(sk, SOCK_WAKE_WAITD, POLL_IN);
	}
}

The function handles the state transition and cleans up out-of-order packets. The TCP_ESTABLISHED case holds an interesting detail: instead of immediately calling tcp_send_ack to acknowledge the FIN, it executes
inet_csk(sk)->icsk_ack.pingpong = 1;
What is this icsk_ack.pingpong? Following the trail leads to include/net/inet_connection_sock.h:

@icsk_ack: Delayed ACK control data

	struct {
		__u8		  pending;	 /* ACK is pending			   */
		__u8		  quick;	 /* Scheduled number of quick acks	   */
		__u8		  pingpong;	 /* The session is interactive		   */
		__u8		  blocked;	 /* Delayed ACK was blocked by socket lock */
		__u32		  ato;		 /* Predicted tick of soft clock	   */
		unsigned long	  timeout;	 /* Currently scheduled timeout		   */
		__u32		  lrcvtime;	 /* timestamp of last received data packet */
		__u16		  last_seg_size; /* Size of last incoming segment	   */
		__u16		  rcv_mss;	 /* MSS used for delayed ACK decisions	   */
	} icsk_ack;

The pingpong field marks the session as interactive and is one of the control bits for delayed ACK. After reading up on icsk_ack.pingpong and delayed ACK, I found the normal close sequence looks like this:

  1. client: FIN (will not send more)
  2. server: ACK (received the FIN)
    … server: sends more data…, client ACKs these data
  3. server: FIN (will not send more)
  4. client: ACK (received the FIN)

And further down it says:

If the server has no more data to send it might close the connection also. In this case steps 2+3 can be merged, e.g. the server sends a FIN+ACK, where the ACK acknowledges the FIN received by the client.

In other words, if the server still has data to send after the client's FIN, it must ACK that FIN first and then send the data. But if the server has nothing more to send and is also closing, the ACK is likely to ride along with the FIN, with the ACK acknowledging the client's FIN.

RFC 1122, section 4.2.3.2 "When to Send an ACK Segment", tells us:

         4.2.3.2  When to Send an ACK Segment
            A host that is receiving a stream of TCP data segments can
            increase efficiency in both the Internet and the hosts by
            sending fewer than one ACK (acknowledgment) segment per data
            segment received; this is known as a "delayed ACK" [TCP:5].

            A TCP SHOULD implement a delayed ACK, but an ACK should not
            be excessively delayed; in particular, the delay MUST be
            less than 0.5 seconds, and in a stream of full-sized
            segments there SHOULD be an ACK for at least every second
            segment.

So that's it:
TCP sends ACKs in one of two modes: quick ACK and delayed ACK.
In quick ACK mode, this host sends an ACK to the peer immediately upon receiving a data segment.
In delayed ACK mode, the ACK is not sent right away; instead the host waits a while, and during that window:

  1. If this host has data to send to the peer, the ACK is piggybacked on that data segment, saving one packet.
  2. If this host has nothing to send, the delayed ACK timer eventually fires and a pure ACK is sent.

In the implementation:
icsk->icsk_ack.pingpong == 0 means quick ACK.
icsk->icsk_ack.pingpong == 1 means delayed ACK.
The FIN in Master Zhou's capture was handled exactly in the icsk->icsk_ack.pingpong == 1 case, so the server's FIN and ACK were merged into one segment.
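The decision logic can be summed up as a toy model. This is purely illustrative (pingpong mirrors the kernel flag, but has_data_to_send and timer_expired are my own names, not kernel identifiers), following the two cases just listed:

```python
def ack_action(pingpong, has_data_to_send, timer_expired=False):
    """Toy model of the quick-ACK vs delayed-ACK decision described above."""
    if pingpong == 0:
        return "send ACK immediately"            # quick ACK mode
    # Delayed ACK mode: hold the ACK and wait.
    if has_data_to_send:
        return "piggyback ACK on outgoing data"  # case 1: saves one packet
    if timer_expired:
        return "send pure ACK"                   # case 2: timer fired
    return "keep waiting"
```

In the captured close, the server had pingpong == 1 and its own FIN counted as outgoing traffic, so the ACK of the client's FIN rode along with it as a single FIN+ACK segment.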

References:
https://github.com/torvalds/linux/
http://blog.csdn.net/wdscq1234/article/details/52430382
http://blog.csdn.net/dog250/article/details/52664508
http://stackoverflow.com/questions/21390479/fin-omitted-fin-ack-sent