Windows 8 VPN Error 720 Fix

I updated my Lenovo T440s laptop to Windows 8.1 the other day and found that my personal VPN would no longer connect. What's weird is that my Android and iOS devices connected normally. So I reinstalled my PPTP and L2TP/IPsec configurations, only to hit the same issue. What was going on?

First, I double-checked that all my other devices could still connect to my VPN server, so the problem was not my VPS. Second, the only recent change was the Windows 8.1 update, which made it the prime suspect. Third, I verified that my Ubuntu client connected successfully.

So the answer was clear: it had nothing to do with my VPS, only with Windows 8. In fact (the following solution came from a forum thread I found), Microsoft has shipped buggy WAN Miniport drivers since January 2013 and still hasn't fixed them. The workaround is as follows:
1) In Device Manager, uninstall the drivers for WAN Miniport (IP), WAN Miniport (IPv6) and WAN Miniport (Network Monitor).
2) Start -> Run -> Regedit -> HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4d36e972-e325-11ce-bfc1-08002be10318} -> Export
3) Edit the exported registry file, locate the three subkeys corresponding to the miniports above and delete them all; double-check that you don't delete the wrong part. Each of these subkeys has a “DriverDesc” value that matches one of your broken miniports.
4) Go back to Device Manager; you can now update the broken WAN Miniport drivers. Right-click one of them, e.g. WAN Miniport (IP) -> Update Driver Software -> Browse my computer -> Let me pick from a list of device drivers -> uncheck “Show compatible hardware” and wait for the driver list to load. In the manufacturer column pick the first “Microsoft” entry, scroll the model list to the top and choose the first [Bluetooth Personal] entry, ignoring any warnings. The miniport now shows up as a fake Bluetooth driver, which you can delete.
5) Repeat step 4 for the remaining WAN Miniport (IPv6) and WAN Miniport (Network Monitor).
6) Reboot; all the correct drivers will then be reinstalled automatically.

KVM Source Code Reading Notes

For now I'll use x86 as the example (likewise below); on x86 KVM takes the form of a driver. These notes mainly discuss AMD's SVM, whose driver code lives in arch/x86/kvm/svm.c. Intel's VMX will be mentioned in passing. Without further ado, let's dive in.

Loading the driver module svm.ko triggers module_init(svm_init) (svm.c). The name in parentheses after a function is the file or path where it is defined; likewise below.

svm_init (svm.c)
     calls kvm_init(kvm_x86_ops, sizeof(struct vcpu_svm)) (virt/kvm/kvm_main.c)
to begin initialization, passing kvm_x86_ops (x86.h) as the argument.
Intel's initialization function in vmx.c likewise calls kvm_init(); only the kvm_x86_ops argument and the size (sizeof struct vcpu_svm vs. struct vcpu_vmx) differ.

At this point the OS has:
1. The many functions predefined in kvm_x86_ops (svm.c).
2. vcpu_svm (kvm_svm.h), whose size we need as a parameter.
3. The call into kvm_init() (kvm_main.c).

Continuing: kvm_init() first calls kvm_init_debug(), which creates some debug entries (the kvm_stats_debugfs_item structures are initialized in x86.c). kvm_init() then calls kvm_arch_init(opaque) (x86.c):

  • It uses the kvm_x86_ops argument passed in; look closely and you will notice it arrives through a void * pointer named opaque (svm.c).
  • Entering kvm_arch_init(opaque): it first checks whether the current OS supports KVM and initializes the global kvm_x86_ops pointer, then calls kvm_mmu_module_init() (mmu.c).
    • That function initializes three scratch caches, then kvm_init_msr_list() is called (MSR is short for model-specific registers); it uses rdmsr_safe() to save the MSRs into the global array msrs_to_save[].
  • Back in kvm_arch_init() (x86.c), kvm_mmu_set_mask_ptes() (mmu.c) is called.
    With that, kvm_arch_init() returns.

Back in kvm_init(), kvm_arch_hardware_setup() (x86.c) is called; it actually invokes kvm_x86_ops->hardware_setup(). Staying with SVM, and assuming the ops structure was wired up from svm.c, this lands in svm_hardware_setup() in svm.c (for Intel's VMX it is hardware_setup() in vmx.c).

  • svm_hardware_setup() allocates two pages and fills both entirely with 1s; then init_msrpm_offsets() is called, and a page is allocated for another global variable, likewise filled with 1s. Note that during this second page setup, set_msr_interception() is called to mark which MSRs should be intercepted.
  • Then the macro for_each_possible_cpu(cpu) is used to call svm_cpu_init() for every possible CPU.
    • svm_cpu_init():
      1. Allocates an svm_cpu_data structure sd for the given CPU.
      2. Initializes sd's cpu field.
      3. Allocates a page for sd's save_area.


  • per_cpu(svm_data, cpu) = sd (include/asm-generic/percpu.h); expanding the macro:
    • per_cpu(var, cpu) => (*SHIFT_PERCPU_PTR(&(var), per_cpu_offset(cpu))), where
      #define SHIFT_PERCPU_PTR(__p, __offset) \
      ({ __verify_pcpu_ptr((__p)); \
      RELOC_HIDE((typeof(*(__p)) __kernel __force *)(__p), (__offset)); \
      })
      This ultimately calls RELOC_HIDE (include/linux/compiler-gcc.h), which, I'd guess, hides the pointer relocation from the compiler.
  • Near the top of svm.c there is the line static DEFINE_PER_CPU(struct svm_cpu_data *, svm_data); (include/linux/percpu-defs.h), which defines a per-CPU variable named svm_data, i.e. one copy of the pointer for each CPU, placed in the kernel's per-CPU data section.
  • The goal is that calling per_cpu(svm_data, cpu) returns the descriptor structure for a given CPU (the current CPU's copy can also be fetched via get_cpu_var() (include/asm-generic/percpu.h)). To achieve this, for each possible CPU we do:
    per_cpu(svm_data, cpu) = sd
  • Now every CPU's slot is initialized, which guarantees that later calls to per_cpu() will find the svm_cpu_data stored here, along with sd->save_area. (That is probably over-explaining it; in short, this is just initialization.)
    It enables later usage such as:
    int me = raw_smp_processor_id();
    sd = per_cpu(svm_data, me);

Returning to kvm_init() (kvm_main.c) again: for every online CPU it calls smp_call_function_single(cpu, kvm_arch_check_processor_compat, &r, 1);

  • First look at kvm_arch_check_processor_compat: it simply calls kvm_x86_ops->check_processor_compatibility. Looking at this void function in svm.c, it merely writes 0 through the pointer it is given, so the interesting part is smp_call_function_single() (kernel/smp.c): it makes the CPU named by the first argument run the callback passed as the second argument, i.e. the check_processor_compatibility just mentioned, via smp_ops.smp_call_function_mask(mask, func, info, wait).

Back in kvm_init() (kvm_main.c) once more, register_cpu_notifier(&kvm_cpu_notifier) is called to register a callback for CPU state changes:

  • The registered kvm_cpu_notifier callback has .notifier_call = kvm_cpu_hotplug.
  • kvm_cpu_hotplug
    • handles the CPU_DYING, CPU_UP_CANCELED and CPU_ONLINE notifications, enabling or disabling the CPU's virtualization feature accordingly
    • hardware_enable
      • hardware_enable_nolock()
        • kvm_arch_hardware_enable()
    • hardware_disable

A reboot notifier is then registered on the same principle; it will be run at reboot time.

kmem_cache_create() (mm/slab_common.c) is then used to create a cache for objects of vcpu_size bytes; presumably this function can also satisfy memory-alignment requirements.

Then the THIS_MODULE argument passed to kvm_init() earlier, i.e. the svm module, is assigned to the owner field of three file-operations structures:
    kvm_chardev_ops.owner = module;
    kvm_vm_fops.owner = module;
    kvm_vcpu_fops.owner = module;
All three are partially initialized global struct file_operations variables that handle reads and writes on the corresponding device files (everything is a file). Their initialized entry points are functions such as kvm_dev_ioctl, kvm_vm_release and noop_llseek (kvm_main.c); anyone familiar with device drivers will recognize the pattern.

Next, misc_register() (drivers/char/misc.c) is called to register a misc device with major number 10 and minor number 232. Then some of KVM's core functions are registered into:

  • the syscore_ops structure, including resume and suspend;
  • the preempt_ops structure, including kvm_sched_in and kvm_sched_out.

The latter (preempt_ops) is the kvm_preempt_ops variable; judging from the names, these two hooks ask the scheduler to notify KVM whenever its tasks are switched in or out.

Finally, kvm_init_debug() is called. I forgot to mention earlier that loading the kvm module depends on debugfs, so it must be mounted beforehand; fortunately nearly every popular distribution ships with debugfs support enabled. This function creates the kvm directory under debugfs and then sets up a series of rather involved tracepoints, which I will not expand on here.

With that, initialization is complete.

References:

  1. KVM: SMALL LOOK-INSIDE
  2. The kvm tree of the Linux kernel source (download: click here)