ACCV paper format
ACCV, the Asian Conference on Computer Vision, is organized by the Asian Federation of Computer Vision (AFCV). First held in 1993, it takes place every two years. ACCV is an artificial intelligence conference recommended by the China Computer Federation (CCF). With an acceptance rate of roughly 20%–25%, it ranks just below the three top computer vision conferences, and its academic level has risen further in recent years. Its purpose is to provide a platform for technical development and exchange for computer vision researchers and industry. In general, "Class A/B/C" refers to the high-quality conferences and journals recommended by CCF.
As noted above, the CCF ranking is drawn up by authorities in the computer science field, whereas SCI journal zones are divided by impact factor. SCI covers a much wider range of disciplines, while CCF focuses on computer science. They simply apply different criteria; both SCI zones and CCF classes carry substantial weight. That said, the CCF ranking is not completely scientific: a Class C venue is not necessarily worse than a Class B one. The ranking is the outcome of negotiation among many senior researchers, whose opinions inevitably differ, so there is no need to obsess over the A/B/C labels.
Top conferences and journals in computer vision
The three major CV conferences
CVPR: IEEE/CVF Conference on Computer Vision and Pattern Recognition (annual, held in June)
ICCV: International Conference on Computer Vision (odd years, held in October)
ECCV: European Conference on Computer Vision (even years, March deadline, held in September)
The two major CV journals
TPAMI: IEEE Transactions on Pattern Analysis and Machine Intelligence
TIP: IEEE Transactions on Image Processing
Computer vision conference list
Class A
ICCV: International Conference on Computer Vision
CVPR: IEEE/CVF Conference on Computer Vision and Pattern Recognition
AAAI: AAAI Conference on Artificial Intelligence
ICML: International Conference on Machine Learning
NIPS (now NeurIPS): Annual Conference on Neural Information Processing Systems
ACM MM: ACM International Conference on Multimedia
Class B
ECCV: European Conference on Computer Vision
Class C
ACCV: Asian Conference on Computer Vision
ICPR: International Conference on Pattern Recognition
BMVC: British Machine Vision Conference
Computer vision journal list
Class A
TPAMI: IEEE Transactions on Pattern Analysis and Machine Intelligence
IJCV: International Journal of Computer Vision
TIP: IEEE Transactions on Image Processing
Class B
CVIU: Computer Vision and Image Understanding
PR: Pattern Recognition
Class C
IET-CVI: IET Computer Vision
IVC: Image and Vision Computing
IJPRAI: International Journal of Pattern Recognition and Artificial Intelligence
Machine Vision and Applications
PRL: Pattern Recognition Letters
Tips for keeping up with papers
A few personal observations:
arXiv is mainly used to stake claims on ideas; I do not recommend trawling it;
Conference papers are peer-reviewed, high-quality, and very timely, but there are still a lot of them, so selective reading is needed;
I recommend following high-quality WeChat accounts such as "新智元", "机器之心", and "雷锋网", which push the newest and most worthwhile papers as soon as they appear;
Journal papers have stood the test of time, but they typically appear one and a half to two years later than the conference version; their timeliness is poor, so I do not recommend focusing on them.
Oct 23 – Oct 27: "Deep Domain Adaptation"
Domain adaptation:
Often used for text classification, it belongs to transductive transfer learning. Definition of transductive transfer learning: given a source domain with its learning task and a target domain with its learning task, transductive transfer learning aims to exploit the knowledge shared by the source and target domains to improve the target prediction function in the target domain.
"Research on Key Technologies of Body-Posture and Gesture Sensing Computation Based on Deep Learning"
Deep-learning-based sEMG gesture recognition:
It requires no additional information or hand-crafted feature extractors. It builds on high-density surface EMG (HD-sEMG): signals acquired with a two-dimensional electrode array, so that the temporal and spatial variation of the electric potential field produced by muscle activity is recorded simultaneously by many electrodes densely distributed on the skin surface. The HD-sEMG signals depict the spatio-temporal distribution of muscle activity within the electrode-covered area, and the instantaneous values of HD-sEMG provide a relatively global measurement of the physiological processes involved in muscle activity at a specific point in time. Different gesture patterns are distinguishable within instantaneous HD-sEMG: the acquired HD-sEMG can be rendered as the spatial distribution of electric potential, and the corresponding heat map is the sEMG image. The pixel count (resolution) of the sEMG image is determined by the electrode array of the acquisition device, i.e., the number of electrodes and the inter-electrode distance (for example, an electrode grid with 16 rows and 8 columns yields 8×16-pixel sEMG images).
The raw sEMG values are first mapped from (−1, 1) to (0, 255), i.e. a linear rescaling I = 255 × (x + 1) / 2, where x is the raw sEMG signal and I is the sEMG image. An 8-layer CNN is then built. The first two convolutional layers extract common low-level image features. The author observed that instantaneous sEMG images exhibit different visual characteristics at different spatial positions: across gestures, the images are brighter in a band slightly below the center and in a strip at the top. Inspired by recent face-recognition work, locally connected structures are introduced as layers 3 and 4: because the filter weights of a locally connected layer are not shared across spatial positions, it can better extract position-specific features. The per-window gesture is then decided by the label that accounts for the largest share of the per-frame predictions within a single window. Because the above experiments achieve high accuracy only when trained and tested on data with large sEMG amplitude, full-wave rectification and low-pass filtering (both widely used sEMG amplitude-estimation methods) are applied to obtain better sEMG signals.
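The two preprocessing steps above, the amplitude-to-pixel mapping and the per-window majority vote, can be sketched as follows (a minimal illustration, assuming a 16×8 electrode grid; not the thesis's actual code):

```python
import numpy as np

def semg_to_image(x):
    """Map raw sEMG values in (-1, 1) to 8-bit intensities in (0, 255).

    `x` is a (rows, cols) array of instantaneous sEMG values, e.g. 16x8
    for a 16-row, 8-column electrode grid.
    """
    return np.clip(np.round((x + 1.0) / 2.0 * 255.0), 0, 255).astype(np.uint8)

def window_vote(frame_labels):
    """Window label = the most frequent per-frame predicted label."""
    labels, counts = np.unique(frame_labels, return_counts=True)
    return int(labels[np.argmax(counts)])

# Example: one instantaneous frame from a 16x8 electrode grid
frame = np.random.uniform(-1, 1, size=(16, 8))
img = semg_to_image(frame)            # 16x8 uint8 sEMG "image"
gesture = window_vote([3, 3, 5, 3])   # majority vote over 4 frames
```

The vote is what turns noisy frame-level predictions into a single window-level decision.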
sEMG gesture recognition based on deep domain adaptation:
This addresses the case where the training-set and test-set sEMG signals come from different recording sessions. Owing to interference from electrode displacement, muscle fatigue, changes in the impedance between electrode and skin, and other factors, sEMG is highly session-dependent, so a trained gesture classifier applied directly to a new session usually has low accuracy. Because the sEMG distribution varies greatly between sessions, gesture recognition from instantaneous sEMG across sessions can accordingly be formulated as a multi-source domain adaptation problem.
When the calibration data are unlabeled, the paper adapts the gesture classifier with Adaptive Batch Normalization (AdaBN). Assuming that the knowledge for distinguishing gestures is stored in the weights of each layer, AdaBN needs no gesture labels on the adaptation data; it gradually updates a small number of network parameters as unlabeled adaptation data accumulate. Given an input U, BN transforms it into V; for the i-th input feature the transformation is v_i = γ_i · (u_i − E[u_i]) / √(Var[u_i]) + β_i.
- In the training phase, each BN layer computes the mean and variance statistics of each source domain independently. Because BN computes statistics independently per mini-batch during training, it suffices to ensure that all samples in a mini-batch come from the same session.
- In the recognition phase, given unlabeled data A, AdaBN runs forward propagation and updates the statistics.
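The core of AdaBN can be sketched without a deep-learning framework: keep the learned scale/shift (γ, β) and simply replace the normalization statistics with ones computed on the unlabeled target session. A standalone, hypothetical illustration, not the paper's implementation:

```python
import numpy as np

class AdaBN:
    """Minimal AdaBN sketch for one feature layer.

    fit_source() sets BN statistics from a source-session batch;
    adapt() recomputes them on unlabeled target-session data, while
    gamma and beta (the learned, label-dependent parameters) are kept.
    """

    def __init__(self, gamma, beta, eps=1e-5):
        self.gamma, self.beta, self.eps = gamma, beta, eps
        self.mean = None
        self.var = None

    def fit_source(self, X):
        # Training-time BN statistics from one source session (batch).
        self.mean, self.var = X.mean(axis=0), X.var(axis=0)
        return self

    def adapt(self, X_target):
        # AdaBN step: statistics from unlabeled target data only.
        self.mean, self.var = X_target.mean(axis=0), X_target.var(axis=0)
        return self

    def transform(self, X):
        u = (X - self.mean) / np.sqrt(self.var + self.eps)
        return self.gamma * u + self.beta
```

After `adapt()`, target-session features are re-centered into the range the trained weights expect, which is why no target labels are needed.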
Accuracy of this method: 30.5% on single frames and 39.2% with a 150 ms window, versus 34.1% for an alternative approach using a hand-crafted feature set (150 ms window) with linear discriminant analysis.
Randomly selected subsets of the unlabeled test set (0.1%, 0.5%, 1%, 5%, 10%) were used for deep domain adaptation, after which gesture-recognition accuracy was evaluated on the whole test set. Accuracy was observed to peak after roughly 5% of the adaptation data, about 20,000 frames, i.e. about 10 seconds at the 2,048 Hz sampling rate of CSL-HDEMG.
Moreover, the adaptation algorithm does not need to observe all gesture classes: adapting with only 5 or 13 of the 27 gestures yields 31.3% (73.2%) and 34.6% (81.4%) respectively. A different approach uses the sEMG topography, defined as the two-dimensional time-averaged intensity map of the sEMG signal, in which each pixel is the root mean square of one channel's sEMG over a given time window, for gesture recognition.
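The sEMG topography just described is straightforward to compute; a minimal sketch, assuming the channels are laid out on a 16×8 grid (the grid shape is an assumption for illustration):

```python
import numpy as np

def semg_topography(signals, grid_shape=(16, 8)):
    """sEMG topography: a 2-D intensity map in which each pixel is the
    root mean square (RMS) of one channel over the time window.

    `signals` has shape (n_samples, n_channels); channels are assumed
    to map row-major onto `grid_shape`.
    """
    rms = np.sqrt((signals ** 2).mean(axis=0))
    return rms.reshape(grid_shape)
```

Unlike the instantaneous sEMG image, this averages over the window, so it trades temporal resolution for a smoother spatial pattern.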
"Revealing Critical Channels and Frequency Bands for Emotion Recognition from EEG with Deep Belief Network"
In EEG-based emotion recognition, multi-channel EEG contains signals irrelevant to emotion, which not only introduce noise but also degrade the system's recognition ability. The paper proposes a novel deep belief network (DBN) approach to examine the critical EEG channels and frequency bands for emotion recognition.
Emotion is analyzed mainly from behavioral and physiological responses; compared with facial expressions and gestures, EEG offers higher accuracy and more objective evaluation. The paper records EEG with an ESI NeuroScan system from a 62-channel electrode cap at a 1,000 Hz sampling rate. Each experiment contains 15 trials; each trial includes a 15 s cue, 45 s of testing and feedback, and 5 s of rest. The paper evaluates 30 experiments in total.
The raw EEG is first downsampled to 200 Hz, then band-pass filtered from 0.3 Hz to 50 Hz to remove noise and artifacts. The previously proposed differential entropy feature [1][2] is then extracted: for a fixed-length EEG segment, differential entropy is equivalent to the logarithmic energy spectrum in a given frequency band. Differential entropy has been shown to discriminate EEG patterns between low- and high-frequency energy, so it is computed in five bands (δ: 1–3 Hz, θ: 4–7 Hz, α: 8–13 Hz, β: 14–30 Hz, γ: 31–50 Hz) using a 256-point short-time Fourier transform, and the features are normalized to 0–1.
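The band-wise differential entropy feature can be sketched as follows. For a Gaussian signal, DE = ½·log(2πeσ²), so in practice it reduces to a log of the band energy; here the band variance is taken from an FFT power spectrum. A sketch under these assumptions, not the paper's code:

```python
import numpy as np

# Frequency bands used in the paper (Hz)
BANDS = {"delta": (1, 3), "theta": (4, 7), "alpha": (8, 13),
         "beta": (14, 30), "gamma": (31, 50)}

def differential_entropy(x, fs=200, n_fft=256):
    """Band-wise differential entropy of one EEG channel segment.

    Computes a 256-point FFT power spectrum, sums the power in each
    band as an estimate of the band variance sigma^2, and returns
    DE = 0.5 * log(2*pi*e*sigma^2) per band.
    """
    spec = np.abs(np.fft.rfft(x, n=n_fft)) ** 2 / len(x)
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / fs)
    de = {}
    for name, (lo, hi) in BANDS.items():
        power = spec[(freqs >= lo) & (freqs <= hi)].sum()
        de[name] = 0.5 * np.log(2 * np.pi * np.e * power)
    return de
```

A 40 Hz-dominated segment, for instance, should yield its largest DE in the gamma band, which is exactly the property the paper's channel/band analysis relies on.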
Using the denoised 62-channel features in the five bands as input, the DBN reaches 86.08% accuracy with an 8.34% standard deviation. The paper examines critical channels and bands by analyzing the weight distribution of the trained DBN. The weights matter for the emotion model because the weights of neurons that contribute strongly to the learning task grow, while those of irrelevant neurons tend toward a random distribution. Figure 1 shows the weight distribution of the first layer after training: the largest weights lie mainly in the beta and gamma bands, indicating that these bands contain more important discriminative information.
From Figure 2 we can see that the lateral temporal and prefrontal areas are activated more readily in the beta and gamma bands than other brain areas. It can therefore be concluded that the lateral temporal and prefrontal channels are the critical channels, and beta and gamma the critical bands, for recognizing positive, neutral, and negative emotions.
As shown in Figure 3, four different electrode-placement profiles were designed from the characteristics of the weight distribution over brain areas, with 4, 6, 9, and 12 channels. The 4-channel profile achieves a best mean accuracy/standard deviation of 82.88%/10.92%, versus 83.99%/10.92% for all 62 channels, indicating that the four electrodes FT7, T7, FT8, and T8 are the key electrodes for discriminating emotional features.
[1] Duan R N, Zhu J Y, Lu B L. Differential entropy feature for EEG-based emotion classification[C]// International IEEE/EMBS Conference on Neural Engineering. IEEE, 2013: 81-84.
[2] Zheng W L, Zhu J Y, Peng Y, et al. EEG-based emotion classification using deep belief networks[C]// IEEE International Conference on Multimedia and Expo. IEEE, 2014: 1-6.
EEG papers (brain decoding: behavior, emotion):
Real-time naive learning of neural correlates in ECoG Electrophysiology
A Deep Learning Method for Classification of EEG Data Based on Motor Imagery
Affective state recognition from EEG with deep belief networks
A Novel Semi-Supervised Deep Learning Framework for Affective State Recognition on EEG Signals
Revealing critical channels and frequency bands for emotion recognition from EEG with deep belief network
EEG-based emotion recognition using deep learning network with principal component based covariate shift adaptation
Classifying EEG recordings of rhythm perception
Using Convolutional Neural Networks to Recognize Rhythm Stimuli from Electroencephalography Recordings
Convolutional neural network with embedded Fourier transform for EEG classification
Continuous emotion detection using EEG signals and facial expressions
Deep Feature Learning for EEG Recordings
Abnormality classification papers (Alzheimer's disease, epilepsy, sleep-stage detection):
Classification of Electrocardiogram Signals with Deep Belief Networks
Modeling electroencephalography waveforms with semi-supervised deep belief nets: fast classification and anomaly measurement
Deep belief networks used on high resolution multichannel electroencephalography data for seizure detection
Deep Learning in the EEG Diagnosis of Alzheimer's Disease
Sleep stage classification using unsupervised feature learning
Classification of patterns of EEG synchronization for seizure prediction
Recurrent neural network based prediction of epileptic seizures in intra- and extracranial EEG
EEG-based lapse detection with high temporal resolution