Jieping Ye, Vice President, Didi Chuxing
Talk title: Opportunities and Challenges of AI in Transportation
Abstract: Didi Chuxing is China's largest ride-sharing platform, serving more than 500 million users. Every day the platform generates over 100 TB of data and handles more than 40 billion route-planning requests and over 15 billion localization requests. In this talk, I will share how Didi Chuxing uses big data and AI to analyze travel data and deliver efficient mobility services to hundreds of millions of users.
Speaker bio: Dr. Jieping Ye is head of Didi AI Labs and a Vice President of Didi Chuxing. He is also a tenured professor at the University of Michigan. His research focuses on big data, machine learning, and data mining, and their applications in transportation and biomedicine. He has served as a senior program committee member, area chair, and program committee vice chair of many top international AI conferences, including NIPS, ICML, KDD, IJCAI, ICDM, and SDM, and is an associate editor of leading AI journals including DMKD, IEEE TKDE, and IEEE TPAMI. He received the NSF CAREER Award in 2010, and his research has won best-paper awards at the top AI conferences KDD and ICML.
Zhong Su, Research Director, IBM China Research Lab
Talk title: Challenges in AI for Business
Abstract: TBA
Speaker bio: Zhong Su is a research director at IBM China Research Lab and chief scientist for big data and cognitive computing. He joined IBM after receiving his Ph.D. in Computer Science from Tsinghua University in 2002. At the lab he has worked on text analytics, enterprise search, metadata management, data integration, social computing, and information visualization. Several technologies developed under his leadership have been adopted in IBM software products and ranked first in multiple international and domestic technology evaluations, earning him IBM's worldwide Research Technical Accomplishment Award several times, including Outstanding Technical Accomplishment Awards in 2008, 2010, 2014, and 2016. He was named an IBM Master Inventor in 2007 and chairs the lab's patent review committee. To date he has published more than 60 papers in top international conferences and journals and holds more than 50 granted or pending patents. He also serves as an adjunct professor at Nankai University, a guest professor at the APEX Lab of Shanghai Jiao Tong University, chair of the IBM Greater China Technical Expert Council, a council member of the Chinese Information Processing Society of China, and a member of the CCF Technical Committee on Artificial Intelligence and Pattern Recognition.
Junbo Zhang, Director, Urban Computing Business Unit, JD Finance
Talk title: Urban Computing: Building New Smart Cities with Big Data and AI
Abstract: Urban computing is an emerging field in which computer science, taking the city as its setting, intersects with disciplines such as urban planning, transportation, energy, environment, and economics. By continuously acquiring, integrating, and mining big data from a city's different domains to tackle urban pain points, it offers a path from today's cities toward new smart cities. This talk will present JD Urban Computing's vision, introduce the architecture of its urban computing platform and top-level smart-city designs for several cities, explain deep learning algorithms for spatio-temporal data and multi-source data fusion techniques, and share AI-based case studies including air quality prediction, pipe-network water quality prediction, and crowd flow prediction, as well as a credit-city system built on big data and AI. These techniques have not only been published at top international conferences and journals such as KDD, but have also been deployed in real-world scenarios. For more information, see the Urban Computing homepage:.
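To make the crowd-flow case concrete, a common first step in spatio-temporal deep learning, though not necessarily the exact pipeline of this talk, is to rasterize raw location records into per-cell counts on a grid for each time step, producing a tensor that a deep model can consume. The following is a minimal numpy sketch; the function name and toy data are illustrative assumptions, not code from the speaker's systems.

```python
import numpy as np

def to_flow_grid(events, n_steps, grid=(4, 4)):
    """Aggregate (t, x, y) location records into a T x H x W tensor of
    per-cell counts -- the grid representation commonly fed to
    spatio-temporal deep models for crowd-flow prediction.
    Coordinates x, y are assumed normalized to [0, 1)."""
    H, W = grid
    tensor = np.zeros((n_steps, H, W))
    for t, x, y in events:
        tensor[t, int(y * H), int(x * W)] += 1  # count one record in its cell
    return tensor

# Three toy records: two in the same cell at step 0, one at step 1.
events = [(0, 0.1, 0.10), (0, 0.1, 0.15), (1, 0.9, 0.9)]
grid = to_flow_grid(events, n_steps=2)
print(grid[0, 0, 0])  # two records share cell (0, 0) at step 0
```

A predictor then learns to map the tensors for recent time steps to the tensor for the next step, which is how crowd-flow prediction is usually framed on such grids.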
Speaker bio: Dr. Junbo Zhang is Director of the Urban Computing Business Unit at JD Finance, where he oversees the unit's AI platform, algorithm models, and technology R&D. Before joining JD, he was a researcher at Microsoft Research Asia and at the Lenovo Hong Kong Big Data R&D Center, and held visiting positions for several years at the Chinese University of Hong Kong, Huawei's Noah's Ark Lab in Hong Kong, Georgia State University in the US, and the Belgian Nuclear Research Centre, gaining extensive experience in artificial intelligence and spatio-temporal data mining. He has published more than 40 papers, including three best papers, in international journals such as Artificial Intelligence and IEEE TKDE, domestic journals such as the Journal of Software, and international conferences such as KDD, AAAI, and IJCAI, along with a monograph published by Science Press; his work has attracted wide attention. He received an outstanding doctoral dissertation nomination award from the Chinese Association for Artificial Intelligence and an ACM chapter outstanding doctoral dissertation award.
Shuming Shi, Head of the Natural Language Processing Center, Tencent AI Lab
Talk title: Understanding Language, Facilitating Communication: NLP Research at Tencent AI Lab
Abstract: TBA
Speaker bio: Dr. Shuming Shi heads the Natural Language Processing Center at Tencent AI Lab; his main research interests are semantic understanding and intelligent human-computer interaction. He has published more than 40 papers in first-tier international conferences and journals including ACL, EMNLP, AAAI, IJCAI, WWW, SIGIR, and TACL, has repeatedly served on the program committees of conferences such as ACL, EMNLP, WWW, and AAAI, and reviews for journals such as TOIS and TKDE. Beyond academic research, he has extensive system-development and engineering experience in search, knowledge graphs, natural language understanding, and dialog bots. He graduated from the Department of Computer Science and Technology at Tsinghua University; before joining Tencent he worked at Microsoft Research Asia (lead researcher) and Alibaba Group (senior algorithm expert).
Hongxia Yang, Senior Algorithm Expert, Alibaba
Talk title: Extremely Large Scale Graphical Model in Practice
Abstract: Extremely large scale graphical models have been playing an increasingly important role in big data companies. In particular, graph inference combined with deep learning has achieved successful staged results in many of Alibaba's business scenarios. The data of the Alibaba ecosystem is extremely rich and varied, covering everything from shopping and travel to entertainment and payment. We are developing a new generation of graph learning platform that can efficiently perform inference analysis on graphs with billions of nodes and billions of edges. In this talk, I will share two related works, accepted by IJCAI and KDD 2018 respectively: 1. Network representation learning (RL) aims to transform the nodes in a network into low-dimensional vector spaces while preserving the inherent properties of the network. Though network RL has been intensively studied, most existing works focus on either network structure or node attribute information. We propose a novel framework, named ANRL, to incorporate both network structure and node attribute information in a principled way. Specifically, we propose a neighbor enhancement autoencoder to model the node attribute information, which reconstructs a node's target neighbors instead of the node itself. To capture the network structure, an attribute-aware skip-gram model is designed on top of the attribute encoder to formulate the correlations between each node and its direct or indirect neighbors. We conduct extensive experiments on six real-world networks, including two social networks, two citation networks, and two user behavior networks. The results empirically show that ANRL achieves significant gains in node classification and link prediction tasks. 2. The e-commerce era is witnessing a rapid increase in mobile Internet users; major e-commerce companies now see billions of mobile accesses every day.
Hidden in these records are valuable user behavioral characteristics such as shopping preferences and browsing patterns. To extract this knowledge from the huge dataset, we first need to link records to the corresponding mobile devices. This Mobile Access Records Resolution (MARR) problem faces two major challenges: (1) device identifiers and other attributes in access records may be missing or unreliable; (2) the dataset contains billions of access records from millions of devices. To the best of our knowledge, no existing method resolves entities at such a massive scale using mobile device identifiers; it is a novel and challenging industrial problem of the mobile Internet. To address these issues, we propose a SParse Identifier-linkage Graph (SPI-Graph), combined with abundant mobile device profiling data, to accurately match mobile access records to devices. Furthermore, unsupervised and semi-supervised versions of a Parallel Graph-based Record Resolution (PGRR) algorithm are developed to effectively exploit large-scale server clusters comprising more than 1,000 computing nodes. We empirically show the superior performance of the PGRR algorithms on a very challenging and sparse real dataset containing 5.28 million nodes and 31.06 million edges.
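The "neighbor enhancement" idea in the first work above can be illustrated in a few lines: rather than reconstructing a node's own attributes, the autoencoder's reconstruction target is built from its neighbors' attributes, while skip-gram pairs drawn from walks capture structure. This numpy sketch is illustrative only; the function names and toy graph are our assumptions, not code from the ANRL paper.

```python
import numpy as np

def neighbor_enhancement_targets(adj, attributes):
    """For each node, build the autoencoder's reconstruction target as the
    mean of its neighbors' attribute vectors (instead of the node's own
    attributes), in the spirit of ANRL's neighbor enhancement."""
    deg = adj.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1  # isolated nodes fall back to a zero target
    return (adj @ attributes) / deg

def skip_gram_pairs(walk, window=2):
    """Extract (center, context) node pairs from one walk over the graph,
    the input to a skip-gram-style structural objective."""
    pairs = []
    for i, center in enumerate(walk):
        for j in range(max(0, i - window), min(len(walk), i + window + 1)):
            if j != i:
                pairs.append((center, walk[j]))
    return pairs

# Toy path graph 0 - 1 - 2 with 2-dimensional node attributes.
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
attrs = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [2.0, 0.0]])
targets = neighbor_enhancement_targets(adj, attrs)
print(targets[1])  # node 1's target is the mean of nodes 0 and 2
print(skip_gram_pairs([0, 1, 2], window=1))
```

In the full model the encoder producing the node embedding is shared between the two objectives, so the learned representation is shaped by both attributes and structure at once.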
Speaker bio: Dr. Hongxia Yang is a Senior Staff Data Scientist and Director at Alibaba Group. Her interests span Bayesian statistics, time series analysis, spatial-temporal modeling, survival analysis, machine learning, data mining, and their applications to business analytics and big data. Her team's ongoing projects include a huge dynamic multi-level heterogeneous graphical model for the user profiling system, a large-scale distributed knowledge graph with efficient inference for the data enabling platform, and a general ensemble prediction framework for various revenue and cost forecasting tasks, among others. She previously worked as Principal Data Scientist at Yahoo! Inc. and as a Research Staff Member at IBM T.J. Watson Research Center, and received her PhD in Statistics from Duke University in 2010. She has published over 30 top conference and journal papers, holds 9 filed or pending US patents, and serves as an associate editor for Applied Stochastic Models in Business and Industry. She was elected a member of the International Statistical Institute (ISI) in 2017.
Xun Zhou, Technical Director, iQIYI
Talk title: How Big Data Intelligence Empowers the Pan-Entertainment Ecosystem
Abstract: iQIYI runs a matrix of Internet products that provide pan-entertainment services to hundreds of millions of users. These products generate massive amounts of data every day, which we use to give all-around support to the company's decision-making and operations, weaving data-driven capability into every aspect of the business. This talk covers the technical architecture of iQIYI's data systems and some innovative data-driven applications, how we build user-centric personalized products and data analytics platforms to serve the pan-entertainment ecosystem, and lessons learned in building a data team.
Speaker bio: Xun Zhou joined iQIYI in 2013 and is currently a technical director, responsible for data product R&D and technical team management in areas including the user profiling platform, personalized recommendation, data warehousing, and business analytics.
Fen Xia, Founder, 智铀科技
Talk title: Large-Scale Machine Learning and AutoML
Abstract: A brief introduction to the problem scope, core techniques, taxonomy, and major algorithms of machine learning; the theory and applied practice of fourth-generation machine learning; and the state of AutoML research, its technical challenges, and future outlook.
Speaker bio: Dr. Fen Xia graduated from the Institute of Automation, Chinese Academy of Sciences, where he studied under Wang Jue, a leading authority on machine learning. A former senior scientist at Baidu, he assisted Tong Zhang, then director of Baidu Research's Big Data Lab (now director of Tencent AI Lab), in building a team of more than 50 people, managed a large-scale machine learning team of over 20, and won Baidu's highest technical innovation award several times. He has published papers at top machine learning conferences such as ICML and NIPS.
At Baidu, Dr. Xia led the team that built Pulsar, a world-leading ultra-large-scale sparse-architecture automated machine learning platform covering more than 90% of the company's business lines, including Baidu's core monetization system Phoenix Nest (凤巢), finance, and Nuomi. It ranked first by user count among the company's internal machine learning platforms, covering 450 million daily requests and over 100 million RMB in daily revenue, with cumulative CTR improvements of more than 50%.
In addition, as technical lead of Baidu's ad-network CTR team, he independently designed a click-through-rate prediction system that accommodates trillions of features, updates models at minute-level latency, and performs automatic, efficient deep learning, with more than five innovations surpassing techniques and algorithms publicly published by Google.
While at Baidu, serving as chief AI expert for its open cloud, he discussed intelligent-upgrade and AI+ solutions with many traditional industries. Having seen their challenges of lacking technology, scarce AI talent, and high upgrade costs, Dr. Xia decided to focus on building automated machine learning products that lower the barrier to AI as far as possible and help more traditional industries embrace it. He founded 智铀科技 in June 2017, where he also serves as chief scientist; the company closed a Pre-A funding round in early 2018 at a valuation of 400 million RMB.