The tracking events in our project need to be enriched with the visitor's geographic location. When latitude/longitude is available we could resolve it through the Baidu Maps or AMap (Gaode) APIs, but those interfaces are generally rate-limited or paid. Instead, we resolve the visitor's location from the IP address. Based on some research, the best approach for IP resolution is an offline mapping/lookup database: lookups are fast and accurate, and there are no external calls. Here we use the ip2region project, described in detail below.
- Reference project:
- ip2region: [https://github.com/lionsoul2014/ip2region](https://github.com/lionsoul2014/ip2region)
- ip2region is a free IP-address lookup library: an IP-to-region mapping database;
- its data aggregates several well-known IP-to-location providers (the Taobao IP library, GeoIP, and the CZ88 "纯真" IP library)
- Usage (a minimal standalone lookup sketch follows this list):
- depends on the ip2region-1.7.2.jar package
- the IP database resource file is ip2region.db
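Before wiring it into Hive, here is a minimal standalone lookup sketch using the ip2region 1.x API; the local path and the demo IP are placeholders for illustration:

import org.lionsoul.ip2region.DataBlock;
import org.lionsoul.ip2region.DbConfig;
import org.lionsoul.ip2region.DbSearcher;

public class Ip2RegionDemo {
    public static void main(String[] args) throws Exception {
        // Point this at a local copy of ip2region.db
        DbSearcher searcher = new DbSearcher(new DbConfig(), "/tmp/ip2region.db");
        // The region string is "country|area|province|city|ISP"; "0" marks an unknown field
        DataBlock block = searcher.btreeSearch("221.226.1.30");
        System.out.println(block.getRegion());
        searcher.close();
    }
}

The Maven configuration (pom.xml) for the UDF project: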
<properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <project.build.targetJdk>1.7</project.build.targetJdk>
    <project.report.outputEncoding>UTF-8</project.report.outputEncoding>
    <project.report.inputEncoding>UTF-8</project.report.inputEncoding>
    <hive.version>0.11.0</hive.version>
    <hadoop.version>2.2.0</hadoop.version>
    <junit.version>4.12</junit.version>
    <commons-codec.version>1.10</commons-codec.version>
</properties>
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.apache.hive</groupId>
            <artifactId>hive-jdbc</artifactId>
            <version>${hive.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hive</groupId>
            <artifactId>hive-exec</artifactId>
            <version>${hive.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-common</artifactId>
            <version>${hadoop.version}</version>
        </dependency>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>${junit.version}</version>
        </dependency>
        <dependency>
            <groupId>org.lionsoul</groupId>
            <artifactId>ip2region</artifactId>
            <version>1.7.2</version>
        </dependency>
        <dependency>
            <groupId>nl.basjes.parse.useragent</groupId>
            <artifactId>yauaa-hive</artifactId>
            <classifier>udf</classifier>
            <version>5.11</version>
        </dependency>
        <dependency>
            <groupId>commons-codec</groupId>
            <artifactId>commons-codec</artifactId>
            <version>${commons-codec.version}</version>
        </dependency>
        <dependency>
            <groupId>net.sourceforge.javacsv</groupId>
            <artifactId>javacsv</artifactId>
            <version>2.0</version>
        </dependency>
        <dependency>
            <groupId>com.github.codesorcery</groupId>
            <artifactId>juan</artifactId>
            <version>0.2.0</version>
        </dependency>
    </dependencies>
</dependencyManagement>
<dependencies>
    <dependency>
        <groupId>org.apache.hive</groupId>
        <artifactId>hive-jdbc</artifactId>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.apache.hive</groupId>
        <artifactId>hive-exec</artifactId>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-common</artifactId>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>commons-codec</groupId>
        <artifactId>commons-codec</artifactId>
    </dependency>
    <dependency>
        <groupId>org.lionsoul</groupId>
        <artifactId>ip2region</artifactId>
    </dependency>
    <dependency>
        <groupId>nl.basjes.parse.useragent</groupId>
        <artifactId>yauaa-hive</artifactId>
        <classifier>udf</classifier>
    </dependency>
    <dependency>
        <groupId>net.sourceforge.javacsv</groupId>
        <artifactId>javacsv</artifactId>
    </dependency>
    <dependency>
        <groupId>com.github.codesorcery</groupId>
        <artifactId>juan</artifactId>
    </dependency>
    <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>jdk.tools</groupId>
        <artifactId>jdk.tools</artifactId>
        <version>1.6</version>
        <scope>system</scope>
        <systemPath>${JAVA_HOME}/lib/tools.jar</systemPath>
    </dependency>
</dependencies>
<build>
    <sourceDirectory>src/main/java</sourceDirectory>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-shade-plugin</artifactId>
            <version>2.4.3</version>
            <executions>
                <execution>
                    <phase>package</phase>
                    <goals>
                        <goal>shade</goal>
                    </goals>
                    <configuration>
                        <artifactSet>
                            <excludes>
                                <exclude></exclude>
                            </excludes>
                        </artifactSet>
                        <shadedArtifactAttached>true</shadedArtifactAttached>
                    </configuration>
                </execution>
            </executions>
        </plugin>
        <!-- Tweak the compiler to use more memory and use UTF-8 for the source code. -->
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-compiler-plugin</artifactId>
            <version>3.1</version>
            <configuration>
                <source>${project.build.targetJdk}</source>
                <target>${project.build.targetJdk}</target>
                <encoding>${project.build.sourceEncoding}</encoding>
                <showWarnings>true</showWarnings>
            </configuration>
        </plugin>
        <!-- Resource plugins should always use UTF-8 -->
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-resources-plugin</artifactId>
            <version>2.6</version>
            <configuration>
                <encoding>${project.build.sourceEncoding}</encoding>
            </configuration>
        </plugin>
    </plugins>
</build>
<profiles>
    <profile>
        <id>release</id> <!-- used for deployment -->
        <build>
            <plugins>
                <!-- Source -->
                <plugin>
                    <groupId>org.apache.maven.plugins</groupId>
                    <artifactId>maven-source-plugin</artifactId>
                    <version>2.2.1</version>
                    <executions>
                        <execution>
                            <phase>package</phase>
                            <goals>
                                <goal>jar-no-fork</goal>
                            </goals>
                        </execution>
                    </executions>
                </plugin>
                <!-- Javadoc -->
                <plugin>
                    <groupId>org.apache.maven.plugins</groupId>
                    <artifactId>maven-javadoc-plugin</artifactId>
                    <version>2.9.1</version>
                    <executions>
                        <execution>
                            <phase>package</phase>
                            <goals>
                                <goal>jar</goal>
                            </goals>
                        </execution>
                    </executions>
                </plugin>
                <!-- GPG -->
                <plugin> <!-- sign the artifacts -->
                    <groupId>org.apache.maven.plugins</groupId>
                    <artifactId>maven-gpg-plugin</artifactId>
                    <version>1.6</version>
                    <executions>
                        <execution>
                            <phase>verify</phase>
                            <goals>
                                <goal>sign</goal>
                            </goals>
                        </execution>
                    </executions>
                </plugin>
            </plugins>
        </build>
        <distributionManagement>
            <snapshotRepository>
                <id>snapshots</id>
                <url>https://oss.sonatype.org/content/repositories/snapshots/</url>
            </snapshotRepository>
            <repository>
                <id>releases</id>
                <url>https://oss.sonatype.org/service/local/staging/deploy/maven2/</url>
            </repository>
        </distributionManagement>
    </profile>
</profiles>
ConstantsGeoIp.java:
package com.my.hive.udf.ipgeo;

/**
 * Key names and constants used by the IP-geo UDF.
 */
public class ConstantsGeoIp {
    public static final String GEOLITE2_CITY_FILE = "GeoLite2-City.mmdb";
    public static final String SEP = "\\|";                      // field separator (regex-escaped pipe)
    public static final String SEP_IP2REGION = "\\|";            // ip2region field separator
    public static final String FILE_IP2REGION = "ip2region.db";  // ip2region database file name
    public static final String KEY_COUNTRY_ID = "countryID";     // country
    public static final String KEY_COUNTRY_NAME = "countryName";
    public static final String KEY_COUNTRY_NAME_EN = "countryNameEn";
    public static final String KEY_PROVINCE_ID = "provinceID";   // province
    public static final String KEY_PROVINCE_NAME = "provinceName";
    public static final String KEY_PROVINCE_NAME_EN = "provinceNameEn";
    public static final String KEY_CITY_ID = "cityID";           // city
    public static final String KEY_CITY_NAME = "cityName";
    public static final String KEY_CITY_NAME_EN = "cityNameEn";
    public static final String KEY_ISP_ID = "ispID";             // carrier/ISP, e.g. China Telecom
    public static final String KEY_ISP_NAME = "ispName";
    public static final String KEY_ISP_NAME_EN = "ispNameEn";
    public static final String KEY_REGION_ID = "regionID";       // area, e.g. South China
    public static final String KEY_REGION_NAME = "regionName";
    public static final String KEY_REGION_NAME_EN = "regionNameEn";
    public static final String KEY_CONTINENT_ID = "continentID"; // continent
    public static final String KEY_CONTINENT_NAME = "continentName";
    public static final String KEY_CONTINENT_NAME_EN = "continentNameEn";
    public static final String KEY_LATITUDE = "latitude";        // latitude
    public static final String KEY_LONGITUDE = "longitude";      // longitude
}
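A note on the SEP constants above: String.split takes a regular expression, which is why the pipe is escaped. A small sketch of how a raw ip2region result lines up with these keys (the sample region string is illustrative):

// "\\|" is a regex-escaped pipe, since String.split expects a regex
String region = "中国|华东|江苏省|南京市|电信"; // sample value for illustration
String[] parts = region.split(ConstantsGeoIp.SEP_IP2REGION);
// parts[0] -> countryName, parts[1] -> regionName, parts[2] -> provinceName,
// parts[3] -> cityName,    parts[4] -> ispName   ("0" marks an unknown field)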
IpUtils.java:
package com.my.utils.ip;

import java.util.regex.Pattern;

public class IpUtils {
    // Each octet 0-255 (first octet must not be 0); compiled once and reused
    private static final Pattern IPV4_PATTERN = Pattern.compile(
            "([1-9]|[1-9]\\d|1\\d{2}|2[0-4]\\d|25[0-5])(\\.(\\d|[1-9]\\d|1\\d{2}|2[0-4]\\d|25[0-5])){3}");

    public static boolean isIpV4(String ipAddress) {
        return ipAddress != null && IPV4_PATTERN.matcher(ipAddress).matches();
    }
}
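A quick sanity check for the validator; a minimal JUnit 4 sketch (JUnit is already declared in the pom; the test class and cases are our own additions). Note the regex rejects a leading 0 octet, so "0.1.2.3" is treated as invalid:

package com.my.utils.ip;

import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class IpUtilsTest {
    @Test
    public void acceptsDottedQuadAddresses() {
        assertTrue(IpUtils.isIpV4("221.226.1.30"));
        assertTrue(IpUtils.isIpV4("255.255.255.255"));
    }

    @Test
    public void rejectsMalformedInput() {
        assertFalse(IpUtils.isIpV4("256.1.1.1")); // octet out of range
        assertFalse(IpUtils.isIpV4("0.1.2.3"));   // leading 0 octet rejected by the regex
        assertFalse(IpUtils.isIpV4("1.2.3"));     // too few octets
        assertFalse(IpUtils.isIpV4("not-an-ip"));
        assertFalse(IpUtils.isIpV4(null));
    }
}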
UDFIp2Region.java:
package com.my.hive.udf.ipgeo;
import java.io.*;
import java.net.URI;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hive.ql.exec.Description;
import org.apache.hadoop.hive.ql.exec.UDFArgumentException;
import org.apache.hadoop.hive.ql.metadata.HiveException;
import org.apache.hadoop.hive.ql.udf.generic.GenericUDF;
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorFactory;
import org.apache.hadoop.hive.serde2.objectinspector.PrimitiveObjectInspector;
import org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorFactory;
import org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorUtils;
import org.apache.hadoop.io.Text;
import org.lionsoul.ip2region.DataBlock;
import org.lionsoul.ip2region.DbConfig;
import org.lionsoul.ip2region.DbSearcher;
import com.my.utils.ip.IpUtils;
@Description(name = "ip2geo", value = "_FUNC_(ip) - Returns a map of geo fields for the given IP address.\n"
        + "Based on https://github.com/lionsoul2014/ip2region.\nThe IP database is loaded from HDFS at class-load time. \n"
        + " > Param 1: IP address string\n"
        + "Example:\n"
        + " > CREATE TEMPORARY FUNCTION ip2geo AS 'com.my.hive.udf.ipgeo.UDFIp2Region' \n"
        + " > SELECT ip2geo('221.226.1.30'), ip2geo('221.226.1.30')['provinceName'] \n"
        + "")
public class UDFIp2Region extends GenericUDF {
    PrimitiveObjectInspector inputOI;
    private static DbSearcher searcher = null;
    private static boolean isNonInit = true; // not yet initialized
    private static List<String> fieldNames = null;
    private static byte[] data;
    private static Configuration conf;
    private static FileSystem fs;
    private static InputStream in;
    static {
        // Load the database once at class-load time
        ByteArrayOutputStream out = null;
        try {
            // Change this to the HDFS path of your ip2region.db file;
            // no host/port is needed when the default FS is configured
            String uri = "hdfs:///warehouse/dd/auxlib/ip2region.db";
            conf = new Configuration();
            fs = FileSystem.get(URI.create(uri), conf);
            in = fs.open(new Path(uri));
            out = new ByteArrayOutputStream();
            byte[] b = new byte[1024];
            int len;
            // Write only the bytes actually read; writing the whole buffer
            // on every iteration would corrupt the tail of the database
            while ((len = in.read(b)) != -1) {
                out.write(b, 0, len);
            }
            // For performance, read ip2region.db from HDFS once and cache it
            // in the data byte array for reuse, instead of re-reading it per row
            data = out.toByteArray();
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            try {
                if (out != null) {
                    out.close();
                }
                if (in != null) {
                    in.close();
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
    /**
     * Build the searcher over the cached database bytes.
     * @throws UDFArgumentException if the database could not be read
     */
    private static synchronized void constructUDFIp2RegionByOut() throws UDFArgumentException {
        try {
            if (searcher == null) {
                DbConfig config = new DbConfig();
                searcher = new DbSearcher(config, data);
            }
            isNonInit = false;
        } catch (Exception e) {
            throw new UDFArgumentException("Error: read file:" + ConstantsGeoIp.FILE_IP2REGION);
        }
    }
    @Override
    public ObjectInspector initialize(ObjectInspector[] arguments) throws UDFArgumentException {
        if (isNonInit) {
            constructUDFIp2RegionByOut();
            if (fieldNames == null) {
                // Order must match the ip2region region string: country|area|province|city|ISP
                fieldNames = new ArrayList<String>();
                fieldNames.add(ConstantsGeoIp.KEY_COUNTRY_NAME);
                fieldNames.add(ConstantsGeoIp.KEY_REGION_NAME);
                fieldNames.add(ConstantsGeoIp.KEY_PROVINCE_NAME);
                fieldNames.add(ConstantsGeoIp.KEY_CITY_NAME);
                fieldNames.add(ConstantsGeoIp.KEY_ISP_NAME);
            }
            isNonInit = false;
        }
        // Inspector for the single string input argument
        inputOI = PrimitiveObjectInspectorFactory.javaStringObjectInspector;
        // Output type: map<string,string>
        return ObjectInspectorFactory.getStandardMapObjectInspector(
                PrimitiveObjectInspectorFactory.writableStringObjectInspector,
                PrimitiveObjectInspectorFactory.writableStringObjectInspector);
    }
    /**
     * Resolve an IP address into a map of location fields.
     * @param arguments the IP address
     * @return location info as a map
     * @throws HiveException
     */
    @Override
    public Object evaluate(DeferredObject[] arguments) throws HiveException {
        String ip = PrimitiveObjectInspectorUtils.getString(arguments[0].get(), inputOI);
        if (ip != null) {
            ip = ip.trim(); // check for null before trimming to avoid an NPE on null input
        }
        String reString;
        if (ip != null && IpUtils.isIpV4(ip)) {
            try {
                DataBlock dataBlock = searcher.memorySearch(ip);
                reString = dataBlock.getRegion();
            } catch (IOException e) {
                reString = "0|0|0|0|0";
            }
        } else {
            reString = "0|0|0|0|0";
        }
        String[] ipArray = reString.split(ConstantsGeoIp.SEP_IP2REGION);
        Map<Text, Text> reMap = new HashMap<Text, Text>();
        for (int i = 0; i < fieldNames.size(); i++) {
            // ip2region uses "0" for unknown fields; map those to null
            Text t = ipArray[i].equals("0") ? null : new Text(ipArray[i]);
            reMap.put(new Text(fieldNames.get(i)), t);
        }
        return reMap;
    }
    @Override
    public String getDisplayString(String[] arg0) {
        return arg0[0];
    }
}
## Upload ip2region.db to a directory on HDFS
For example, upload it to /warehouse/dd/auxlib/ip2region.db; then adjust the line
String uri = "hdfs:///warehouse/dd/auxlib/ip2region.db"; in UDFIp2Region.java to match your path.
The latest ip2region.db file can be obtained from the ip2region project repository.
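For reference, a typical upload might look like this (the target directory is just the example path used above):

hdfs dfs -mkdir -p /warehouse/dd/auxlib
hdfs dfs -put -f ip2region.db /warehouse/dd/auxlib/

Then build the project: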
cd ${project_home}
mvn clean package -DskipTests
After the command completes, two jars are produced under target/: my-hive-udf-${version}-shaded.jar, which bundles all dependencies, and my-hive-udf-${version}.jar, the minimal compiled jar. Upload the shaded jar to HDFS for the UDF, so that ip2region and the other runtime dependencies are available on the cluster.
-- dd_database_bigdata is the database name, ipgeo the function name
-- CREATE TEMPORARY FUNCTION creates a session-scoped function; note that
-- temporary function names cannot be qualified with a database
create temporary function ipgeo as 'com.my.hive.udf.ipgeo.UDFIp2Region' USING JAR 'hdfs:///warehouse/dd/auxlib/my-hive-udf-1.0.0-shaded.jar';
-- CREATE FUNCTION creates a permanent function
create function dd_database_bigdata.ipgeo as 'com.my.hive.udf.ipgeo.UDFIp2Region' USING JAR 'hdfs:///warehouse/dd/auxlib/my-hive-udf-1.0.0-shaded.jar';
select ipgeo('221.226.1.11'), ipgeo('221.226.1.11')['provinceName'];
## Result
{"cityName":"南京市","countryName":"中国","ispName":"电信","regionName":null,"provinceName":"江苏省"} 江苏省