Spark on Docker: Logging to Both Console and File

In the previous post, 《Docker部署Spark大数据组件》 (deploying Spark big-data components with Docker), logs were written only to the console. Sending them to a file as well requires some additional configuration.

Configuring output to both console and file

1. Stop the Spark cluster

docker-compose down -v

2. Copy the bundled log4j2 configuration template

cp -f log4j2.properties.template log4j2.properties
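The template ships in Spark's conf directory, so the copy is normally run from there. A minimal sketch, assuming $SPARK_HOME points at the Spark installation (the original does not state the working directory):

cd $SPARK_HOME/conf
cp -f log4j2.properties.template log4j2.properties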

Edit log4j2.properties and make the changes below. Note, however, that with this configuration the log never rotates: output accumulates in spark.log indefinitely.

# Set everything to be logged to the console and file
...
rootLogger.appenderRef.file.ref = file

# File appender
appender.file.type = File
appender.file.name = file
appender.file.fileName = spark.log
appender.file.layout.type = PatternLayout
appender.file.layout.pattern = %d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n%ex

3. Enable log rotation

Change

rootLogger.appenderRef.file.ref = file

to

rootLogger.appenderRef.rolling.ref = rolling

Then delete everything under # File appender and add the following instead:

# RollingFile appender
appender.rolling.type = RollingFile
appender.rolling.name = rolling
appender.rolling.fileName = logs/spark.log
appender.rolling.filePattern = logs/spark-%d{yyyy-MM-dd}.log
appender.rolling.layout.type = PatternLayout
appender.rolling.layout.pattern = %d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n%ex
appender.rolling.policies.type = Policies
appender.rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.rolling.policies.time.interval = 1
appender.rolling.policies.time.modulate = true
appender.rolling.strategy.type = DefaultRolloverStrategy
appender.rolling.strategy.max = 30
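Here, TimeBasedTriggeringPolicy with interval = 1 rolls the log once per unit of the date pattern (one day, given %d{yyyy-MM-dd}), and modulate = true aligns the rollover to the day boundary instead of the process start time. After a few days the logs directory should look roughly like this (dates purely illustrative):

logs/spark.log               # the file currently being written
logs/spark-2024-05-02.log    # rolled over at the end of 2024-05-02
logs/spark-2024-05-01.log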

Alternatively, you can use the complete template below directly:

cat >log4j2.properties <<'EOF'
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Set everything to be logged to the console and rolling file
rootLogger.level = info
rootLogger.appenderRef.stdout.ref = console
rootLogger.appenderRef.rolling.ref = rolling

# Console appender
appender.console.type = Console
appender.console.name = console
appender.console.target = SYSTEM_ERR
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = %d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n%ex

# RollingFile appender
appender.rolling.type = RollingFile
appender.rolling.name = rolling
appender.rolling.fileName = logs/spark.log
appender.rolling.filePattern = logs/spark-%d{yyyy-MM-dd}.log
appender.rolling.layout.type = PatternLayout
appender.rolling.layout.pattern = %d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n%ex
appender.rolling.policies.type = Policies
appender.rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.rolling.policies.time.interval = 1
appender.rolling.policies.time.modulate = true
appender.rolling.strategy.type = DefaultRolloverStrategy
appender.rolling.strategy.max = 30

# Set the default spark-shell/spark-sql log level to WARN. When running the
# spark-shell/spark-sql, the log level for these classes is used to overwrite
# the root logger's log level, so that the user can have different defaults
# for the shell and regular Spark apps.
logger.repl.name = org.apache.spark.repl.Main
logger.repl.level = warn

logger.thriftserver.name = org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver
logger.thriftserver.level = warn

# Settings to quiet third party logs that are too verbose
logger.jetty1.name = org.sparkproject.jetty
logger.jetty1.level = warn
logger.jetty2.name = org.sparkproject.jetty.util.component.AbstractLifeCycle
logger.jetty2.level = error
logger.replexprTyper.name = org.apache.spark.repl.SparkIMain$exprTyper
logger.replexprTyper.level = info
logger.replSparkILoopInterpreter.name = org.apache.spark.repl.SparkILoop$SparkILoopInterpreter
logger.replSparkILoopInterpreter.level = info
logger.parquet1.name = org.apache.parquet
logger.parquet1.level = error
logger.parquet2.name = parquet
logger.parquet2.level = error

# SPARK-9183: Settings to avoid annoying messages when looking up nonexistent UDFs in SparkSQL with Hive support
logger.RetryingHMSHandler.name = org.apache.hadoop.hive.metastore.RetryingHMSHandler
logger.RetryingHMSHandler.level = fatal
logger.FunctionRegistry.name = org.apache.hadoop.hive.ql.exec.FunctionRegistry
logger.FunctionRegistry.level = error

# For deploying Spark ThriftServer
# SPARK-34128: Suppress undesirable TTransportException warnings involved in THRIFT-4805
appender.console.filter.1.type = RegexFilter
appender.console.filter.1.regex = .*Thrift error occurred during processing of message.*
appender.console.filter.1.onMatch = deny
appender.console.filter.1.onMismatch = neutral
EOF
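For the containers to pick up the new configuration, the edited file has to land in Spark's conf directory inside each container. One way, sketched under assumptions (the service name and container path below are hypothetical; the actual compose file is in the previous article), is a bind mount in docker-compose.yml:

services:
  spark-master:    # hypothetical service name; match your compose file
    volumes:
      # overlay the edited config onto the image's conf directory
      - ./log4j2.properties:/opt/spark/conf/log4j2.properties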

Verify the change took effect

1. Start the Spark cluster

2. Check the log file (a sketch of both steps follows)
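A minimal sketch of the verification, assuming the compose service is named spark-master and Spark lives under /opt/spark (both names are assumptions; adjust to your setup). Note that the relative fileName = logs/spark.log resolves against the working directory of the Spark process, so the exact location depends on how the daemon is launched:

docker-compose up -d

# confirm the rolling log file exists and is growing (path is an assumption)
docker-compose exec spark-master tail -f /opt/spark/logs/spark.log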

