I implemented a MyBatis interceptor, but we found a problem: executing the interceptor throws a large number of IllegalAccessExceptions, which hurts CPU performance.
Shown below is where the problem lies. Why doesn't it check the field's access permission before executing field.get(target)?
public class GetFieldInvoker implements Invoker {
  private final Field field;

  public GetFieldInvoker(Field field) {
    this.field = field;
  }

  @Override
  public Object invoke(Object target, Object[] args) throws IllegalAccessException {
    try {
      return field.get(target);
    } catch (IllegalAccessException e) {
      if (Reflector.canControlMemberAccessible()) {
        field.setAccessible(true);
        return field.get(target);
      } else {
        throw e;
      }
    }
  }

  @Override
  public Class<?> getType() {
    return field.getType();
  }
}
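What I expected was a pre-check along these lines, so the hot path never throws. This is only a sketch, using Field.canAccess from Java 9+ (which MyBatis may avoid for compatibility with older JVMs), and it assumes instance fields:
// Sketch only: check accessibility up front instead of catching the exception.
// Field.canAccess(Object) is Java 9+; static fields would need canAccess(null).
@Override
public Object invoke(Object target, Object[] args) throws IllegalAccessException {
  if (!field.canAccess(target) && Reflector.canControlMemberAccessible()) {
    field.setAccessible(true);
  }
  return field.get(target);
}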
My interceptor:
@Intercepts({
    @Signature(
        type = StatementHandler.class,
        method = "prepare",
        args = {Connection.class, Integer.class})
})
public class SqlIdInterceptor implements Interceptor {
  private static final int MAX_LEN = 256;
  private final RoomboxLogger logger = RoomboxLogManager.getLogger();

  @Override
  public Object intercept(Invocation invocation) throws Throwable {
    StatementHandler statementHandler = realTarget(invocation.getTarget());
    MetaObject metaObject = SystemMetaObject.forObject(statementHandler);
    BoundSql boundSql = (BoundSql) metaObject.getValue("delegate.boundSql");
    String originalSql = boundSql.getSql();
    MappedStatement mappedStatement =
        (MappedStatement) metaObject.getValue("delegate.mappedStatement");
    String id = mappedStatement.getId();
    if (id != null) {
      int len = id.length();
      if (len > MAX_LEN) {
        logger.warn("too long id", "id", id, "len", len);
      }
    }
    String newSQL = "# " + id + "\n" + originalSql;
    metaObject.setValue("delegate.boundSql.sql", newSQL);
    return invocation.proceed();
  }

  @SuppressWarnings("unchecked")
  public static <T> T realTarget(Object target) {
    if (Proxy.isProxyClass(target.getClass())) {
      MetaObject metaObject = SystemMetaObject.forObject(target);
      return realTarget(metaObject.getValue("h.target"));
    }
    return (T) target;
  }
}
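One workaround I am considering: read the BoundSql through the public API, so the reflective field read (and its exception path) is skipped on the read side. A sketch; the write to delegate.boundSql.sql would still go through MetaObject:
// StatementHandler.getBoundSql() is public API, so no GetFieldInvoker is involved:
StatementHandler handler = realTarget(invocation.getTarget());
BoundSql boundSql = handler.getBoundSql();
String originalSql = boundSql.getSql();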
Flame Graph
I need help: how can I avoid these exceptions being thrown? Is there another way to solve this problem?
Thanks.
I'm planning to switch from Hibernate Search 5.11 to 6, but I can't find a way to build a query-DSL range query on a LocalDateTime. I prefer to use the native Lucene QueryParser. In the previous version I used NumericRangeQuery, because @FieldBridge converted the value to a long.
Here is my previous version's code.
@Entity
...
@NumericField // convert to long value
@FieldBridge(impl = LongLocalDateTimeFieldBridge.class)
@Field(index = Index.YES, analyze = Analyze.NO, store = Store.NO)
private LocalDateTime createDate;
...
Here is my QueryParser:
public class NumericLocalDateRangeQueryParser extends QueryParser {

  private static final Logger logger = LogManager.getLogger();
  private static final Long DEFAULT_DATE = -1L;

  private String f;
  private String dateFormat;

  public NumericLocalDateRangeQueryParser(final String f, final Analyzer a) {
    super(f, a);
    this.f = f;
  }

  public NumericLocalDateRangeQueryParser(final String dateFormat, final String f, Analyzer a) {
    super(f, a);
    this.f = f;
    this.dateFormat = dateFormat;
    logger.debug("date format: {}", () -> dateFormat);
  }

  // if the query text is not a parseable date, fall back (ranges fall back to -1)
  @Override
  protected Query newFieldQuery(Analyzer analyzer, String field, String queryText, boolean quoted) throws ParseException {
    if (f.equals(field)) {
      try {
        return NumericRangeQuery.newLongRange(
            field,
            stringToTime(queryText).toEpochDay(), stringToTime(queryText).toEpochDay(),
            true,
            true
        );
      } catch (final DateTimeParseException ex) {
        return super.newFieldQuery(analyzer, field, queryText, quoted);
      }
    }
    return super.newFieldQuery(analyzer, field, queryText, quoted);
  }

  /**
   * @param field the field used when indexing
   * @param part1 date 1 (e.g. "date 1 to date 2", as a string)
   * @param part2 date 2
   * @param startInclusive
   * @param endInclusive
   * @return
   */
  @Override
  protected Query newRangeQuery(final String field, final String part1, final String part2,
      final boolean startInclusive, final boolean endInclusive) {
    if (f.equals(field)) {
      try {
        return NumericRangeQuery.newLongRange(
            field,
            stringToTime(part1).toEpochDay(), stringToTime(part2).toEpochDay(),
            true,
            true
        );
      } catch (final DateTimeParseException ex) {
        return NumericRangeQuery.newLongRange(field, DEFAULT_DATE, DEFAULT_DATE, true, true);
      }
    } else {
      return super.newRangeQuery(field, part1, part2, startInclusive, endInclusive);
    }
  }

  @Override
  protected org.apache.lucene.search.Query newTermQuery(final Term term) {
    if (term.field().equals(f)) {
      try {
        return NumericRangeQuery.newLongRange(term.field(),
            stringToTime(term.text()).toEpochDay(), stringToTime(term.text()).toEpochDay(), true, true);
      } catch (final DateTimeParseException ex) {
        logger.debug("it's not numeric: {}", () -> ex.getMessage());
        return NumericRangeQuery.newLongRange(field, DEFAULT_DATE, DEFAULT_DATE, true, true);
      }
    } else {
      logger.debug("normal query term");
      return super.newTermQuery(term);
    }
  }

  private LocalDate stringToTime(final String date) throws DateTimeParseException {
    final DateTimeFormatter formatter = DateTimeFormatter.ofPattern(dateFormat);
    return LocalDate.parse(date, formatter);
  }
}
First, on the mapping side, you'll just need this:
@GenericField
private LocalDateTime createDate;
Second, the query. If you really want to write native queries and skip the whole Search DSL, I suppose you have your reasons. Would you mind sharing them in a comment? Maybe it'll give me some ideas for improvements in Hibernate Search.
Regardless, the underlying queries changed a lot between Lucene 5 and 8. You can find how we query long-based fields (such as LocalDateTime) here, and how we convert a LocalDateTime to a long here.
So, something like this should work:
long part1AsLong = stringToTime(part1).toInstant(ZoneOffset.UTC).toEpochMilli();
long part2AsLong = stringToTime(part2).toInstant(ZoneOffset.UTC).toEpochMilli();
Query query = LongPoint.newRangeQuery(field, part1AsLong, part2AsLong);
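For a single date (a term rather than a range), the exact-match counterpart should be analogous; this is a sketch reusing the same epoch-milli conversion as above:
// Exact match on the long-based field (same conversion as above):
Query exact = LongPoint.newExactQuery(field, part1AsLong);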
Alternatively, if you can rely on the Search DSL, you can do this:
SearchSession searchSession = Search.session(entityManager);
SearchScope<MyEntity> scope = searchSession.scope(MyEntity.class);
// Pass the scope to your query parser somehow
MyQueryParser parser = new MyQueryParser(..., scope);
// Then in your parser, do this to create a range query on a `LocalDateTime` field:
LocalDateTime part1AsDateTime = stringToTime(part1);
LocalDateTime part2AsDateTime = stringToTime(part2);
Query query = LuceneMigrationUtils.toLuceneQuery(scope.predicate().range()
.field(field)
.between(part1AsDateTime, part2AsDateTime)
.toPredicate());
Note however that LuceneMigrationUtils is SPI, and as such it could change or be removed in a later version. If you think it's useful, we could expose it as API in a future version, so that it's guaranteed to stay there.
I suspect we could address your problem better by adding something to Hibernate Search, though. Why exactly do you need to rely on a query parser?
Here is my code (modified from the previous version):
public class LongPointLocalDateTimeRangeQueryParser extends QueryParser {

  private static final Logger logger = LogManager.getLogger();
  private static final Long DEFAULT_DATE = -1L;

  private String f;
  private String dateFormat;

  public LongPointLocalDateTimeRangeQueryParser(final String f, final Analyzer a) {
    super(f, a);
    this.f = f;
  }

  public LongPointLocalDateTimeRangeQueryParser(final String dateFormat, final String f, Analyzer a) {
    super(f, a);
    this.f = f;
    this.dateFormat = dateFormat;
  }

  @Override
  protected Query newFieldQuery(Analyzer analyzer, String field, String queryText, boolean quoted) throws ParseException {
    if (f.equals(field)) {
      logger.debug("newFieldQuery, with field: {}, queryText: {}, quoted: {}", () -> f, () -> queryText, () -> quoted);
      try {
        return LongPoint.newRangeQuery(
            field,
            stringToTime(queryText).toInstant(ZoneOffset.UTC).toEpochMilli(),
            stringToTime(queryText).plusDays(1).toInstant(ZoneOffset.UTC).toEpochMilli()
        );
      } catch (final DateTimeParseException ex) {
        logger.debug("it's not a date format, error: {}", () -> ex.getMessage());
        return super.newFieldQuery(analyzer, field, queryText, quoted);
      }
    }
    logger.debug("newFieldQuery, normal, queryText: {}, quoted: {}", () -> queryText, () -> quoted);
    return super.newFieldQuery(analyzer, field, queryText, quoted);
  }

  /**
   * @param field the field used when indexing
   * @param part1 date 1, as a string
   * @param part2 date 2
   * @param startInclusive
   * @param endInclusive
   * @return
   */
  @Override
  protected Query newRangeQuery(final String field, final String part1, final String part2,
      final boolean startInclusive, final boolean endInclusive) {
    if (f.equals(field)) {
      try {
        logger.debug("date 1: {}, str: {}", () -> stringToTime(part1).toInstant(ZoneOffset.UTC).toEpochMilli(), () -> part1);
        logger.debug("date 2: {}, str: {}", () -> stringToTime(part2).plusDays(1).toInstant(ZoneOffset.UTC).toEpochMilli(), () -> part2);
        return LongPoint.newRangeQuery(
            field,
            stringToTime(part1).toInstant(ZoneOffset.UTC).toEpochMilli(),
            stringToTime(part2).plusDays(1).toInstant(ZoneOffset.UTC).toEpochMilli()
        );
      } catch (final DateTimeParseException ex) {
        logger.debug("it's not a date format, error: {}", () -> ex.getMessage());
        return LongPoint.newRangeQuery(field, DEFAULT_DATE, DEFAULT_DATE);
      }
    } else {
      logger.debug("normal query range");
      return super.newRangeQuery(field, part1, part2, startInclusive, endInclusive);
    }
  }

  private LocalDateTime stringToTime(final String date) throws DateTimeParseException {
    // ... same as previously posted
  }
}
And this is where the query parser is used:
//...
final SearchQuery<POSProcessInventory> result = searchSession.search(POSProcessInventory.class)
    .extension(LuceneExtension.get())
    .where(f -> f.bool(b -> {
      b.must(f.bool(b1 -> {
        //...
        try {
          if (searchWord.contains("createDate:")) {
            logger.info("doing queryParser for LocalDateTime: {}", () -> searchWord);
            b1.should(f.fromLuceneQuery(queryLocalDateTime("createDate", searchWord)));
          }
        } catch (ParseException ex) {
          logger.error("#3 this is not localDateTime");
        }
And here is the other method:
private org.apache.lucene.search.Query queryLocalDateTime(final String field, final String dateTime)
    throws org.apache.lucene.queryparser.classic.ParseException {
  final LongPointLocalDateTimeRangeQueryParser createDateQ =
      new LongPointLocalDateTimeRangeQueryParser(accessCompanyInfo.getDateTimeFormat().substring(0, 10), field, new KeywordAnalyzer());
  createDateQ.setAllowLeadingWildcard(false);
  final org.apache.lucene.search.Query queryLocalDate = createDateQ.parse(dateTime);
  logger.debug("query field: {} query Str: {}", () -> field, () -> queryLocalDate);
  return queryLocalDate;
}
I have an enum defined in a Scala class as follows:
// define compression types as an enumeration
object CompressionType extends Enumeration {
  type CompressionType = Value
  val None, Gzip, Snappy, Lz4, Zstd = Value
}
and I have a class that I want to serialize to JSON:
case class ProducerConfig(batchNumMessages: Int, lingerMs: Int, messageSize: Int,
                          topic: String, compressionType: CompressionType.Value)
That class includes the enum object. It seems that serializing it with Gson causes a StackOverflowError due to some circular dependency.
val gson = new Gson
val jsonBody = gson.toJson(producerConfig)
println(jsonBody)
The stack trace I get is below. I saw this question and its answer, but the solution seems to be a Java one and didn't work for Scala. Can someone clarify?
17:10:04.475 [ERROR] i.g.a.Gatling$ - Run crashed
java.lang.StackOverflowError: null
at com.google.gson.stream.JsonWriter.beforeName(JsonWriter.java:617)
at com.google.gson.stream.JsonWriter.writeDeferredName(JsonWriter.java:400)
at com.google.gson.stream.JsonWriter.value(JsonWriter.java:526)
at com.google.gson.internal.bind.TypeAdapters$7.write(TypeAdapters.java:233)
at com.google.gson.internal.bind.TypeAdapters$7.write(TypeAdapters.java:218)
at com.google.gson.internal.bind.TypeAdapterRuntimeTypeWrapper.write(TypeAdapterRuntimeTypeWrapper.java:69)
at com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$1.write(ReflectiveTypeAdapterFactory.java:127)
at com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$Adapter.write(ReflectiveTypeAdapterFactory.java:245)
at com.google.gson.internal.bind.TypeAdapterRuntimeTypeWrapper.write(TypeAdapterRuntimeTypeWrapper.java:69)
at com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$1.write(ReflectiveTypeAdapterFactory.java:127)
at com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$Adapter.write(ReflectiveTypeAdapterFactory.java:245)
... (the last three frames repeat until the stack overflows)
I'm not a Scala guy, but I think Gson is the wrong tool to use here.
Firstly, Gson is not aware of scala.Enumeration and therefore handles it as a regular data bag that it traverses using reflection.
Secondly, there is no easy (if any) way of deserializing back to the original value state (this can be ignored if you're only going to produce, not consume, JSON documents).
Here is why:
object Single extends Enumeration {
  val Only = Value
}
final class Internals {

  private Internals() {
  }

  static void inspect(final Object o, final Excluder excluder, final boolean serialize)
      throws IllegalAccessException {
    inspect(o, clazz -> !excluder.excludeClass(clazz, serialize), field -> !excluder.excludeField(field, serialize));
  }

  static void inspect(final Object o, final Predicate<? super Class<?>> inspectClass, final Predicate<? super Field> inspectField)
      throws IllegalAccessException {
    for ( Class<?> c = o.getClass(); c != null; c = c.getSuperclass() ) {
      if ( !inspectClass.test(c) ) {
        continue;
      }
      System.out.println(c);
      for ( final Field f : c.getDeclaredFields() ) {
        if ( !inspectField.test(f) ) {
          continue;
        }
        f.setAccessible(true);
        System.out.printf("\t%s: %s\n", f, f.get(o));
      }
    }
  }
}
final Object value = Single.Only();
Internals.inspect(value, gson.excluder(), true);
produces:
class scala.Enumeration$Val
private final int scala.Enumeration$Val.i: 0
private final java.lang.String scala.Enumeration$Val.name: null
class scala.Enumeration$Value
private final scala.Enumeration scala.Enumeration$Value.scala$Enumeration$$outerEnum: Single
class java.lang.Object
As you can see, there are two crucial fields:
private final java.lang.String scala.Enumeration$Val.name gives null unless the value is explicitly named (the element name can still be obtained via toString, though).
private final scala.Enumeration scala.Enumeration$Value.scala$Enumeration$$outerEnum is a reference to the enclosing enumeration object (this is the actual cause of the infinite recursion and hence the StackOverflowError).
These two prevent proper deserialization.
The outer enum type can be obtained in at least two ways:
either implement custom type adapters for every type that can contain such enumerations (pretty easy for data bags (case classes in Scala?) whose fields already carry the type information, although Gson's support for this is poor; it won't work for single primitive literals like the one above, or for collections);
or bake the outer enumeration name into the JSON, holding two entries: the name and the outer type.
The latter could be done like this (in Java; hopefully it's easy to simplify in Scala):
final class ScalaStuff {

  private static final Field outerEnumField;
  private static final Map<String, Method> withNameMethodCache = new ConcurrentHashMap<>();

  static {
    try {
      outerEnumField = Enumeration.Value.class.getDeclaredField("scala$Enumeration$$outerEnum");
      outerEnumField.setAccessible(true);
    } catch ( final NoSuchFieldException ex ) {
      throw new RuntimeException(ex);
    }
  }

  private ScalaStuff() {
  }

  @Nonnull
  static String toEnumerationName(@Nonnull final Enumeration.Value value) {
    try {
      final Class<? extends Enumeration> aClass = ((Enumeration) outerEnumField.get(value)).getClass();
      final String typeName = aClass.getTypeName();
      final int length = typeName.length();
      assert !typeName.isEmpty() && typeName.charAt(length - 1) == '$';
      return typeName.substring(0, length - 1);
    } catch ( final IllegalAccessException ex ) {
      throw new RuntimeException(ex);
    }
  }

  @Nonnull
  static Enumeration.Value fromEnumerationValue(@Nonnull final String type, @Nonnull final String enumerationName)
      throws ClassNotFoundException, NoSuchMethodException {
    // using get for exception propagation cleanliness; computeIfAbsent would complicate exception handling
    @Nullable
    final Method withNameMethodCandidate = withNameMethodCache.get(type);
    final Method withNameMethod;
    if ( withNameMethodCandidate != null ) {
      withNameMethod = withNameMethodCandidate;
    } else {
      final Class<?> enumerationClass = Class.forName(type);
      withNameMethod = enumerationClass.getMethod("withName", String.class);
      withNameMethodCache.put(type, withNameMethod);
    }
    try {
      return (Enumeration.Value) withNameMethod.invoke(null, enumerationName);
    } catch ( final IllegalAccessException | InvocationTargetException ex ) {
      throw new RuntimeException(ex);
    }
  }
}
final class ScalaEnumerationTypeAdapterFactory
    implements TypeAdapterFactory {

  private static final TypeAdapterFactory instance = new ScalaEnumerationTypeAdapterFactory();

  private ScalaEnumerationTypeAdapterFactory() {
  }

  static TypeAdapterFactory getInstance() {
    return instance;
  }

  @Override
  @Nullable
  public <T> TypeAdapter<T> create(final Gson gson, final TypeToken<T> typeToken) {
    if ( !Enumeration.Value.class.isAssignableFrom(typeToken.getRawType()) ) {
      return null;
    }
    @SuppressWarnings("unchecked")
    final TypeAdapter<T> typeAdapter = (TypeAdapter<T>) Adapter.instance;
    return typeAdapter;
  }

  private static final class Adapter
      extends TypeAdapter<Enumeration.Value> {

    private static final TypeAdapter<Enumeration.Value> instance = new Adapter()
        .nullSafe();

    private Adapter() {
    }

    @Override
    public void write(final JsonWriter out, final Enumeration.Value value)
        throws IOException {
      out.beginObject();
      out.name("type");
      out.value(ScalaStuff.toEnumerationName(value));
      out.name("name");
      out.value(value.toString());
      out.endObject();
    }

    @Override
    public Enumeration.Value read(final JsonReader in)
        throws IOException {
      in.beginObject();
      @Nullable
      String type = null;
      @Nullable
      String name = null;
      while ( in.hasNext() ) {
        switch ( in.nextName() ) {
        case "type":
          type = in.nextString();
          break;
        case "name":
          name = in.nextString();
          break;
        default:
          in.skipValue();
          break;
        }
      }
      in.endObject();
      if ( type == null || name == null ) {
        throw new JsonParseException("Insufficient enum data: " + type + ", " + name);
      }
      try {
        return ScalaStuff.fromEnumerationValue(type, name);
      } catch ( final ClassNotFoundException | NoSuchMethodException ex ) {
        throw new JsonParseException(ex);
      }
    }
  }
}
The following JUnit 5 test will pass:
private static final Gson gson = new GsonBuilder()
    .disableHtmlEscaping()
    .registerTypeAdapterFactory(ScalaEnumerationTypeAdapterFactory.getInstance())
    .create();

@Test
public void test() {
  final Enumeration.Value before = Single.Only();
  final String json = gson.toJson(before);
  System.out.println(json);
  final Enumeration.Value after = gson.fromJson(json, Enumeration.Value.class);
  Assertions.assertSame(before, after);
}
where the json variable would hold the following JSON payload:
{"type":"Single","name":"Only"}
The ScalaStuff class above is most likely not complete. See "how to deserialize a json string that contains ## with scala" for more on the Scala and Gson implications.
Update 1
Since you don't need to consume the produced JSON documents, and assuming the JSON consumers can deal with the enumeration deserialization themselves, you can produce the enumeration value name, which is more descriptive than a nameless int. Just replace the Adapter above:
private static final class Adapter
    extends TypeAdapter<Enumeration.Value> {

  private static final TypeAdapter<Enumeration.Value> instance = new Adapter()
      .nullSafe();

  private Adapter() {
  }

  @Override
  public void write(final JsonWriter out, final Enumeration.Value value)
      throws IOException {
    out.value(value.toString());
  }

  @Override
  public Enumeration.Value read(final JsonReader in) {
    throw new UnsupportedOperationException();
  }
}
Then the following test will be green:
Assertions.assertEquals("\"Only\"", gson.toJson(Single.Only()));
I am having trouble figuring out how to pass a parameter to a query. In this case, I want to get a user by the name field. I know I can pass an id, but how do I pass another field? Do I need to create a secondary index?
private AWSAppSyncClient mAWSAppSyncClient;

mAWSAppSyncClient.query(GetUserQuery.builder().build())
    .responseFetcher(AppSyncResponseFetchers.CACHE_AND_NETWORK)
    .enqueue(userCallback);
query GetUser($id: ID!) {
  getUser(id: $id) {
    id
    userId
    name
    ...
  }
}
Use ListUsers (ListXXXX) to query with multiple parameters.
Example: querying by name = "aaa":
ModelStringFilterInput modelStringFilterInput = ModelStringFilterInput.builder().eq("aaa").build();
ModelUserFilterInput modelUserFilterInput = ModelUserFilterInput.builder().name(modelStringFilterInput).build();

mAWSAppSyncClient.query(ListUsersQuery.builder().filter(modelUserFilterInput).build())
    .responseFetcher(AppSyncResponseFetchers.CACHE_AND_NETWORK)
    .enqueue(userCallback);

private GraphQLCall.Callback<ListUsersQuery.Data> userCallback = new GraphQLCall.Callback<ListUsersQuery.Data>() {
  @Override
  public void onResponse(@Nonnull Response<ListUsersQuery.Data> response) {
    Log.d(TAG, response.data().listUsers().items().toString());
  }

  @Override
  public void onFailure(@Nonnull ApolloException e) {
    Log.d(TAG, e.toString());
  }
};
I went to upgrade to Retrofit 2.0 and ran into this weird problem.
I have a method to log a user in:
public interface ApiInterface {
  @Multipart
  @POST("user/login/")
  Call<SessionToken> userLogin(@Part("username") String username, @Part("password") String password);
}
When I look at the key/value POST params on the server side, they print like this:
username : "brian"
password : "password"
With the same method using Retrofit 1.9, the key/value pairs look like:
username : brian
password : password
It's adding literal quotes to the POST variables.
If I use any other REST client, the variables print the second way, without the quotes.
Here is how I build the Retrofit instance, with an interceptor:
OkHttpClient client = new OkHttpClient();
client.interceptors().add(new Interceptor() {
  @Override
  public Response intercept(Chain chain) throws IOException {
    Request original = chain.request();

    // Customize the request
    Request request = original.newBuilder()
        .header("Accept", "application/json")
        .header("Authorization", myPrefs.accessToken().getOr(""))
        .method(original.method(), original.body())
        .build();

    Response response = chain.proceed(request);

    // Customize or return the response
    return response;
  }
});

Ok2Curl.set(client);

Retrofit retrofit = new Retrofit.Builder()
    .baseUrl(apiEndpoint)
    .addConverterFactory(GsonConverterFactory.create())
    .client(client)
    .build();
I imagine I'm doing something wrong with the converter, but I'm not sure what.
Has anyone else run into this problem yet? I know it's in beta, but it's pretty widely used.
This is because it's running through the JSON converter.
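You can see this with Gson directly: a bare String is serialized as a JSON string literal, so the quotes end up in the part body. A minimal demonstration:
// Gson serializes a bare String as a JSON string literal, quotes included:
Gson gson = new Gson();
System.out.println(gson.toJson("brian")); // prints: "brian" (with the quotes)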
Solution 1:
Use RequestBody instead of String:
public interface ApiInterface {
  @Multipart
  @POST("user/login/")
  Call<SessionToken> userLogin(@Part("username") RequestBody username, @Part("password") RequestBody password);
}
Build the RequestBody instances:
RequestBody usernameBody = RequestBody.create(MediaType.parse("text/plain"), usernameStr);
RequestBody passwordBody = RequestBody.create(MediaType.parse("text/plain"), passwordStr);
Launch the network operation:
retrofit.create(ApiInterface.class).userLogin(usernameBody, passwordBody).enqueue(...);
Solution 2: create a custom Converter.Factory to handle String part values.
For the Retrofit 2 final release, not the beta (com.squareup.retrofit2:retrofit:2.0.0).
Create your StringConverterFactory:
public class StringConverterFactory extends Converter.Factory {
  private static final MediaType MEDIA_TYPE = MediaType.parse("text/plain");

  public static StringConverterFactory create() {
    return new StringConverterFactory();
  }

  @Override
  public Converter<ResponseBody, ?> responseBodyConverter(Type type, Annotation[] annotations, Retrofit retrofit) {
    if (String.class.equals(type)) {
      return new Converter<ResponseBody, String>() {
        @Override
        public String convert(ResponseBody value) throws IOException {
          return value.string();
        }
      };
    }
    return null;
  }

  @Override
  public Converter<?, RequestBody> requestBodyConverter(Type type, Annotation[] parameterAnnotations, Annotation[] methodAnnotations, Retrofit retrofit) {
    if (String.class.equals(type)) {
      return new Converter<String, RequestBody>() {
        @Override
        public RequestBody convert(String value) throws IOException {
          return RequestBody.create(MEDIA_TYPE, value);
        }
      };
    }
    return null;
  }
}
Add it to your Retrofit instance:
retrofit = new Retrofit.Builder()
    .baseUrl(SERVER_URL)
    .client(client)
    .addConverterFactory(StringConverterFactory.create())
    .addConverterFactory(GsonConverterFactory.create())
    .addCallAdapterFactory(RxJavaCallAdapterFactory.create())
    .build();
Attention: StringConverterFactory must be added before GsonConverterFactory!
Then you can use String as a part value directly.
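For example, a minimal sketch (loginCallback is an assumed Callback<SessionToken>):
// With StringConverterFactory registered before GsonConverterFactory, the original
// String-based interface works unchanged and parts are sent as plain text:
retrofit.create(ApiInterface.class)
    .userLogin("brian", "password")
    .enqueue(loginCallback);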
You can find more information about this issue at https://github.com/square/retrofit/issues/1210
I had the same problem, and here is how I solved it:
1) Add to build.gradle:
compile 'com.squareup.retrofit2:converter-scalars:2.1.0' // remember to use the same version as your other Retrofit dependencies
2) Add one line here:
Retrofit retrofit = new Retrofit.Builder()
    .baseUrl(URL_BASE)
    .addConverterFactory(ScalarsConverterFactory.create()) // this line
    .addConverterFactory(GsonConverterFactory.create(gson))
    .client(getUnsafeOkHttpClient())
    .build();
What about doing it this way?
RequestBody caption = RequestBody.create(MediaType.parse("text/plain"), new String("caption"));
Here is how to resolve it. Firstly:
return new Retrofit.Builder()
    .baseUrl(Env.GetApiBaseUrl())
    .addConverterFactory(new GsonStringConverterFactory())
    .addConverterFactory(GsonConverterFactory.create(gson))
    .client(getHttpClient())
    .build();
Then create a custom converter like this one. It is needed by Retrofit 2, unless someone fixes the "feature" added in v2:
public class GsonStringConverterFactory extends Converter.Factory {
  private static final MediaType MEDIA_TYPE = MediaType.parse("text/plain");

  @Override
  public Converter<?, RequestBody> toRequestBody(Type type, Annotation[] annotations) {
    if (String.class.equals(type)) // || (type instanceof Class && ((Class<?>) type).isEnum())
    {
      return new Converter<String, RequestBody>() {
        @Override
        public RequestBody convert(String value) throws IOException {
          return RequestBody.create(MEDIA_TYPE, value);
        }
      };
    }
    return null;
  }
}
I've found another solution besides those; it worked with Retrofit 2.1.0. (The Rx adapter is optional here.)
My Retrofit interface looks like this:
@POST("/children/add")
Observable<Child> addChild(@Body RequestBody requestBody);
And in my ApiManager I use it like this:
@Override
public Observable<Child> addChild(String firstName, String lastName, Long birthDate, @Nullable File passportPicture) {
  MultipartBody.Builder builder = new MultipartBody.Builder()
      .setType(MultipartBody.FORM)
      .addFormDataPart("first_name", firstName)
      .addFormDataPart("last_name", lastName)
      .addFormDataPart("birth_date", birthDate + "");

  // some nullable, optional parameter
  if (passportPicture != null) {
    builder.addFormDataPart("certificate", passportPicture.getName(), RequestBody.create(MediaType.parse("image/*"), passportPicture));
  }

  return api.addChild(builder.build());
}
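A hypothetical call site, just to show the shape of the call (apiManager and the log tag are assumptions):
// Hypothetical call site; RxJava subscription with onNext and onError:
apiManager.addChild("Jane", "Doe", 946684800000L, null)
    .subscribe(
        child -> Log.d("API", "created child: " + child),
        error -> Log.e("API", "addChild failed", error));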
It is similar to Solution 1 from Loyea, but I think it's a little bit more elegant.
If your UI is showing your responses with quotes, you can use getAsString instead of toString.
I don't know if it is too late, but we can also send requests with RequestBody.
Example:
public interface ApiInterface {
  @Multipart
  @POST("user/login/")
  Call<SessionToken> userLogin(@Part("username") String username, @Part("password") String password);
}
We can convert it as below:
public interface ApiInterface {
  @Multipart
  @POST("user/login/")
  Call<SessionToken> userLogin(@Part("username") RequestBody username, @Part("password") RequestBody password);
}